In my experience, discussions of AI and how it might be useful are either so general or so technical as to be useless. In this post, I will provide some historical and situational context to lay a foundation for real-world discussions of AI and how it might apply to your needs.
To understand how AI is likely to impact us, it is important to understand more generally the evolution of computer assistance in human activity. I use the word “assistance” very deliberately because that is what computers have done and will continue to do. Like other mechanical tools, they amplify our ability to do things that challenge our capabilities, much as a telescope (observing objects at a distance) or a crane (lifting heavy items) does. The computer has developed over four main stages of assistance:
- Calculating
- Processing Transactions and Storing Information
- Facilitating Collaboration
- Identifying and Using Patterns
The first programmable “general purpose” electronic computer was called ENIAC. It was designed to calculate artillery range tables. Computers still do calculations; think of the ubiquitous spreadsheet or the mortgage payment calculator you can find online.
The second generation of general-purpose computers (technically, the software on them) was designed to process transactions and store (and retrieve) information. Most modern businesses (like AT&T, Best Buy, or Amazon) could not have scaled without computers to track customers and their orders, produce bills, and process payments at volumes that would otherwise have been simply impossible. I note that the World Wide Web (an early iteration of the Internet) and the Internet of Things (IoT) (the latest iteration) are examples of this kind of processing.
The third generation of computers (more correctly, the software on them, powered by increasingly powerful hardware) has developed to facilitate collaboration. This class of assistance includes a variety of implementations:
- Cell phones, which, even when used strictly for voice, are computer-based devices;
- Internet-enabled applications—everything from Skype (video and voice communications) to email to Textura’s CPM product;
- Productivity software like Microsoft Office or Google’s web-based equivalents of Word, Excel, and PowerPoint.
These all facilitate collaboration between individuals and improve productivity. Some, like Skype or Oracle’s Textura CPM application, do it by reducing the friction associated with business activities like meetings or making and receiving payments. Others, like cell phones, promote efficiency by making communication, and the collaboration that goes with it, more accessible.
The fourth generation is powered by pattern matching: identifying potentially fraudulent transactions in a database of credit card transactions, finding faces in bitmaps (pictures), or reacting to stimuli (sensor data) collected by an autonomous vehicle and used to navigate it safely through traffic from point A to point B.
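To make the fraud example concrete, here is a minimal sketch of this kind of pattern matching, using scikit-learn's IsolationForest as a stand-in for whatever model a card issuer actually runs. The features and numbers are invented for illustration:

```python
# Toy fraud screening: learn what "normal" transactions look like, then
# flag new ones that do not fit the pattern. All data here is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend features per transaction: amount ($), hour of day, miles from home.
normal = np.column_stack([
    rng.normal(60, 25, 1000),   # typical purchase amounts
    rng.normal(14, 4, 1000),    # mostly daytime activity
    rng.normal(5, 3, 1000),     # close to home
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# Score new transactions; -1 flags a candidate anomaly for human review.
candidates = np.array([
    [55.0, 13.0, 4.0],    # looks routine
    [950.0, 3.0, 400.0],  # large, 3 a.m., far from home
])
print(model.predict(candidates))  # expect something like [ 1 -1 ]
```

The machine surfaces the suspect pattern; a person (or a downstream rule) decides what to do about it.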
I hope it is clear that the pattern matching example is where AI comes into its own. Most definitions of AI applications include:
- Perception
- Reasoning
- Problem Solving
- Learning
- Natural Language Processing
This list has changed over time, and the items above are generally compound processes that combine “primitive” AI functions like pattern matching with other, possibly procedural, processing.
More specifically, these activities become useful when understood in the context of an OODA (Observe, Orient, Decide, Act—https://en.wikipedia.org/wiki/OODA_loop) Loop. The OODA Loop is a concept developed by US Air Force Colonel John Boyd to give pilots a framework for making high-quality decisions. Each of the steps in the OODA Loop can be defined in the context of pattern matching:
- Observe—to take action, the first thing one needs to do is detect and identify the objects to be dealt with, whether other vehicles on the road or incoming rockets. This is fundamentally a pattern matching exercise.
- Orient—it is important to set the context for a decision. An AI engine trying to determine whether a transaction is fraudulent should check whether it matches criteria that, in the past, indicated fraud. It turns out that reasonably small purchases in big box stores in rural areas fit that pattern and trigger the credit card vendor to put a hold on the card and verify that it is legitimate.
- Decide—given the context provided by the orient stage, what options are available (and allowable) to deal with the situation, and which one results in the best outcome? Another pattern matching exercise.
- Act—this is the only step that I would argue is best handled by the old if/then/else logic like that used in transaction processing. But no one says different styles of computing cannot mix to generate the desired outcome, as the sketch below illustrates.
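Here is a minimal, self-contained sketch of that mix, assuming a hypothetical fraud-screening loop. The risk function is a toy stand-in for a trained pattern-matching model, and the transactions are invented; only the act step uses plain if/then/else logic:

```python
# A toy OODA loop for card transactions. Observe and orient are pattern
# matching; decide picks among allowable options; act is procedural logic.

def fraud_risk(txn):
    """Orient: stand-in for a model trained on past fraud patterns."""
    risky = txn["amount"] > 500 and txn["miles_from_home"] > 100
    return 0.95 if risky else 0.05

def decide(risk, threshold=0.9):
    """Decide: choose the best allowable option given the context."""
    return "hold" if risk >= threshold else "approve"

def act(decision, txn):
    """Act: classic transaction-processing style if/then/else."""
    if decision == "hold":
        print(f"Hold card, verify txn {txn['id']} with the cardholder")
    else:
        print(f"Post txn {txn['id']}")

# Observe: in production this would be a live transaction stream.
for txn in [
    {"id": 1, "amount": 42.50, "miles_from_home": 3},
    {"id": 2, "amount": 899.00, "miles_from_home": 410},
]:
    act(decide(fraud_risk(txn)), txn)
```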
I would argue that, especially in the short to medium term, AI is going to excel at observe (especially where large quantities of data are involved) and orient (where quick results are valuable); these will be the real standouts in the AI world. There are some cases, like fraud detection, where the machine can’t be beat. There are two reasons for this:
- There is a lot of data that can be used to train a machine to detect the patterns that indicate fraud. So, the machine can be “smart” about what conditions suggest fraud; and
- Credit card processing is a high velocity, high volume environment. Lots of decisions need to be made quickly, something computers do well.
But, there is another very important role for AI to play in pattern matching. Business leaders often have hunches or intuitions. These intuitions can be tested, and the hunch can be either validated or disproved. I did this at Textura when I proposed that Textura data could be a good predictor not just of construction starts, but of the U.S. economy as a whole. I approached the Chicago Federal Reserve Bank and asked if they would be interested in testing this hypothesis. They were, and they did. The results are summarized in a paper: “Using Private Sector “Big Data” as an Economic Indicator: The Case of Construction Spending” by Daniel Aaronson, Scott Brave and Ross Cole (https://www.chicagofed.org/publications/chicago-fed-letter/2016/366). This kind of work has been done for years.
With AI, you can use the technology to sort through large amounts of data looking for patterns that might not be intuitive or obvious. Once uncovered, a human can review and assess the viability of each pattern as a source of revenue. Suppose we have performance data for a large consulting practice, along with metrics we use to measure success. But we don’t really understand what drives success. We have always assumed it was the partner running the engagement, and that might generally be the case. But we can use AI to see if other factors correlate with success. For instance, a review of the data might show that the number of years of experience of the Senior Manager on the project correlates with success. The point is that there are so many variables you might test that it is not practical to test them all. With AI you don’t have to, because the technology can surface a small number of candidate patterns, and you can examine them to see if they suggest strategies that can be monetized.
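As an illustration, here is a minimal sketch of that kind of pattern hunt on hypothetical engagement data. The column names and numbers are invented; the point is that the model ranks many candidate variables so a human only has to vet the top few:

```python
# Rank candidate drivers of engagement success with a random forest.
# The dataset is fabricated for illustration; a real one would have
# hundreds of rows and far more columns.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.DataFrame({
    "partner_tenure_yrs": [12, 8, 20, 15, 6, 18, 9, 11],
    "sr_mgr_experience":  [3, 7, 10, 9, 2, 8, 4, 6],
    "team_size":          [5, 12, 8, 6, 15, 7, 10, 9],
    "repeat_client":      [0, 1, 1, 1, 0, 1, 0, 1],
    "success_score":      [61, 78, 92, 88, 55, 85, 63, 74],
})

X = df.drop(columns="success_score")
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, df["success_score"])

# Surface candidate patterns, highest importance first, for human review.
for name, score in sorted(zip(X.columns, model.feature_importances_),
                          key=lambda p: -p[1]):
    print(f"{name}: {score:.2f}")
```

If sr_mgr_experience ranks high, that is a candidate pattern, not a conclusion; a person still has to test whether it holds up and whether it can be monetized.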
What I am suggesting is that AI is another very powerful tool that can be applied to identify patterns in data that can be used to make business decisions. This is:
- Very different from the self-driving car or virtual assistant examples of AI—which I would observe conform to the OODA Loop paradigm; and
- An extension of the discipline of data science (using powerful AI tools) which I would suggest hasn’t generally lived up to its potential and should: 1) be taken seriously; and 2) incorporate AI into its toolkit as quickly as possible.
I close by noting that each generation of automation has promised to replace workers. That hasn’t happened, and it won’t this time around, because there are things that humans do that machines really can’t. Just ask Elon Musk.
In situations where you carefully pair the power of the machine with the unique capabilities of the person, you have a powerful combination. That is why I started this post by suggesting that AI, like the computer software paradigms that preceded it, will assist rather than replace humans. This does not mean that there won’t be dislocation. AI will:
- Make its users more productive and reduce the number of people required to do a job. The equity desk at Goldman Sachs went from over 600 traders to just 3 today;
- Create other opportunities. Goldman Sachs now employs 200 computer engineers to support the applications that underpin equity trading (MIT Technology Review: https://www.technologyreview.com/s/603431/as-goldman-embraces-automation-even-the-masters-of-the-universe-are-threatened/).
So, things are going to change, as they have during previous automation waves. People will need to be better educated, with the right knowledge, to play the new roles that the implementation of AI will drive. But, just as with the introduction of new technologies from time immemorial, the technology will make people more productive and free them up (provided we prepare them) to do better things.
Copyright 2018 Howard Niden
— you can find this (days earlier) and other posts at www.niden.com
and if you like this post: 1) please let me know; and 2) pass on your “find” to others.