Before we discuss what artificial intelligence will be capable of in the future, we should address some of the most prominent fears associated with AI.
The idea that machines will become too smart, overtake humans, and destroy the world as we know it is popular in Hollywood but less realistic than pop culture would make it seem. John Laird, an engineering professor at the University of Michigan, is not overly concerned: “I definitely don’t see the scenario where something wakes up and decides it wants to take over the world. I think that’s science fiction and not the way it’s going to play out.” He is, however, worried about humans taking advantage of ever-improving AI to further their own villainous interests. White-collar crimes in particular, such as fraud, embezzlement, insider trading, racketeering, and forgery, could be made far easier by a machine that constantly learns the best ways to extract as much money as quickly and quietly as possible.
These kinds of outcomes are why it is perhaps important that such groundbreaking development moves slowly. As we make incremental gains in AI capability, we can gradually develop guidelines and countermeasures to make sure its power remains regulated and under control. Creating security measures to keep the technology in check must be given due diligence, or else the nightmares that some people have in mind when considering AI may not be too far from the truth.
The benefits of AI are far-reaching and touch a wide spectrum of a company’s operations. On the manufacturing side, machines can learn to perform tasks that are repetitive but have until now required a human’s basic ability to quickly identify an object and decide where it should go. Image processing allows machines to, for example, recognize an object at a glance and sort it into the correct group for packaging or assembly; it can also teach machines to scan and read human handwriting, which could assist in sorting mail en masse.
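The sorting idea above can be sketched in a few lines: represent each image as numbers, average the known examples of each category into a “centroid,” and file each new image under the closest centroid. The tiny 3×3 patterns and the two category names below are invented purely for illustration; real systems work on far larger images with far richer models.

```python
# Minimal sketch of image-based sorting: classify tiny binary "images"
# by comparing them to the average image (centroid) of each known class.
# The 3x3 patterns and labels below are invented for illustration.

def flatten(img):
    """Turn a 2-D grid of pixels into a flat list of numbers."""
    return [p for row in img for p in row]

def centroid(images):
    """Element-wise mean of a list of flattened images."""
    n = len(images)
    return [sum(vals) / n for vals in zip(*images)]

def distance(a, b):
    """Squared Euclidean distance between two flat images."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(img, centroids):
    """Sort an image into the class whose centroid it is closest to."""
    flat = flatten(img)
    return min(centroids, key=lambda label: distance(flat, centroids[label]))

# Two made-up categories: a vertical bar and a horizontal bar.
training = {
    "vertical":   [[[0,1,0],[0,1,0],[0,1,0]], [[1,1,0],[0,1,0],[0,1,0]]],
    "horizontal": [[[0,0,0],[1,1,1],[0,0,0]], [[0,0,0],[1,1,1],[0,0,1]]],
}
centroids = {label: centroid([flatten(i) for i in imgs])
             for label, imgs in training.items()}

print(classify([[0,1,0],[0,1,1],[0,1,0]], centroids))  # → vertical
```

The same nearest-centroid principle, scaled up to thousands of pixels and many categories, is one of the simplest ways a machine can “decide where that object should go.”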
In this way, we can use AI and machine learning to lift the burden of basic duties from humans, who still outperform computers in areas such as creativity and problems that cannot be easily structured, so that the human skill set can be put to better use. Tedious tasks can already be handled easily by machines, and even the machine learning process operates autonomously: once you have tested your program and put it into operation, there is no need to babysit it; simply let it run and deliver its findings on its own schedule. Financial institutions in particular have taken great advantage of these gains in productivity, allowing their employees to step away from logging transactions and spend more time improving the customer experience and seeking out bigger long-term engagements.
Another huge boon for businesses of giving computers more responsibility within the business model is that computers make no careless mistakes. Natural human error constantly creates roadblocks to progress, and many companies factor it into their projections from the very beginning, instantly lowering the ceiling of what they can achieve. Even turning over basic tasks to computers can increase a company’s profitability, and when you factor in the more advanced machine learning techniques that can completely rejuvenate a business model, the potential ROI of machine learning technology goes through the roof.
What companies must be wary of when factoring out the possibility of human error, however, is the exact programming of their machine learning models. AI must be tested repeatedly, in different situations and with different data, to ensure that it is working as intended. Putting a machine learning solution into practice without thorough validation is little better than making operations decisions by flipping a coin. This cannot be emphasized enough: testing and re-testing machine learning programs is crucial to the eventual success of that investment.
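One standard way to test a model “with different data” is k-fold cross-validation: split the data into k chunks, repeatedly train on all but one chunk and measure accuracy on the held-out chunk, then average the scores. The sketch below uses an invented one-dimensional dataset and a deliberately trivial threshold “model”; it illustrates the validation procedure, not any particular production technique.

```python
# Minimal sketch of k-fold cross-validation: evaluate a model on data
# it was not trained on, k times, and average the accuracy scores.
# The dataset and the trivial threshold "model" are invented examples.

def train(samples):
    """Fit a 1-D threshold: the midpoint between the two class means."""
    lo = [x for x, y in samples if y == 0]
    hi = [x for x, y in samples if y == 1]
    return (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2

def accuracy(threshold, samples):
    """Fraction of samples the threshold classifies correctly."""
    correct = sum((x > threshold) == (y == 1) for x, y in samples)
    return correct / len(samples)

def cross_validate(samples, k=5):
    """Average held-out accuracy over k train/test splits."""
    folds = [samples[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        held_out = folds[i]
        train_set = [s for j, f in enumerate(folds) if j != i for s in f]
        scores.append(accuracy(train(train_set), held_out))
    return sum(scores) / k

# Made-up data: class 0 clusters near 1.0, class 1 clusters near 3.0.
data = [(1.0, 0), (1.2, 0), (0.8, 0), (1.1, 0), (0.9, 0),
        (3.0, 1), (2.8, 1), (3.2, 1), (3.1, 1), (2.9, 1)]
print(cross_validate(data))  # → 1.0 on this cleanly separated toy data
```

A model that scores well only on the data it was trained on is exactly the coin-flip risk described above; cross-validation is the cheapest guard against it.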
But as mentioned earlier, the key to AI and machine learning is the ability to learn over time through experience and through the processing of data. Constantly improving algorithms will drive growth faster than ever before. These machines can also handle many different types of data, including multi-dimensional data, and can operate in dynamic or uncertain environments. This ability to perform under varied circumstances is another major difference from the AI of just ten years ago.
The power of machine learning becomes clearer to the general public every day, right before their eyes, often without their realizing what they are seeing. The virtual voice assistants in smartphones and smart homes are driven largely by machine learning, and more specifically by natural language processing. Siri, Alexa, and Cortana are all built to recognize speech and give an accurate response, but how does that happen? The machine listens for speech, usually activated by a trigger phrase (“Hey Siri”), and then uses voice recognition technology to convert what it hears into a series of numbers, which it uses to understand what the person said and to prepare a response accordingly.
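The very first part of that conversion, turning sound into a series of numbers, can be illustrated simply: slice the audio waveform into short frames and compute a number (here, the average energy) for each frame. The sine-wave “audio” below is synthetic and purely illustrative; real assistants compute far richer features, such as spectrograms, before any recognition happens.

```python
# Minimal sketch of the "speech becomes a series of numbers" step:
# slice a waveform into short frames and compute each frame's energy.
# The synthetic sine-wave "audio" here is invented for illustration.
import math

def frame_energies(samples, frame_size=100):
    """Average energy of each non-overlapping frame of the signal."""
    energies = []
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[start:start + frame_size]
        energies.append(sum(s * s for s in frame) / frame_size)
    return energies

# Synthetic "audio": silence, then a loud tone, then silence again.
rate = 8000
silence = [0.0] * 400
tone = [math.sin(2 * math.pi * 440 * t / rate) for t in range(800)]
signal = silence + tone + silence

features = frame_energies(signal)
print([round(e, 2) for e in features])
# Quiet frames have near-zero energy; the tone's frames stand out,
# giving a recognizer numbers it can actually work with.
```

A trigger-phrase detector works on exactly this kind of numeric stream: it watches the feature sequence continuously and only wakes the full recognizer when the pattern matches.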
But while those voice assistants are the most obvious instances of machine learning in our phones, they are not the only ones; modern smartphones are actually some of the most machine-learning-rich devices in the world. For example, one of Apple’s biggest advertising campaigns centers on improvements to the iPhone camera, such as blurring out the background of an image, digitally enhancing a photo, and even extending beyond what was captured, with the phone filling in the boundaries. Using image processing, the phone can detect where the person or main objects of the picture are and single out their defining characteristics. Even the face-unlock feature of some newer phones is driven by AI, although facial recognition is hardly limited to cell phones; everyone from Facebook to the government has found a use for such applied science.
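The background-blur effect described above combines two steps: a learned model marks which pixels belong to the subject, and the camera software blurs everything else. The sketch below shows only the second, simpler step, on an invented one-dimensional row of pixels with a hand-written subject mask; on a real phone the mask comes from a segmentation model and the image is two-dimensional.

```python
# Minimal sketch of portrait-mode background blur: given a mask that
# marks the subject, blur only the pixels outside it. The 1-D "image"
# and hand-written mask are invented toys; real phones apply this to
# 2-D images using masks produced by learned segmentation models.

def box_blur(pixels, radius=1):
    """Average each pixel with its immediate neighbors."""
    out = []
    for i in range(len(pixels)):
        lo, hi = max(0, i - radius), min(len(pixels), i + radius + 1)
        out.append(sum(pixels[lo:hi]) / (hi - lo))
    return out

def portrait_blur(pixels, mask):
    """Keep subject pixels (mask == 1) sharp; blur the background."""
    blurred = box_blur(pixels)
    return [p if m else b for p, m, b in zip(pixels, mask, blurred)]

image = [10, 90, 10, 200, 200, 10, 90, 10]   # bright subject in middle
mask  = [0,  0,  0,  1,   1,   0,  0,  0]    # subject location (given)
print(portrait_blur(image, mask))
# The two 200s stay sharp; every background pixel gets averaged.
```

Separating “find the subject” from “blur the rest” is also why the effect occasionally fails at the edges of hair or glasses: the blur is trivial, but the learned mask is not.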