- Quantum Computing and Blockchain in Business
- Arunkumar Krishnakumar
Artificial intelligence
I mentioned AI as if it had been developed for the first time after the social media explosion. Nothing could be further from the truth; AI has been around conceptually for a long time. The concept of robots behaving like humans was introduced in science fiction in the early 20th century. Yet, it only started to become a serious field of research in 1950, when Alan Turing posed the question,
"Can Machines Think?"
Origins of AI
As Alan Turing started exploring that question, he came up against not only mathematical challenges, but also theological objections. He refuted the argument that God had given an immortal soul to humans, but not to any other animal or to machines, and hence that no animal or machine could think.
He made it clear that, in attempting to make machines think, we (society and humans) were not standing against God's will. He argued that it wasn't the first time theology and science had taken seemingly contradictory positions.
He pointed out that the Copernican theory disagreed with the biblical verse below. Copernicus had proposed that the sun was the center of the universe and the earth and the other planets revolved around it.
"He laid the foundations of the earth, that it should not move at any time" (Psalm 104:5)
Alan Turing also laid out his views on the future of thinking machines.
"I believe that in about fifty years' time it will be possible, to program computers, with a storage capacity of about 109, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nevertheless, I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.
I believe further that no useful purpose is served by concealing these beliefs. The popular view that scientists proceed inexorably from well-established fact to well-established fact, never being influenced by any improved conjecture, is quite mistaken. Provided it is made clear which are proved facts, and which are conjectures, no harm can result. Conjectures are of great importance since they suggest useful lines of research."
The practical challenges of even attempting AI experiments were huge in those days. Computational power and data storage capacity (or the lack thereof) were the largest bottlenecks. Computers not only had to store words, but also needed to understand the relationships between them in order to conduct meaningful communication.
There were scientists and researchers who were bullish that machines would have the general intelligence of a human being. They came up with different timelines for "AI Singularity." Despite AI winters when the technology was viewed as hype, the research community made consistent progress; in the 1980s the concepts of deep learning were introduced by John Hopfield and David Rumelhart, and the field of AI started to get a new boost through a surge in research funding.
The first practical breakthrough perhaps happened in 1996, when grandmaster Garry Kasparov was defeated by IBM's Deep Blue in a game of chess. Deep Blue was a chess-playing computer, and the result of the game was hugely publicized and considered a big breakthrough in the field at the time. Around the same time, Microsoft integrated speech recognition software developed by Dragon Systems into its Windows operating system.
The scientific community had realized that AI was not just a program that miraculously behaved like a human. It was an approach that used algorithms built on high volumes of good-quality data. This allowed the algorithms to develop a better understanding of the context in which the machine was operating, and to provide relevant answers as outputs.
The imitation game
Another contribution from Turing was the Turing test, which he called the imitation game. The game was constructed as follows:
- There were three rooms, each connected through computer screens and keyboards to the others.
- In the first room sat a human, in the second a computer, and in the third a "judge."
- The judge's job was to identify (through five minutes of interaction) the human and the machine based on their responses.
- Turing proposed that if the judge could do no better than 50% accuracy in identifying the human and the machine, the judge was effectively as likely to pick the human as the computer. That made the computer a passable simulation of a human being, and therefore intelligent.
Over the years, there were several simplifications of this experiment that programmers used as a litmus test for the intelligence of their solutions. Some subsequent researchers have criticized the Turing test's ability to identify genuinely intelligent systems, whilst other papers have been written in defense of the test. Irrespective of that, Alan Turing's contribution to the field of Artificial Intelligence is no doubt immense. He was the visionary who sowed the seeds for future generations to reap the benefits.
Avatars of AI
I often find people using the term AI interchangeably with many of the more detailed branches of AI listed in Figure 4. Oftentimes, using AI to refer to a machine learning solution gets challenged. The way I see it, these sub-clusters of AI all focus on leveraging data to make better decisions. In some scenarios this intelligence augments humans; in others, machines make the decisions themselves and learn from them.
The algorithmic details of AI, like neural networks, clustering, and Bayesian networks, are all covered as techniques under the branches of AI:

Figure 4: Branches of AI
Machine learning is perhaps the most common form, where patterns are recognized in data and predictions are made using those patterns. The pattern recognition process involves feeding a lot of data to algorithms and letting the solution learn from this training data. After the machine has learned from the training data, it applies that learning to a new set of data. If the new set of data exhibits patterns similar to the training data, the machine highlights them. The breadth and the quality of the training data are therefore critical to the learning process. A minimal sketch of this train-then-predict workflow follows, and after that, let me explain it with an example I was involved in.
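The sketch below, written in Python with the scikit-learn library, is purely illustrative: the data, the features, and the choice of a logistic regression model are my own assumptions rather than anything prescribed in this chapter. It only shows the mechanics of learning patterns from training data and applying them to new data.

```python
# A minimal, illustrative train-then-predict workflow (assumed example,
# not from this chapter). Requires scikit-learn: pip install scikit-learn
from sklearn.linear_model import LogisticRegression

# Training data: each row is an observation, each column a feature,
# and y_train holds the known outcomes the algorithm should learn from.
X_train = [[0.1, 1.2], [0.4, 0.9], [3.2, 0.1], [2.9, 0.3]]
y_train = [0, 0, 1, 1]

# The model searches for patterns that separate the two outcomes.
model = LogisticRegression()
model.fit(X_train, y_train)

# New, unseen data: if it exhibits patterns similar to the training data,
# the model highlights which outcome each new observation most resembles.
X_new = [[0.2, 1.0], [3.0, 0.2]]
print(model.predict(X_new))  # typically: [0 1]
```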
I had the privilege of sitting on the IBM Watson evaluation board, when I was at PwC in 2014. We were evaluating the feasibility of using IBM Watson for regulatory solutions. Since 2008, the financial regulators of the UK and EU had come up with several complex regulations, and banks were expected to understand volumes of regulatory text and ensure compliance. Thousands of lines of complex regulatory text to cover, with complementary and conflicting regulatory rules and frequently changing regulations all made it very hard for the banks to stay on top of their regulatory obligations.
The IBM Watson solution that we were evaluating would take all the regulatory texts (in legal language) as inputs. We would also provide as inputs natural language versions of those regulatory texts (where available). Two regulatory experts would work with IBM Watson, and in what they called the "Watson going to School" process, the AI engine would get trained in the regulations. The experts would ask the AI a question regarding a regulation, and when the answer was provided, the experts would give a thumbs up or thumbs down, depending on the quality of the answer. This helped the AI engine learn over time and get better at answering simple, mundane questions on a huge pile of regulatory texts.
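The "Watson going to school" process itself was IBM's own, but the underlying idea of learning from thumbs-up and thumbs-down feedback can be sketched in a few lines. The candidate answers and the scoring rule below are entirely hypothetical; they only illustrate how repeated expert feedback can steer a system toward better answers.

```python
# A hypothetical, heavily simplified illustration of learning from expert
# thumbs-up / thumbs-down feedback. This is NOT IBM Watson's actual
# training process; it only shows the feedback idea in miniature.

# Candidate answers to one regulatory question, each with a running score.
answer_scores = {
    "Report the trade within one business day": 0.0,
    "Report the trade within one calendar month": 0.0,
}

def record_feedback(answer: str, thumbs_up: bool) -> None:
    """Nudge an answer's score up or down based on an expert's verdict."""
    answer_scores[answer] += 1.0 if thumbs_up else -1.0

def best_answer() -> str:
    """Return the answer the system currently trusts the most."""
    return max(answer_scores, key=answer_scores.get)

# Two regulatory experts review the answers; over time the better answer
# accumulates a higher score and is preferred for future questions.
record_feedback("Report the trade within one business day", thumbs_up=True)
record_feedback("Report the trade within one calendar month", thumbs_up=False)
print(best_answer())  # "Report the trade within one business day"
```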
In this case, the problem is pretty clear: we are asking the machine to look into the regulatory text and provide relevant answers. However, there are instances where, despite a lot of data being available, analysts don't know what they are looking for in the data. In such cases, we use a method called unsupervised learning to identify the issues and anomalies in the data. From there, we move on to understanding the underlying variables that influence those anomalies.
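As a minimal sketch of that unsupervised approach, the snippet below uses scikit-learn's IsolationForest to flag unusual observations without any labels. The data and the choice of algorithm are illustrative assumptions; the point is only that the algorithm, not the analyst, decides which rows look anomalous.

```python
# A minimal, illustrative unsupervised anomaly detection sketch (assumed
# example). No labels are given; the algorithm decides what looks unusual.
from sklearn.ensemble import IsolationForest

# Mostly "normal" observations, plus one deliberately odd row at the end.
X = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9], [1.0, 1.0], [8.0, -5.0]]

# contamination states the share of observations we expect to be anomalous.
detector = IsolationForest(contamination=0.2, random_state=0)
detector.fit(X)

# predict() returns 1 for rows that look normal and -1 for anomalies.
# The flagged rows are the ones an analyst would then dig into to
# understand which underlying variables drive the anomaly.
print(detector.predict(X))  # typically: [ 1  1  1  1 -1]
```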
Robotics is another area where there have been significant strides over the last 10 years or so. Countries like South Korea have taken robotics to a whole new level by deploying about 700 robots per 10,000 employees in the manufacturing industries. The numbers on the following chart represent 2016 figures; the latest figures show that South Korea has since risen to 710 robots for every 10,000 employees.
Robots are used to conduct surgeries, carry out rescue operations that are potentially harmful to humans, and provide customer service in banks, as well as in logistics, construction, and even agriculture. Several of these uses are in prototype/pilot stages but are showing promising signs. Industrial applications for robots are starting to gain clarity, especially in areas with repetitive and mechanical tasks.
As a result, low-skill, high-frequency, mundane jobs will be taken over by machines. In the asset management industry, AI is used to make portfolio management decisions, as a machine can go through millions of data points to arrive at a decision far more effectively than a human brain can.

Figure 5: Countries with the highest density of robot workers in 2016
Applications of AI in today's world are virtually unlimited; every day, new avenues and real-world opportunities open up for AI. The availability of data has made the AI boom possible, but it has also opened up a whole new can of worms around data privacy, data ownership, and data security. With consumer data centralized in the hands of a few monopolies, it is often unclear how our data is being used, shared, and monetized. This is where a technology like Blockchain could help.