CIO Perspectives: Impact of Technological Singularity on Analytics

Strong Artificial Intelligence (AI).

Recursive self-improvement.

Exponential growth.

Technological singularity is a hypothetical event in which a computer, by leveraging artificial general intelligence (known as ‘strong AI’), becomes capable of recursive self-improvement (redesigning itself) – building a computer better than itself.

Applying recursive improvements to big data means that structures unknown to humans today could be created within a decade. Applying recursive improvements to analytics means that correlations that today must be linear could tomorrow be non-linear, surfacing connections that appear unrelated but are in fact linked across extreme distances. Applying recursive improvements to biometric sensors could create new unique identifying characteristics, currently unknown and unmonitored. This opens the possibility that, through enabled smart devices, we could establish identity in new ways, such as gait analysis (someone’s walking style, derived from wearable device data recorded in the last 30 seconds).
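As a thought experiment, here is a minimal sketch of what gait-based identification from a 30-second wearable window might look like. The feature set, the 50 Hz sample rate, and the matching threshold are all illustrative assumptions, not a production biometric.

```python
import numpy as np

def gait_features(accel: np.ndarray, sample_rate_hz: int = 50) -> np.ndarray:
    """Summarize a window of 3-axis accelerometer samples into a small
    gait feature vector (cadence, mean, spread); features are illustrative."""
    magnitude = np.linalg.norm(accel, axis=1)   # combine x, y, z per sample
    centered = magnitude - magnitude.mean()
    # Count sign changes as a rough proxy for steps, then convert to steps/sec.
    crossings = np.sum(np.diff(np.sign(centered)) != 0)
    cadence = (crossings / 2) / (len(magnitude) / sample_rate_hz)
    return np.array([cadence, magnitude.mean(), magnitude.std()])

def same_walker(window: np.ndarray, enrolled: np.ndarray, tol: float = 0.5) -> bool:
    """Compare a fresh window against an enrolled gait template."""
    return bool(np.linalg.norm(gait_features(window) - enrolled) < tol)

# Usage: enroll on one recorded walk, then verify a later window.
rng = np.random.default_rng(7)
walk = rng.normal(0, 1, size=(30 * 50, 3))      # synthetic 30 s at 50 Hz, 3 axes
template = gait_features(walk)
print(same_walker(walk, template))              # True for the same recording
```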

Superintelligence

Futurist Ray Kurzweil, the principal inventor of the first charge-coupled device flatbed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, and the first commercial text-to-speech synthesizer, believes that the singularity will occur around 2045. Vernor Vinge argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity, and that it will occur sometime before 2030.

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended — Vernor Vinge

In Vinge’s 1993 article ‘The Coming Technological Singularity’, he explains that once true superhuman artificial intelligence is created, no current model of reality will be sufficient to predict beyond it. When will the era of the robots start? Shortly after the death of the recommendation engines. A recommendation engine (recommender system) is a tool that predicts preferences (may like, may not like) across a list of given items. These recommendations could cover books, software, travel, and many other areas. This, however, is not artificial intelligence (AI); it is a recommendation engine. A recommendation engine works from known information, typically leveraging either collaborative filtering (which arrives at a recommendation based on a model of prior behavior across many users) or content-based filtering (which recommends items whose attributes match a user’s own behavior, e.g. historical browsing) to determine your likes and dislikes.
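A minimal sketch of the content-based side, assuming an invented catalog of items tagged with attributes; the titles, tags, and scoring are purely illustrative:

```python
# Content-based filtering sketch: recommend items whose attributes
# overlap most with what the user has already consumed.
catalog = {
    "Dune":          {"sci-fi", "classic"},
    "Neuromancer":   {"sci-fi", "cyberpunk"},
    "Lonely Planet": {"travel", "guide"},
    "Hyperion":      {"sci-fi", "classic"},
}

def recommend(history, top_n=2):
    # Build a profile from the attributes of everything already seen.
    profile = set().union(*(catalog[title] for title in history))
    # Score each unseen item by its attribute overlap with the profile.
    unseen = {t: len(catalog[t] & profile) for t in catalog if t not in history}
    return sorted(unseen, key=unseen.get, reverse=True)[:top_n]

print(recommend(["Dune"]))   # ['Hyperion', 'Neuromancer'], closest tag overlap
```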

In contrast, artificial intelligence takes something known and creates something unknown.

Netflix uses a form of machine learning, a subfield of AI, that produces results for learning, prediction, and decision-making. Collaborative filtering drives the Netflix engine, commonly used in combination with the Pearson correlation. The Pearson correlation measures the linear dependence between two variables (or users, in this case) as a function of their attributes (Jones, 2013). Many algorithms become less reliable as the population sample grows exceptionally large.

The Pearson correlation sifts the sampling population down into neighborhoods based on similarity (reading the same books, traveling to the same locations).

This approach produces targeted predictions that are accurate within a small population sample while still leveraging the full population data, and that remain relevant for a subsection, or neighborhood, of users.
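A minimal sketch of that neighborhood idea, with an invented user-by-item ratings matrix; the similarity threshold and the simple neighborhood average are assumptions for illustration, not Netflix’s actual algorithm:

```python
import numpy as np

# Invented user-by-item ratings matrix (rows = users, columns = items).
ratings = np.array([
    [5, 4, 1, 1],   # user 0
    [4, 5, 1, 2],   # user 1: rates much like user 0
    [1, 1, 5, 4],   # user 2: opposite tastes
], dtype=float)

def pearson(u, v):
    """Linear dependence between two users' ratings, in [-1, 1]."""
    return float(np.corrcoef(u, v)[0, 1])

def neighborhood(user, min_sim=0.5):
    """Users whose Pearson similarity to `user` clears a threshold."""
    return [other for other in range(len(ratings))
            if other != user and pearson(ratings[user], ratings[other]) >= min_sim]

# Predict user 0's view of item 3 from the neighborhood average.
peers = neighborhood(0)
print(peers)                     # [1]; user 2 is filtered out
print(ratings[peers, 3].mean())  # 2.0, the neighbors' average rating
```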

Turing Test Evolves

John McCarthy coined the term ‘Artificial Intelligence’ in his 1955 proposal for the 1956 Dartmouth Conference. He also invented the Lisp programming language. Until 1956 this space was referred to as machine intelligence. When the conversation moves to the topic of AI, it’s not long before talk of the Turing Test arises. Alan Turing’s 1950 paper “Computing Machinery and Intelligence” (Turing, 1950) was first published in Mind (a British peer-reviewed academic journal currently published by Oxford University Press on behalf of the Mind Association). It was within this seminal paper that the concept of what is now considered the Turing Test (TT) was introduced. The TT involves three participants in isolated rooms: a computer (which is being tested), a human, and a judge (also human). Typing through a terminal, the computer and the human both try to convince the judge that they are human. The computer wins when the judge can’t consistently tell which is which. This is the de facto test of artificial intelligence.

Stevan Harnad, a cognitive scientist, contends that the TT has evolved since 1950 and that today’s Turing Test asks the question:

“Can machines do what we (as thinking entities) can do?”

Harnad also suggests that this test is not designed to trick the judge into believing a computer is human, but rather to establish AI’s empirical goal: generating human-scale performance capacity. The Turing Test represents what the science of AI intends to do; until it is passed, AI remains a machine. The term ‘intelligence’ will be bestowed on a computer only after it successfully passes the TT.

Technological Singularity

George Rebane frames this well in his 2010 article, “Singularity? What’s that?”, by stating that “the event when machines reach par intelligence with humans is known as the technological singularity, or simply the Singularity.” The concept was named by mathematician and author Vernor Vinge because singularities in physics and mathematics are points beyond which behavior is undefined, a new state where the normal rules no longer apply.

Consider the advancements of ingestibles, wearables, and internables, which even 20 years ago were basically the dreams of weirdo techies and sci-fi enthusiasts. How has the advancement of distributed systems and distributed processing transformed industry over the last 10 years? Each of these ideas was previously undefined.

The chart below (Wood, 2013) does a good job of illustrating the concept of singularity.

This abstraction of the Singularity becomes a form of je ne sais quoi: an indefinable point on the way to the future of human innovation.

Moore’s Law

Does Moore’s law demonstrate a valid path toward the realization of singularity? Gordon Moore, co-founder of Intel and Fairchild Semiconductor, made the observation that the number of transistors in a dense integrated circuit doubles approximately every two years.

Moore’s law actually isn’t a physical law or natural law but rather the projection of a future state.

Based on Moore’s law, the semiconductor manufacturing process reached the 14nm node before 2014 but would need to reach the 10nm node by 2016-2017 for the rule to hold. Intel released papers about its 14nm node technology in February 2015, and the first Intel chip with 10nm technology is planned for release in late 2016 or early 2017 (Anthony, 2015). Will the 10nm node be created in time to maintain Moore’s law? As of today, it’s looking pretty good for Intel.
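To make the projection concrete, here is a back-of-the-envelope sketch assuming a clean doubling every two years from the Intel 4004’s roughly 2,300 transistors in 1971; the anchor point is illustrative, and real process nodes drift from this idealized curve:

```python
# Moore's law as stated above: transistor counts double every two years.
# Baseline: the Intel 4004 (1971) at ~2,300 transistors, an illustrative anchor.
BASE_YEAR, BASE_TRANSISTORS = 1971, 2_300

def projected_transistors(year: int) -> int:
    """Idealized Moore's-law projection; a trend line, not a physical law."""
    doublings = (year - BASE_YEAR) / 2
    return int(BASE_TRANSISTORS * 2 ** doublings)

for year in (1971, 1991, 2015):
    print(year, f"{projected_transistors(year):,}")
# 2015 projects to roughly 9.6 billion transistors; flagship chips of that
# era actually shipped with several billion, so the trend line roughly holds.
```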

Step Off That Ledge

The dichotomy between the future states envisioned by Vinge and by Miguel Nicolelis, a top neuroscientist at Duke University, is almost comical. Nicolelis says that technological singularity is a ‘bunch of hot air’ and that computers will never create a human-like brain.

Maybe this is all just fatuous, or maybe we are exploring the edges of humanity’s next innovation.

 

References

Anthony, S. (2015). Intel forges ahead to 10nm, will move away from silicon at 7nm | Ars Technica. Retrieved November 29, 2015, from http://arstechnica.com/gadgets/2015/02/intel-forges-ahead-to-10nm-will-move-away-from-silicon-at-7nm/

Wood, D. (2013). The Lead Up to the Singularity [online image]. Retrieved November 29, 2015, from http://www.33rdsquare.com/2013/09/david-wood-on-lead-up-to-singularity.html

Harnad, S. (1992). Connecting object to symbol in modeling cognition. In A. Clark & R. Lutz (Eds.), Connectionism in Context (pp. 75–90). Springer-Verlag.

Jones, M. T. (2013). Recommender systems, Part 1: Introduction to approaches and algorithms. Retrieved November 29, 2015, from http://www.ibm.com/developerworks/library/os-recommender1/

Rebane, G. (2010). Singularity? What’s that? (w/appendix) – Rebane’s Ruminations. Retrieved November 29, 2015, from http://rebaneruminations.typepad.com/rebanes_ruminations/2010/04/singularity-whats-that-wappendix.html

Turing, A. (1950). Computing machinery and intelligence. Mind, LIX(236), 433–460. doi:10.1093/mind/LIX.236.433

 

Peter Nichol empowers organizations to think different for different results. You can follow Peter on Twitter or on his blog. Peter can be reached at pnichol [dot] spamarrest.com.
