Do you have a data strategy to achieve better organizational analytics?

Every company is talking about analytics, but only a handful have a simple data analytics strategy.

Big-data analytics, actionable insights, and powerful outcomes are the de facto expectations for data-analytics programs. Is your data strategy aligned to deliver those results?

Organizations are seeking sophisticated analytical techniques and tools to gain more profound insights into how they can capitalize on the blue ocean of data analytics. Listen this week at your office and you’ll undoubtedly hear whisperings about harnessing the power of analytics. It might not be called data management or big-data analytics, and the questions might be more subtle, such as:

  • How do we discover new insights into our products?
  • Which operational capabilities will deliver the highest ROI?
  • How do we leverage our data to generate better strategies and execute with improved confidence?

Managers and leaders alike are searching for approaches to tap into the value of big-data analytics. What exactly is a big-data analytics strategy?

A comprehensive data analysis foundation

Start by mentally framing the building blocks of a world-class data-analytics program. The frame doesn’t need to be perfect. Identify the critical components that make up a data-analytics foundation:

  • Presentation layer: where the dashboards and workflows live
  • Big-data processing and analytics layer: the base for pattern matching, mining, predictive modeling, classification engines, and optimization
  • Data-storage and management layer: relational data systems, scalable NoSQL data storage, and cloud-based storage
  • Data-connection layer: data sensing, data extraction, and data integration

The analytics framework can also be segmented into four phases: descriptive, diagnostic, predictive, and prescriptive. The descriptive phase defines what happened. The diagnostic phase determines why it happened. The predictive phase forecasts what will happen. The prescriptive phase identifies what action to take. Together, these phases help leaders classify the types of questions they’re receiving. These can also highlight capability deficiencies.
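
To make the four phases concrete, here’s a minimal Python sketch, using hypothetical monthly sales figures (not real data), that walks one small data set through the descriptive, diagnostic, predictive, and prescriptive questions:

```python
import numpy as np
import pandas as pd

# Hypothetical monthly figures, for illustration only.
df = pd.DataFrame({
    "month": range(1, 13),
    "ad_spend": [10, 12, 9, 14, 15, 13, 18, 17, 16, 20, 21, 19],
    "sales": [101, 118, 95, 130, 142, 125, 160, 155, 149, 178, 183, 170],
})

# Descriptive: what happened?
print("Average monthly sales:", df["sales"].mean())

# Diagnostic: why did it happen? Correlation is a first clue.
print("Sales vs. ad spend:", df["sales"].corr(df["ad_spend"]).round(2))

# Predictive: what will happen? A naive linear trend.
slope, intercept = np.polyfit(df["month"], df["sales"], 1)
print("Month 13 forecast:", round(slope * 13 + intercept, 1))

# Prescriptive: what action should we take? A simple decision rule.
if df["sales"].corr(df["ad_spend"]) > 0.8:
    print("Recommendation: increase ad spend next quarter.")
```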

Big-data analytics frameworks

Frameworks—in data analytics—provide an essential supporting structure for building ideas and delivering the full value of big-data analytics.

Does it matter whether your framework is bulletproof? No, it doesn’t. It’s important that the framework provide a set of guiding principles to ground thinking. Establishing common principles prevents revisiting the same topics.

Think of a data-analytics framework as an ontological approach to big-data analytics. There’s one framework that’s particularly useful—the annual Big Data Analytics World Championships for Business and Enterprise—which stresses the following:

  • Practical concepts: predict future outcomes, understand risk and uncertainty, embrace complexity, identify the unusual, think big
  • Functions: decide, acquire, analyze, organize, create, and communicate
  • Analytics applications: business insights, sentiment analysis, risk modeling, marketing-campaign analysis, cross-selling, data integration, price optimization, performance optimization, recommendation engines, fraud detection, customer-experience analytics, customer-churn analytics, stratified sampling, geo/location-based analysis, inventory management, and network analysis
  • Skills and technical understanding: data mining, statistics, machine learning, software engineering, Hadoop, MapReduce, HBase, Hive, Pig, Python, C/C++, SQL, computational linear algebra, metrics analysis, and analytics tools (SAS, R, MATLAB)
  • Machine learning: machine-learning tools, supervised learning, Monte Carlo techniques, text mining, NLP, text analysis, clustering techniques, tagging, and regression analysis
  • Programming: Python basics, R basics, R setup, vectors, variables, factors, expressions, arrays, lists, and IBM SPSS
  • Data visualization: histogram, treemap, scatter plot, line charts, spatial charts, survey plots, decision trees, data exploration in R, and multivariate and bivariate analyses
  • Fundamentals: matrices and linear algebra, relational algebra, DB basics, OLAP, CAP theorem, tabular data, data frames and series, multidimensional data models, ETL, and reporting vs. BI vs. analytics
  • Data techniques: data fusion, data integration, transformation and enrichment, data discovery, data formats, data sources and acquisition, unbiased estimators, data scrubbing, normalization, and handling missing values
  • Big data: Hadoop setup (IBM, Cloudera, Hortonworks), data replication principles, name and data nodes, Hadoop components, MapReduce fundamentals, Cassandra, and MongoDB
  • Statistics: ANOVA, skewness, common distributions (normal/Gaussian, Poisson), random variables, Bayes’ theorem, probability distributions, percentiles and outliers, histograms, and exploratory data analysis
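
To ground two of these lenses, data techniques and statistics, here’s a minimal pandas sketch. The customer data is hypothetical; the point is the pattern of scrubbing, normalizing, and exploring:

```python
import numpy as np
import pandas as pd

# Hypothetical customer data with gaps and mismatched scales.
df = pd.DataFrame({
    "age": [34, 41, np.nan, 29, 52, 47],
    "income": [52_000, 61_000, 58_000, np.nan, 120_000, 75_000],
})

# Data scrubbing: fill missing values with a simple median.
df = df.fillna(df.median(numeric_only=True))

# Normalization: rescale each column to the [0, 1] range.
normalized = (df - df.min()) / (df.max() - df.min())

# Exploratory statistics: percentiles flag potential outliers.
p25, p75 = df["income"].quantile([0.25, 0.75])
outliers = df[df["income"] > p75 + 1.5 * (p75 - p25)]
print(normalized.round(2))
print("Potential income outliers:\n", outliers)
```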

Use these eleven lenses to define your data-analytics strategy. Unfortunately, the framework won’t replace a great leader who understands how to execute these programs successfully. It will, however, help steer the conversations in the right direction.

If your team is less familiar with the principles of big-data analytics, use these questions as a guide:

  1. Practical concepts: What future outcomes do we want to predict?
  2. Functions: Do we have a methodology or process to mature data-analytical requests?
  3. Analytics applications: Which insights are we seeking to generate?
  4. Skills and technical understanding: What skills and competencies are critical for producing new organizational insights?
  5. Machine learning: Which business capabilities would benefit from enhanced machine-learned capabilities?
  6. Programming: What are the most important technical programming skills to mature within the organization?
  7. Data visualization: Which visual representations lead to the best decisions?
  8. Fundamentals: Which layer holds the greatest potential to transform how we make decisions—presentation, big-data processing, data storage, or data connection?
  9. Data techniques: Which data transformation techniques are essential to move us from data to information?
  10. Big data: Based on our business architecture, which technology components are foundational to providing intelligent data analytics?
  11. Statistics: How do we envision data being categorized and analyzed?

Making your data strategy actionable

There are thousands of ways to develop a big-data program but only one method to measure success: Did we achieve the outcomes desired?

Leveraging top-down and bottom-up interaction models helps to lock in value and prevent leakage. Use the categories below to group ideas as you form an actionable plan. Once this exercise is complete, place each category on the y-axis.

  1. Overarching strategy: defines the value and categories of results
  2. Tactics: articulates how value will be created
  3. Measurement plan: identifies program success metrics, KPIs, and mechanisms for tracking progress against the plan
  4. Analytics: captures the predictive modeling and correlation analysis that turn experiments into specific actions
  5. Optimization opportunities: maximizes investments for the agenda with the highest probability to achieve the greatest outcome

Then list the three-tiered approach against the x-axis:

  • Quick wins: under 30 days
  • Intermediate wins: 31 to 90 days
  • Long-term wins: greater than 90 days
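
As a rough illustration, the grid can be sketched in a few lines of Python; the initiative ideas below are hypothetical placeholders:

```python
import pandas as pd

# Hypothetical ideas, each tagged with a strategy category (y-axis)
# and a delivery horizon (x-axis).
ideas = [
    ("Overarching strategy", "Quick wins", "Define analytics value categories"),
    ("Tactics", "Intermediate wins", "Stand up a churn-analysis pilot"),
    ("Measurement plan", "Quick wins", "Agree on program KPIs"),
    ("Analytics", "Long-term wins", "Run predictive-model experiments"),
    ("Optimization opportunities", "Long-term wins", "Reallocate spend to the top agenda"),
]
df = pd.DataFrame(ideas, columns=["category", "horizon", "idea"])

# Pivot into the categories-by-horizons grid described above.
grid = df.pivot(index="category", columns="horizon", values="idea").fillna("")
print(grid)
```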

The result is a graphical view of your data strategy. This approach will help your team generate ideas and determine a general sequence of delivery, weighted by the idea that will most significantly impact the organization.

The secret of successful big-data analytics programs

Different stakeholders will be using your organization’s data for different reasons. Perspectives matter. Data analytics is changing the way company decisions are made. Data engineering, domain expertise, and statistics each play a role in the discipline of data science for your organization. Understanding concepts such as mathematical techniques is increasingly important for extracting the maximum information from large data sets. People we hired—even two years ago—may not have the raw skills required to communicate the salient features of data succinctly.

Using a combination of “big” data and “little” data creates the foundation for quick wins. Sure, after reading an entire book on a particular subject you’d gain more insights, but often even reading a chapter or two can offer substantial perspectives. Start small with little data and build strategically to achieve big-data analytics success.

The capabilities and roles of world-class, master data management

Business strategy achievement requires data management capabilities. Define these first.

Data management enables the storing of everything from genomic data to Xbox scores to your Pandora playlists. If the data were unified, we’d have the beginning of master data management.

Organizations have data scattered throughout their environments. Data management provides a single view of each data domain. Master data management goes further, providing a complete, unified view of the key data entities common across the organization.

Broadly speaking, civilization has witnessed five generations of data management following manual processing using paper and pencil:

  1. Mechanical punched card: data processing
  2. Stored program: sequential record processing
  3. Online network: navigational set processing
  4. Nonprocedural: relational databases and client-server computing
  5. Multimedia databases: object-relational databases with relationships

Data models, scaling, automation, integration, and workflows increase the complexity of generating usable information from data.

Technology leaders who are thinking ahead must answer three questions to stay competitive:

  1. Why is master data management the backbone of an organization?
  2. What capabilities are required for business-strategy achievement?
  3. How do these capabilities translate into tangible roles within my organization?

The business case for master data management

Master data management maximizes business outcomes with improved data integrity, visibility, and accuracy. The result is better decision-making. The efficiency and effectiveness of decisions are at the heart of every organization. Are you deciding on the best location for that off-site meeting? You need data. A list of the top 1,000 venues is interesting, but a cross-section of the top ten sites—as ranked by attendees over the last three years—is more useful. Are you developing your business strategy? A summary of 100 business cases with corresponding business strategies is useful, but a revised view of only business strategies that were successful provides more meaningful information.
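
The venue example translates directly into code. Here’s a minimal pandas sketch, with made-up venues and scores, that turns a raw list into the cross-section a decision-maker actually needs:

```python
import pandas as pd

# Hypothetical attendee ratings: one row per venue per year.
ratings = pd.DataFrame({
    "venue": ["Lakeside", "Summit", "Harborview"] * 3,
    "year": [2015] * 3 + [2016] * 3 + [2017] * 3,
    "score": [4.1, 3.6, 4.4, 4.3, 3.8, 4.6, 4.2, 3.7, 4.8],
})

# Keep the last three years, average by venue, and rank the top ten.
top = (ratings[ratings["year"] >= 2015]
       .groupby("venue")["score"].mean()
       .sort_values(ascending=False)
       .head(10))
print(top)
```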

We collect data. We assemble information. We create knowledge. It’s knowledge that we’re striving to generate. To get there, we need people, processes, and tools to enable the best decision-making possible.

Better decision-making, reduced operational friction, and repeatable processes all benefit from understanding how your organization values and utilizes information. Achievement requires a master data-management program.

We’re talking about capabilities

Competencies and capabilities are different. Capabilities measure how a company deploys resources to achieve business strategies: the abilities, activities, routines, and processes that build competitive advantage. Competencies are the skills individuals bring to that work. Competencies are skills. Capabilities are abilities.

Here’s another way to delineate between competencies and capabilities. Competencies are individual characteristics and capabilities are organizational. Let’s address the organizational elements.

An organization’s capabilities are core functions or the secret ingredients for success.

Master data management has three high-level capability groups: business capabilities, information-management capabilities, and data-management capabilities.

Business capabilities

  • Governance: the political process of changing organizational behavior by an established system of who has the right to make decisions
  • Stewardship: business ownership of data quality for one or more subject areas; deduplication; maintaining hierarchies; and developing business rules
  • Platform and architecture: technology and data-management assets including data modeling, data architecture, and metadata management (data dictionaries, glossaries, and data lineage)
  • Security: data availability, protection, disaster recovery, and data redundancy

Information-management capabilities

  • Intelligence: ad-hoc query and real-time dashboard capabilities; making the data usable
  • Analytics and visualization: core reporting, advanced analytics and risk management, regulatory and statutory reporting
  • Workflow: process-model data flows
  • Quality: dimensions of data quality
  • Integration: model connection interfaces to entities

Data-management capabilities

  • Operations: operational transactions and business processes of the enterprise
  • Data acquisition: ELT, audit, balance and control, and testing
  • Curation: the active, ongoing management of data throughout its lifecycle from creation to archiving or deletion
  • Science: data mining; establishment of methods, processes, algorithms, and systems to extract knowledge or insights from data in various forms, either structured or unstructured
  • Performance: enterprise performance management of thresholds and tolerances

Design of progressive data management programs accounts for the social, business, and technological changes that can affect how data is managed throughout an organization. Stay focused on which specific organizational capabilities will be required for your master data-management program to provide better insights into your data.

The roles of data management

Despite your best efforts, eventually the conversation will shift to who’s doing what to support the necessary data activities. The roles below are illustrative descriptions that cover the majority of organizational data-management activities. Roles can be compacted if teams are lean or expanded if organizational needs are large.

  • Data architect: identifies objects and data elements to be managed, specifies the policies and business rules for how master data is created and maintained, describes any hierarchies, taxonomies, or other relationships important to organizing or classifying objects, and explicitly assigns data-stewardship responsibility to individuals and organizations
  • Data custodian: has ownership of the data, maintains accuracy and currency of the assigned data, and determines the security classification level of the data
  • Data steward: implements data policies, standards, procedures, and guidelines concerning data access and management
  • Data business analyst: collects, manipulates, and analyzes data
  • Project manager: appoints and supports data stewards in their areas of responsibility
  • Business relationship manager: determines which data requests will be queued and executed
  • Business intelligence specialist: serves as the business and technical subject-matter expert on data or information assets
  • Database administrator: is responsible for storage, organization, capacity planning, installation, configuration, database design, migration, performance monitoring, security, and troubleshooting as well as backup and data recovery
  • Data scientist: applies knowledge and skills to conduct sophisticated and systematic analyses of data to produce insights
  • Data engineer: develops, constructs, tests, and maintains architectures such as databases and large-scale data-processing systems; integrates, consolidates, and cleanses data
  • Data developer: develops, tests, improves, and maintains new and existing databases to help users retrieve data effectively

Don’t assume personnel are clear on their responsibilities. First, create each job description. Second, validate these job descriptions within the organization to ensure that gaps and overlap are addressed. Third, develop a RACI (responsible, accountable, consulted, informed) matrix to assign ownership. Fourth, develop job postings. A job posting is the job description jazzed up to represent the flavor of the organization and the team where the role resides.
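
As an illustration of step three, a RACI matrix can be as simple as a small table. The roles and activities below are hypothetical examples, not a prescribed assignment:

```python
import pandas as pd

# Hypothetical RACI matrix: R=responsible, A=accountable,
# C=consulted, I=informed.
raci = pd.DataFrame(
    {
        "Data architect": ["A", "C", "I"],
        "Data steward": ["R", "A", "C"],
        "Data custodian": ["C", "R", "A"],
        "Database administrator": ["I", "C", "R"],
    },
    index=["Define data policies", "Maintain data quality",
           "Secure and back up data"],
)
print(raci)
```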

Lastly, many roles require training. Data stewards and data custodians immediately come to mind. These roles have specific functions to perform. However, it’s not sufficient to only train folks in new roles. The organization as a whole must be educated to drive the change collectively. Master data-management isn’t a separate movement; the change needs to be organizational.

GDPR: Are you ready for the new face of data privacy?

The CIO’s guide to the breadth and depth of GDPR.

The right to privacy is a long-standing concept that goes back to English Common Law. The Castle Doctrine gives us the familiar phrase, “A man’s home is his castle.” The castle can be generalized as any site that’s private and shouldn’t be accessible without permission of the owner. The idea of privacy quickly expanded to include recognition of a person’s spiritual nature, feelings, and intellect. It’s the right to be left alone.

The European Union (EU) General Data Protection Regulation (GDPR) replaced the Data Protection Directive 95/46/EC to strengthen and unify data protection for individuals within the EU and address the export of personal data outside the EU. The EU parliament passed the Regulation—after four years of debate—on April 14, 2016, with an effective date of May 25, 2018.

Modern U.S. tort law

There are four categories of modern tort law in which the concept of “invasion of privacy” is used in legal pleadings. These four concepts are remarkably similar to the provisions of GDPR:

  1. Intrusion of solitude: intrusion into one’s private quarters
  2. Public disclosure of private facts: the dissemination of truthful, private information
  3. False light: the publication of facts that place a person in a false light
  4. Appropriation: the unauthorized use of a person’s name or likeness

The intrusion of solitude refers to a person intentionally intruding—either physically or electronically—into the private space of another. Typical examples include hacking into someone else’s email or setting up a hidden camera to view a person without their knowledge.

The public disclosure of private facts is the act of publishing information that wasn’t meant for public consumption. Unlike libel or slander, truth isn’t a defense against this form of invasion of privacy.

False light specifically refers to the tort of defamation. Communication of false statements or information that harms the reputation of an individual person, business, product, group, government, religion, or nation falls within this definition.

Appropriation of name or likeness prevents—often at a state level—the use of a person’s name or image, without consent, for the commercial benefit of another person. This protects a person’s name from commercialization in a similar fashion to how a trademark action protects a trademark.

Modern tort law extends beyond the protection of the individual. However, there’s one grey area: how information is shared. GDPR directly addresses the need to protect personal information, outside the borders of a country, for the safety of its citizens.

The threat is here

There were 1,579 data breaches and over 179 million records exposed in 2017 according to the Identity Theft Resource Center’s 2017 year-end report—a dramatic 44.7 percent increase over 2016 data breaches. The breaches and records lost were spread across industries:

  • Banking: 134 breaches, 3.1 million records
  • Business: 870 breaches, 163 million records
  • Education: 127 breaches, 1.4 million records
  • Government: 74 breaches, 6 million records
  • Healthcare: 374 breaches, 5 million records

The threat to citizens’ privacy isn’t coming. This threat has already arrived.

GDPR policy in a data-driven world

Building on the original 1995 Directive, GDPR establishes key principles that govern data usage, storage, and dissemination. The Regulation expands four core areas:

  1. Territorial scope: this extends the jurisdiction of GDPR to all companies processing the personal data of subjects residing in the EU
  2. Penalties: an organization can be fined up to 4 percent of annual global turnover or €20 million (whichever is greater)
  3. Consent: long, complex terms and conditions and data requests must be intelligible
  4. Data-subject rights: breach notification, right to access, right to be forgotten, data portability, privacy by design, and data-protection officers (DPOs) have been clarified, often increasing the scope of GDPR

Territorial scope states that if the data includes subjects from the EU, the company must comply with the Regulation. This area also clarifies that personal data handled by controllers or processors is covered regardless of whether the data processing happens in the EU. If EU personal data is touched, your organization is impacted. The penalties are severe, and companies are taking notice. In addition to the 4 percent penalty, there’s a tiered approach that fines companies 2 percent for not having their records in order (EU article 28). Additionally, failing to fully and promptly notify the supervising authority of a data breach will be costly. It’s interesting to note that the “controllers and processors” language makes clear that cloud and SaaS providers aren’t exempt from GDPR enforcement. Consent, although previously technically available, was often buried within unintelligible terms and conditions. Consent must now be requested in clear and plain language, and it must be as easy to withdraw as it is to grant.
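
For clarity on the penalty math, here’s a minimal sketch. The two tiers reflect the fine levels described above; the turnover figure is hypothetical:

```python
def max_gdpr_fine(annual_global_turnover_eur: float, tier: int = 1) -> float:
    """Upper bound on a GDPR fine under the Regulation's two tiers.

    Tier 1 (severe infringements): the greater of 4% of annual global
    turnover or EUR 20 million. Tier 2 (e.g., records not in order):
    the greater of 2% of turnover or EUR 10 million.
    """
    if tier == 1:
        return max(0.04 * annual_global_turnover_eur, 20_000_000)
    return max(0.02 * annual_global_turnover_eur, 10_000_000)

# A company with EUR 2 billion in turnover faces up to EUR 80 million.
print(max_gdpr_fine(2_000_000_000))  # 80000000.0
```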

The data-subject rights cover six areas in more depth:

  1. Breach notification: inform the supervising authority within 72 hours of the breach
  2. Right to access: notify individuals if their personal information is being processed and for what purpose
  3. Right to be forgotten: withdraw consent and erase all data traces (EU article 17)
  4. Data portability: provide data in common-use and machine-readable form
  5. Privacy by design: design data protections into systems—versus a system addition
  6. Data-protection officers: appointment of DPOs is mandatory for processing operations that require regular and systematic monitoring of data subjects
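
Data portability, in practice, often reduces to exporting a subject’s records in a common, machine-readable format. Here’s a minimal sketch; the record structure and field names are hypothetical:

```python
import json
from datetime import datetime, timezone

# Hypothetical subject record assembled from internal systems.
subject_record = {
    "subject_id": "12345",
    "name": "Jane Doe",
    "consents": [{"purpose": "marketing", "granted": False}],
    "orders": [{"id": "A-1", "date": "2018-01-15"}],
}

def export_portable(record: dict) -> str:
    """Return the subject's data in a common, machine-readable format."""
    payload = {
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "format": "JSON",
        "data": record,
    }
    return json.dumps(payload, indent=2)

print(export_portable(subject_record))
```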

Processing and using personal data

These onerous obligations replace the old Directive and apply to all twenty-eight Member States of the EU—from the UK to Estonia. GDPR encourages companies to re-examine organizational policies, standards, guidelines, procedures, and processes.

As your organization assesses GDPR impact, there are 10 questions to keep in mind:

  1. How does expanded territorial reach impact your customers, providers, and partners?
  2. Do you have sufficient DPOs in place with the appropriate programs?
  3. Are data accountability and privacy included in the business process and system design?
  4. Are the tasks of data processors defined into organizational roles with appropriate accountability and responsibilities?
  5. Has your organization revisited corporate policies and procedures while taking into consideration the broad-reaching scope of GDPR?
  6. Is consent to access the array of products, services, and interactions written in clear and plain language?
  7. Do customers understand how to clearly grant or withdraw consent?
  8. Have risk assessments been performed to quantify the economic and financial risk of non-compliance that could result in fines?
  9. Is the process for data-breach notification streamlined to ensure compliance within the 72-hour guideline?
  10. Does the organization have clear guidelines on the definition of a “serious” breach?

Companies have a lot to do before GDPR becomes effective on May 25, 2018. Stay on top of the latest GDPR developments by following the Article 29 Data Protection Working Party (WP29). This working group is an independent European Union Advisory Body on Data Protection and Privacy and includes representatives from each of the EU member states. Together, we can improve how big data is processed while limiting the financial risk to our organizations.

Breaking down artificial intelligence to form a starting point for adoption

To leverage, communicate and sell the power of artificial intelligence, we first must capture its essence.

Artificial intelligence will humanize recommendation engines, improve the accuracy of logistics engines, and represent a monumental change in the friendliness of chatbot engines. Learning new languages (Duolingo), finding new dinner plans (Replika), and making photography exciting again (Prisma) are how our business partners will be introduced to the potential of artificial intelligence.

How we plan for AI

If I asked you how to build a house, you’d have a series of steps in mind. When asked how to validate a company’s technology security perimeter, other action steps come immediately to the forefront. And when booking a vacation to Brazil, a clear approach to get you on the beach fast rushes to the mind.

We’re of course not talking about building houses, creating security resilience, or booking vacations. We’re talking about how to introduce business leaders, scientists and medical professionals to the power of artificial intelligence. So where do we start? What’s our first step?

Three steps toward AI enlightenment

We start with a framework for all intelligence agents. Artificial intelligence can be separated into two categories: (1) thought processes and reasoning and (2) behavior. Whether you lean more toward the mathematics and engineering side (rationalist) or closer to the human-centered approach (behavior), the heart of AI is trying to understand how we think.

The first step: Decide which of the four categories of artificial intelligence the enterprise will explore.

  1. Thinking humanly: systems that think like humans
  2. Acting humanly: systems that act like humans
  3. Thinking rationally: systems that think rationally
  4. Acting rationally: systems that act rationally

The second step: Determine the intent of our artificial intelligence initiative.

Thinking humanly (cognitive modeling) blends artificial intelligence with models—as in the case of neurophysiological experiments. Actual experiments in the cognitive sciences depend on human or animal observations and investigations. Acting humanly (Turing Test) attempts to establish a line between non-intelligence and satisfactory intelligence. Thinking rationally captures “right thinking” in computer language. Coding logic is fraught with challenges, since informal knowledge doesn’t translate well to formal notation. Acting rationally is about action. Agents perform acts, and “rational agents” can autonomously maneuver, adapt to change and evolve (learned intelligence).

The third step: Identify the capabilities required.

Thinking humanly capabilities:

  1. Observation
  2. Matching human behavior
  3. Reasoning approach to solving problems
  4. Solve problems
  5. Computer models to simulate the human mind

Acting humanly capabilities:

  1. Natural language processing
  2. Automated reasoning
  3. Machine learning
  4. Knowledge representation
  5. Computer vision and robotics

Thinking rationally capabilities:

  1. Codify thinking
  2. Pattern argument structures
  3. Codify facts and logic (knowledge)
  4. Solve problems in practice (not principle)
  5. Solve problems with logical notation

Acting rationally capabilities:

  1. Thought inferences
  2. Adapt to change (agents, chatbots)
  3. Analyze multiple correct outcomes
  4. Operate autonomously
  5. Create and pursue objectives
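
To make “acting rationally” tangible, here’s a minimal sketch of an agent loop: perceive, decide, act, adapt. The actions, utilities, and congestion sensor are all hypothetical stand-ins:

```python
import random

# Hypothetical actions and a toy utility model the agent maintains.
utility = {"wait": 0.2, "reroute": 0.6, "escalate": 0.5}

def perceive() -> float:
    """Stand-in for a sensor reading, e.g., current network congestion."""
    return random.random()

def decide(congestion: float) -> str:
    """Pick the action with the highest expected utility for this percept."""
    expected = {a: u * (congestion if a != "wait" else 1 - congestion)
                for a, u in utility.items()}
    return max(expected, key=expected.get)

for step in range(3):
    congestion = perceive()
    action = decide(congestion)
    print(f"step {step}: congestion={congestion:.2f} -> {action}")
    # Learned intelligence: nudge the chosen action's utility toward
    # the observed outcome (a crude feedback loop).
    utility[action] = 0.9 * utility[action] + 0.1 * random.random()
```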

Step beyond

Artificial intelligence, since the mid-1940s, has moved across the plane of learning from philosophy to control theory. The philosophy of logic and reason established the foundations of learning, language and rationality. Mathematics formally represented computations and probabilities. Psychology illuminated the phenomena of motion and psychophysics (experimental techniques). Linguistics studied morphology, syntax, phonetics and semantics. Neuroscience poked at the function of the nervous system and brain. Control theory combines the complexities of dynamic systems and how behavior is modified by feedback.

Navigation, neural networks, gene expression, climate modeling and production theory all stem from control systems engineering.

It’s easy to become tangled up in the possibilities of artificial intelligence. First, we must decide which of the four categories of artificial intelligence we will explore. Second, we must determine the intent of our artificial intelligence initiative. Third, we must identify the capabilities required. In sum: Start with a plan and clarify your first three steps for your organization to realize the potential of artificial intelligence.

Peter B. Nichol empowers organizations to think different for different results. You can follow Peter on Twitter or his personal blog Leaders Need Pancakes or CIO.com. Peter can be reached at pnichol [dot] spamarrest.com.

Peter is the author of Learning Intelligence: Expand Thinking. Absorb Alternative. Unlock Possibilities (2017), which Marshall Goldsmith, author of the New York Times No. 1 bestseller Triggers, calls “a must-read for any leader wanting to compete in the innovation-powered landscape of today.”

Peter also authored The Power of Blockchain for Healthcare: How Blockchain Will Ignite The Future of Healthcare (2017), the first book to explore the vast opportunities for blockchain to transform the patient experience.

BRMs: the CIO’s new change agent

The role of the business relationship manager can be summarized in one word: change. Begin by defining the BRM role.

The BRM role is evolving; we don’t know the end state. The need to stimulate, surface, and shape demand has never been stronger. Blockchain is starting to walk. AI is getting smarter. And consumer tech is almost friendly. The transformational nature of change is the realization of strategies, plans, and budgets. How we define the role of the change agent directly correlates to the pace of adoption. In this case, the change agent is the business relationship manager.

Footprints of tomorrow’s future

The Business Relationship Management Professional (BRMP), offered by the Business Relationship Management Institute (BRM Institute), is the foundational certification for business relationship managers. The Business Relationship Management Institute was founded in 2013 and, in 2014, published the first edition of the BRMP Guide to the BRM Body of Knowledge, aka the BRMBOK. The first BRMP training for certification was also held in 2014.

For many of us, this recalls the journey of the Project Management Professional (PMP), offered by the Project Management Institute (PMI), now the gold standard for project managers. PMI published the first edition of the Project Management Body of Knowledge (PMBOK) Guide in 1996. (Yes, we know the first PMP was awarded in 1984.) If your organization was an innovative leader and had PMPs minted just five years after the first formal PMBOK was published, they’d have earned their certification in 2001. In parallel, if the BRMs in your organization are equally innovative, they’ll be BRMP-certified by 2019. Consider your past hiring experience. How many PMPs have you met who were certified in 2001 or earlier? It’s a rare find. BRMP is following a similar trajectory. Change agents see the parallel and are educating themselves and their teams.

Defining the role

Everyone wants to be ‘that’ change agent. The BRM leading these discussions is closely aligned with providers and business partners and drives the strategic agenda. To achieve success, the BRM must have accountability. Success originates in the organization’s formal BRM roles and responsibilities.

The BRM Institute defines the BRM role as one that “stimulates, surfaces and shapes business demand for a provider’s products and services and ensures that the potential business value from those products and services is captured, optimized and communicated.” The role is further elaborated into three types of BRMs: connectors (influencing), orchestrators (coordinating), and navigators (facilitating).

You’re likely thinking, “Isn’t this akin to an IT liaison?” Not exactly. This role is much more strategic, and BRMs are empowered by leadership to provide recommendations.

Two areas broadly shape BRM roles: the “House of the BRM” and BRM Competencies.

The House of the BRM has four pillars to support the execution of the role:

  1. Demand shaping: stimulates, surfaces, and shapes business demand
  2. Exploring: identifies and rationalizes demand
  3. Servicing: proactively identifies services and service levels to manage business-partner expectations
  4. Value harvesting: influencing for full value realization

BRM competencies define the skills, traits, and behavior of successful individuals in the role:

  1. Strategic partnering: building credibility and partnerships
  2. Business IQ: growing knowledge and understanding of the business partner
  3. Portfolio management: value realization from products, services, interactions, assets, and capabilities
  4. Provider domain: optimization of service management
  5. Powerful communications: conveying intention for mutual understanding of risk and reward
  6. Business transition management: managing process improvements and enabling new business capabilities

20 responsibilities for BRMs

Whether we’re talking about project management, architecture, or human resources, how a role is defined varies widely by organization. The following are twenty responsibilities we’ve found to be critical to BRM success:

  1. Ensure that solutions and services deliver expected business value.
  2. Partner in provider leadership.
  3. Identify and translate business partner needs into strategic roadmaps and executable portfolios of activities.
  4. Define business needs and priorities to inform the strategy for delivering systems capabilities within the business-partner organization.
  5. Translate business needs into effective and improved processes and/or technical solutions or services by coordinating resources from the associated IT Department.
  6. Stimulate, surface, and shape IT demand from business-partner stakeholders, and identify, prioritize, and rationalize demand for business-partner alignment.
  7. Understand the processes, plans, objectives, drivers, and issues related to the business area together with appropriate external policies and regulations.
  8. Contribute to the systems aspects of business-strategy development, surfacing business opportunities through technology and business knowledge.
  9. Keep abreast of technology trends and applicability to business partners.
  10. Participate in industry peer groups to understand industry trends.
  11. Develop strategic roadmaps for information technology systems that align to business-capability enablement or improvement.
  12. Expand adoption of existing technology, where appropriate, to leverage enterprise solutions that meet or exceed business-partner demands.
  13. Develop and socialize realistic IT roadmaps for business partners.
  14. Elaborate business cases and define the realization of business-partner value.
  15. Co-own the business processes in collaboration with business partners, and mature the business process models using industry standards to identify changes: political, economic, social, technological, legal, and environmental.
  16. Play the role of the business-area function representative and subject-matter advisor when required.
  17. Lead and secure adoption of continuous improvement efforts (e.g., Six Sigma, Lean, Kaizen) to transform business partners’ capabilities for fitness-for-purpose and fitness-for-use.
  18. Confirm Service Level Agreements (SLAs) with the business function and ensure that agreed services are being delivered to requirements; analyze and monitor the SLA impact of service changes.
  19. Manage business-partner compliments and complaints to enable continuous improvement.
  20. Proactively advise on technology options and innovation for the business area function.

Why shared ownership matters

Achieving business transformation depends on culture. BRMs shift organizational mindsets from “doing the job” to “achieving the results.” The team dynamics and interactions are the same—the results aren’t. Doing the job isn’t the same as achieving the result.

Organizational alignment demands job clarity, which requires shared ownership to execute effectively. BRM success is linked to organizational BRM role definition. Remove the guesswork for your employees and properly define the BRM role to integrate seamlessly into your company’s culture.

Make change easy to understand. BRMs are the business partner’s primary change agent. And the change begins with you.

Why high-performance business relationship managers embrace learning intelligence

The key to building successful business relationship management practices is hiring for learning intelligence.

Are you racing to figure out how this year will be different from last year? How will you deliver more value with less? Where will your teams, departments and organizations discover hidden value? Hint: this value didn’t suddenly appear last December.

As we race into a new year, optimism is high and pragmatism trails. The sexy, but untested, ideas always promise new and better outcomes. We could take that route, as many CIOs have in the past. Sometimes that strategy plays out well; yet often, outcomes prove less than ideal. There may be an alternative approach. That approach is to hire and measure learning intelligence.

Embracing the BRM movement

You’ve undoubtedly heard that business relationship managers stimulate, surface, and shape demand. That’s true—they do. All BRMs do this to a degree, some better than others.

Stimulating demand generates interest and encourages engagement. Surfacing demand helps to identify demand that might not be needed today but will be tomorrow. Shaping demand adds the structure and forms required for value realization later. There’s one critical element that’s implicit in each of these foundational steps: learning intelligence.

Effective BRMs must continue to learn. Learning agility—the speed at which BRMs can flex, adapt, and acquire knowledge through experience, study, or self-teaching—is the single biggest factor in determining BRM performance.

Let’s assume your business area is heavy into SharePoint. You have an open requisition for a new business relationship manager. It’s logical and practical to ensure that the new business relationship manager has strong SharePoint experience. I agree. I’d also ask: how long will that experience last? Are we good for six months, one year, maybe two? After the technology is less important and newer advancements have taken its place, how will discussions between providers and business partners stay productive? They won’t.

Innovative ideas must continually be introduced. In Learning Intelligence, I outlined The Four Dimensions of Learning Intelligence:

  • Self-reflection: this capability is developed through introspection and reflection on your actions, motives, and learning behaviors.
  • Self-adaptation: this capability is matured by absorbing traits that allow for flexibility.
  • Learning experience: the identification of how you learn best.
  • Clustering: capturing data, information, and knowledge to form wisdom.

Hire not based on what was done yesterday; hire based on what will be created tomorrow.

  • Think about the individual’s ability to learn.
  • Discover the potential steps to define their ability to learn.
  • Immerse them in an environment to learn.
  • Capture data, information, knowledge, and wisdom for them to learn.

Answer questions before hiring

Onboarding new team members is an exciting time. The new hire is full of energy in anticipation of a new assignment, and the organization is eager to have the role add value.

Onboarding isn’t the time to define the BRM role within your organization. Ask these questions first.

  1. Is the role tactical or strategic?
  2. Do we need heavier provider experience or business partner experience?
  3. Will the role serve as the primary point of contact for the business partner with the provider organization?
  4. To whom does the role functionally report?
  5. To whom does the role operationally report?
  6. Will the BRM co-own the business partner’s processes?
  7. How are business capabilities determined, assessed, and measured?
  8. What is the role of the BRM in provider or business partner decision making?
  9. How will multiple BRMs rank business-partner priorities organizationally?
  10. What’s the BRM role in vendor management and SLA adherence?
  11. Is the role just a liaison between the provider and business partner organizations, or is it more?
  12. What does ‘strategic roadmap’ mean within your organization?

Value realization is attainable when the roles and responsibilities are defined with the necessary organizational buy-in. Start this process before the new hire is wondering where the coffee machine is located.

The purple squirrel with pink socks

A red, gray, or black squirrel you may have seen. Purple squirrels are rare. We need a BRM with specific provider experience, e.g., infrastructure or operations. However, we need to add pink socks to this squirrel, because the BRM must also understand the business-partner side of the equation, e.g., biotech or energy.  Often, once you spot a purple one, you think, “Yes, we’ve found it,” only to discover the squirrel has gone to a competitor that moved faster in the hiring process.

The formal business relationship manager title is optional. Formalities, however, do help solidify the role when communicating it to business partners. Lock in talent with formal role descriptions. There are many BRM models. Similar to organizational models, they work successfully when given the right organizational environment. Business relationship management is a discipline, an organizational role, and a model for organizational change. Be intentional in the capability your organization desires to mature.

What business relationship managers know today improves effectiveness, but it’s what they must learn tomorrow that will distinguish the most impactful BRMs.

Why swarm intelligence enhances business and Bitcoin

Can intelligence be amplified by thinking together? Ants do it. Birds do it. What about humans? Collective intelligence is the next wave of intelligence. Swarm intelligence connects systems with real-time feedback loops. Individual efforts combine to form a greater value.

Fish school. Birds flock. Bees swarm. A combination of real-time biological systems blends knowledge, wisdom, opinions and intuition to unify intelligence. There’s no central control unit. These simple agents interact locally, within their environment, and new behaviors emerge.

Swarm intelligence is the self-organization of systems for collective decentralized behavior. Swarm intelligence enables groups to converge and create an independent organism that can do things that individuals can’t do on their own.

Why can’t humans swarm? Fish detect ripples in the water. Birds use motion detected through the flock. Ants leverage chemical traces. Until recently, there’s been little research conducted on “human swarming.” If nature can work together, why can’t humans use similar decision spaces to arrive at a preferred solution? Will the next generation of breakthrough innovation stem from the wisdom of the crowd — swarm intelligence?

Whether we’re talking about nature, humans or robots, swarm intelligence creates a virtual platform to enable distributed engagement from system users. Through this engagement, feedback can be provided in a closed-loop, swarming process.

Individual force for unified objectives

Swarm intelligence draws from biologically inspired algorithms to enhance robotics and mechatronics. Evolutionary optimization is more than ant-colony optimization algorithms (ACO), bee-colony optimization algorithms (BCO) or particle-swarm optimization (PSO). Swarm intelligence can be applied to immune systems, computer vision, navigation, mapping, image processing, artificial neural networks and robotic motion planning.
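
To show what one of these algorithms looks like in code, here’s a minimal particle-swarm-optimization sketch in Python. It’s a bare-bones illustration, not a production implementation; the inertia and attraction weights are conventional defaults:

```python
import random

def pso(objective, dim=2, n_particles=20, iters=100):
    """Minimal PSO: particles are pulled toward their own best position
    (cognitive term) and the swarm's best position (social term)."""
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social weights
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    gbest = min(zip(pbest_val, pbest))[1][:]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(gbest):
                    gbest = pos[i][:]
    return gbest

# The swarm converges near the sphere function's minimum at the origin.
print(pso(lambda p: sum(x * x for x in p)))
```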

Bio-inspired systems bring new intelligence to the design of robotics and are used in aerial flying robots, robotic manipulators and underwater vehicles.

The physical, biological and digital worlds benefit immensely by learning from nature. These bio-inspired applications are creating swarm algorithms empowering a newly discovered digital autonomy.

Ants and distributed systems

Technology-based distributed systems are collections of independent computers that appear to work as a unified, coherent system. This same effect is found in swarms. The common element is that control is distributed across individuals or entities and communication isn’t localized.

Why is Bitcoin so fascinating to us? Could it be that the Bitcoin network is a self-organizing, collective intelligence similar to that mesmerizing school of fish?

The collective intelligence, or COIN, framework was first introduced in a paper published in 2000 by John Lawson and David Wolpert of NASA’s Ames Research Center.

This framework helped identify — using similar system attributes — where collective intelligence might exist.

  1. Multi-agent system.
  2. No central operator.
  3. No centralized communication.
  4. Unified utility function.
  5. Agents run reinforcement learning algorithms for validation.

Bitcoin is a large version of a multi-agent, reinforcement learning system. The same challenge injected into swarms is inherent in Bitcoin: How are rewards to individuals, agents or entities assigned? The social aspects of swarms are both simple and complex. Group behavior emerges as more significant than individual actions — complexity out of simplicity.

Swarms can solve more than just static problems. Units interact in localized ways and can solve online, offline, stationary, time-varying, centralized, distributed and dynamic problems.

How does a swarm live? How does a swarm communicate? A unique “life” takes shape when a swarm forms, and it has everything to do with spatial intelligence. When observing swarms, we start to notice certain principles:

  1. Work division
  2. Collective behaviors
  3. Navigation
  4. Communication
  5. Self-organization

Social survival

Dinosaurs weren’t social. Ants are social, and they have outlasted dinosaurs and are able to survive in a range of environments and climates. How do ants build their nests? How do ants navigate? Why can ants locate food fast? There’s a one-word explanation: sociality.

The key to human survival isn’t having sophisticated intelligent robots that will floss your teeth while you’re in the shower. The secret is sociality. We must build social systems when we design intelligent systems. There are many examples of nature’s social systems we can draw from:

  • An implausibility of wildebeest: They move through rivers in sheer numbers to avoid crocodiles.
  • A rabble of butterflies: Monarch butterflies migrate to escape the cold North American winters.
  • A rookery of penguins: Emperor penguins converge in a huddle to stay protected from the Antarctic winters.
  • A business of mayflies: They use swarms of 8,000 to attack predators in volume.
  • A plague of locusts: They synchronize their wing beats to make travel more efficient.
  • A shoal of fish: Silver carp leap into the air as a unit to avoid predators.
  • A pod of dolphins: Superpods of dolphins, which can exceed 1,000 individuals, form a pod for protection and hunting.
  • A flight of birds: Budgerigars, a type of parakeet, assemble to act as a unit to make decisions, fend off predator attacks and find food.
  • A cloud of bats: A social vortex of bats forms for communication and to make decisions on foraging.

Nature’s progression and technology’s evolution are amplified with social systems. The end of social abnormalities may be the introduction of swarm intelligence.

Is there a better way to build super-intelligence?

Let’s collect lessons from nature, insights from humans and the unified benefits of intelligent systems and create something smarter than ourselves. These intelligent systems — things smarter than ourselves — appear to think and act. The algorithms, robotics and systems are only a piece of the system we’ll create. Instead of creating and designing complete intelligence systems, maybe we should apply simple rules to form collections of behaviors or swarms.

These swarms could respond by connecting real-time human insights into more intelligent systems with morals, values, emotions and empathy. Swarm intelligence won’t be something you watch in a TED Talk. Swarm intelligence is going to be a feeling that transcends nature through a collision of the digital and physical worlds.

Tomorrow’s systems will be designed with swarm intelligence and spatial judgment.

What do cognitive science and swarm intelligence have in common?

The future of artificial intelligence is self-organizing software. Multi-agent coordination and stigmergy will be useful in our quest to discover dynamic environments with decentralized intelligence.

In every field, there’s a pioneer, a prototype, an individual or group that blazed the path forward to uncover previously hidden value. Observing the giants in artificial intelligence allows us to revisit the early instrumental concepts in the development and maturation of the field. Biological principles are the roots of swarm intelligence, and self-organizing collective behavior is its organizing principle. A better understanding of these foundational principles will help you accelerate the development of your business applications.

The movers and shakers of artificial intelligence

Four pioneers shaped artificial intelligence as we know it today.

Allen Newell was a researcher in computer science and cognitive psychology at the RAND Corporation and Carnegie Mellon University’s School of Computer Science. His primary contributions to information processing, in collaboration with Herbert A. Simon, were the development of the two early A.I. programs: the Logic Theory Machine (1956) and the General Problem Solver (1957).

Herb Simon was an economist, sociologist, psychologist and computer scientist with specialties in cognitive psychology and cognitive science, among many other fields. He coined the terms bounded rationality and satisficing. Bounded rationality is the idea that when individuals make decisions, their rationality is limited by the tractability of the decision problem, the cognitive limitations of their minds and the time available to make the decision. Satisficing (as opposed to maximizing or optimizing) is a decision-making strategy or cognitive heuristic that entails searching through the available alternatives until an acceptability threshold is met. Simon also proposed the concept of the preferential attachment process, in which, typically, some form of wealth or credit is distributed among individuals or objects according to how much they already have, so that those who are already wealthy receive more than those who are not.
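
Preferential attachment is easy to see in simulation. The minimal sketch below gives five hypothetical individuals a thousand rounds of credit allocation:

```python
import random

# Each individual starts with equal credit; every new unit is awarded
# with probability proportional to current holdings, so early leads compound.
wealth = [1, 1, 1, 1, 1]
for _ in range(1000):
    winner = random.choices(range(len(wealth)), weights=wealth)[0]
    wealth[winner] += 1

print(sorted(wealth, reverse=True))  # typically one or two runaway leaders
```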

John McCarthy was a computer scientist and cognitive scientist who coined the term artificial intelligence. He developed the LISP programming-language family, which heavily influenced ALGOL, an early family of programming languages developed in the late 1950s, and he emphasized the value of timesharing. Timesharing today is more commonly known as multiprogramming or multitasking, where multiple users share computing resources. McCarthy envisioned this interaction in the 1950s, which is nothing short of unbelievable.

Marvin Minsky, a cognitive scientist, was the co-founder of MIT’s artificial intelligence laboratory. In 1963, Minsky invented the head-mounted graphical display that’s widely used today by aviators, gamers, engineers and doctors. He also invented the confocal microscope, an early version of the modern laser scanning microscope.

Together these framers laid the foundation for artificial intelligence as we know it today.

The design for mass collaboration

Do we understand collaboration? Thanks to Kurt Lewin and his research on group dynamics, we understand how groups interact much better than we thought. I ask again, do we understand group interactions? Is there an ideal group size? What’s the best balance of independence? Is the group interaction better or worse when we design in patterns for group activities?

We have defined paradigms of productive and unproductive group interactions. Our challenge comes from the fact that these models don’t scale. It’s also the reason the suggested agile team size is seven people, plus or minus two. As group size increases, so does the complexity of the lines of communication. A team of six people has 15 lines of communication, a team of seven has 21, and a team of nine has 36 [a group of n members produces n(n-1)/2 lines of communication]. Yet in spite of this communication complexity, ant colonies reaching 306 million workers interact fine, as does a mayfly swarm of 8,000 flies. Both groups organize around common goals.
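
The formula is simple enough to verify in a few lines of Python:

```python
def communication_lines(n: int) -> int:
    """Pairwise lines of communication in a group of n members: n(n-1)/2."""
    return n * (n - 1) // 2

for size in (6, 7, 9):
    print(size, "members ->", communication_lines(size), "lines")
# 6 -> 15, 7 -> 21, 9 -> 36, matching the figures above
```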

How is this possible if the lines-of-communication principle is absolute? Simply put, it’s not absolute. We can change the lines of communication by adjusting how the group interacts. The same concept can be applied to swarms of drones and self-organizing software. The limit that logically prevents us from adding agents — communication complexity — is defined by our communication systems, and we as innovators can redesign those systems.

Psychologist Norman Triplett concluded that bicyclists performed better when riding with others. He found a similar result in the study of children: pairs performed better than solo actors.

Lewin, Lippitt and White later studied how the behavior of young boys (10 to 11 years old) changed when an adult male joined the group and adopted one of three leadership styles, which the authors named autocratic, democratic and laissez-faire. The results were surprising. Under the autocratic style, the boys worked only while the leader was watching. Under the democratic style, they kept working even when the leader wasn’t present with the team. The laissez-faire style proved least effective. Does democratic mass collaboration result when the leader is absent?

Group dynamics of biology and computer science

Sociometry is the quantitative study and measurement of relationships within a group of people. Does sociometry apply to swarm interactions?

A swarm is simply a group, right? What if we could design intelligent systems that optimize learning? These systems wouldn’t only exemplify stigmergic environmental properties; they would also build on the properties of traditional group dynamics. If you’re in the gym and notice people staring at you, you bike a little harder, run a little faster or lift a little more. What if we could design artificial intelligence systems intelligent enough to embrace these same feelings? Sure, we’re talking less about feelings and more about procedures or rules applied in context, but the term “feelings” sounds better to me.

Collective behaviors contribute to solving a variety of complex tasks. Four principles are found in insects that organize collectively, and they should also be found in the artificial intelligence systems we create (a toy simulation of collective deliberation follows this list):

  1. Coordination: organizing using time and space to solve a specific problem.
  2. Cooperation: agents or individuals achieve a task that couldn’t be done individually.
  3. Deliberation: mechanisms for choosing among options when a colony or team faces multiple opportunities.
  4. Collaboration: different activities performed simultaneously by individuals or groups.
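
Here is a minimal sketch of the deliberation principle, loosely modeled on Deneubourg’s double-bridge experiment with ants. The constants k and h and the reinforcement rule are illustrative assumptions, not a validated model:

```python
import random

random.seed(42)

# Two candidate paths with equal initial pheromone.
pheromone = {"A": 1.0, "B": 1.0}
k, h = 5.0, 2.0  # assumed attractiveness offset and nonlinearity

for _ in range(500):
    # Each agent prefers the path with more pheromone (deliberation).
    wa = (k + pheromone["A"]) ** h
    wb = (k + pheromone["B"]) ** h
    choice = "A" if random.random() < wa / (wa + wb) else "B"
    pheromone[choice] += 1.0  # the chosen path is reinforced

# One path accumulates most of the pheromone: the colony has "decided."
print(pheromone)
```

Notice that no agent talks to every other agent; the environment itself carries the signal, which is how the n(n-1)/2 limit gets sidestepped.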

Whether we’re adding blocks to a blockchain or changing the rights individuals have to shared content, the study of interactions might hold the key to unlocking the next generation of artificial intelligence. Before exploring the benefits of dynamic systems and chaos theory, we must apply the principles of artificial intelligence, mass collaboration and group dynamics to expand our knowledge of how systems self-organize.

Peter B. Nichol empowers organizations to think different for different results. You can follow Peter on Twitter or his personal blog Leaders Need Pancakes or CIO.com. Peter can be reached at pnichol [dot] spamarrest.com.

Peter is the author of Learning Intelligence: Expand Thinking. Absorb Alternative. Unlock Possibilities (2017), which Marshall Goldsmith, author of the New York Times No. 1 bestseller Triggers, calls “a must-read for any leader wanting to compete in the innovation-powered landscape of today.”

Peter also authored The Power of Blockchain for Healthcare: How Blockchain Will Ignite The Future of Healthcare (2017), the first book to explore the vast opportunities for blockchain to transform the patient experience.

When will science create ethical robots?

Is it possible for robots to have a bias? If robots attempt to imitate human behavior, isn’t it possible that they could make unethical choices — just as people can make unethical choices?

Your bed can’t move between rooms on its own at a wireless phone request. Your toaster can’t make a bottle of water. Your garage door doesn’t wash itself. The bed, the toaster and the garage door each perform a specific function well, the function we need: nothing more, nothing less.

But what if, on Monday, your bed sensed that you should be at the gym at 9 a.m. and vibrated to force you up? Then the toaster didn’t turn on because it decided you didn’t need the extra carbs in the bagel. It was helping you. And maybe you’d been traveling a lot, and the garage door knew the ergonomics of too much travel would be bad for your spine, so it didn’t open when you got in your car. Welcome to the intelligent world of smart A.I.

Can A.I. machines, agents and robots be too smart? Just because we could design a machine to be intelligent doesn’t mean that we should.

Robots attempt to imitate human behavior. Isn’t it logical, then, that if ethical people can make unethical choices, ethical robots could make unethical choices too?

The moral compass of machines

Humans have morality: guiding principles that help us distinguish right from wrong and good from bad behavior. The concept centers on ethics, the branch of philosophy that examines right and wrong moral behavior through ideas such as justice, virtue and duty.

When we think about our car, we might be interested in fuel economy. When we reflect on our health, topics like comfort and lifestyle come to mind. And when our thoughts migrate to nature, we may think about natural selection and survival of the fittest.

Pontificating on morality and virtue lands us quickly in the world of consequentialism, the doctrine that the morality of an action is to be judged solely by its consequences. Actions can have multiple, conflicting outcomes. If we as humans have trouble making these decisions, how are we going to program machines to make them? Utilitarianism could be one solution. We have more than one choice when deciding how to design machine intelligence:

  • Consequentialism: whether an act is morally right depends only on its consequences.
  • Actual consequentialism: moral rightness depends on the actual consequences, not the foreseen or intended ones.
  • Direct consequentialism: moral rightness depends on the consequences of the act itself, rather than of motives or rules.
  • Evaluative consequentialism: moral rightness depends on the value of the consequences.
  • Hedonism: the value of the consequences depends only on the pleasures and pains they produce.
  • Maximizing consequentialism: moral rightness depends on which consequences are best, not merely better than average.
  • Aggregative consequentialism: the value of the consequences is a function of the values of their parts.
  • Total consequentialism: moral rightness depends on the total or net good of the consequences.
  • Universal consequentialism: moral rightness depends on the consequences for all people involved.
  • Equal consideration: moral rightness weighs the consequences equally among the parties involved.
  • Agent-neutrality: moral rightness doesn’t depend on whether the consequences are evaluated from the perspective of the agent or an observer; every agent has the aim of maximizing utility.

Let’s just quickly program morality into the machine and get on our way. It turns out that programming morality is complex, even before we get to evaluating the outcomes experienced through machine intelligence or robotic involvement.
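
To see why, consider a toy maximizing-consequentialist agent. Everything here (the actions, probabilities and utility numbers) is hypothetical; assigning those utilities in the real world is exactly the part the philosophy above leaves unresolved:

```python
# Hypothetical actions, each with (probability, utility) outcome pairs.
actions = {
    "open_garage": [(0.9, 5.0), (0.1, -1.0)],
    "keep_closed": [(1.0, 2.0)],  # "protecting your spine," says the door
}

def expected_utility(outcomes):
    """Score an action by the expected value of its consequences."""
    return sum(p * u for p, u in outcomes)

# The maximizing-consequentialist choice: the best expected consequences.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)
```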

Linking machine intelligence to ethical philosophy

Roboethics, or robot ethics, concerns how we as human beings design, construct and interact with artificially intelligent beings. Roboethics can be loosely categorized into three main areas:

  1. Surveillance: the ability to sense, process and record; access; direct surveillance; sensors and processors; magnified capacity to observe; security, voyeurism and marketing.
  2. Access: new points of access; entrance into previously protected spaces; access information about space (physical, digital, virtual); objects in rooms, not files in a computer, e.g. micro-drones the size of a fly.
  3. Social: new social meaning from interactions with robots that implicate privacy flows; changing the sensation of being observed or evaluated.

Robots don’t understand embarrassment. They have no fear, they’re tireless and they have perfect memories. Designing robots that spy, whether from your back porch or your parked car, raises the question of how surveillance, access and social ethical considerations will be addressed as we further develop algorithms that assist humans.

We’ve heard about machine-intelligence agents enabling ubiquitous wireless access to charge our mobile phones autonomously. We’ve fantasized about eating pancakes in bed while robots serve us (or maybe that was just me). There have been many technological advances since George Orwell’s 1984 and its ramblings about the risk of visible drones patrolling cities. Or we could reject the Big Brother theory altogether and join Daniel Solove’s vision of an uncertain world in which we don’t know whether the information collected is helping or hurting us.

The First Amendment seems like a logical safeguard. But how do we balance excessive surveillance against progress without violating the First Amendment’s prohibition on interference with speech and assembly?

As we answer a question, three more rise to the surface.

Where is machine learning being used?

How much sensitivity do we design into machine-intelligent beings? How much feeling should we architect into an armed drone? Should the ethical boundaries change if we’re simply designing a robotic vacuum cleaner that can climb walls? Where do we draw the line between morality and objectives? You’d better cook my toast today. But tomorrow, I’m OK if the refrigerator locks shut because I’ve exceeded my caloric intake for the day.

Over the next 10 years, society, ethics and technology will collide over the integration of rights and moral divisions. Who designs the rules, processes and procedures for autonomous agents? That question remains unanswered.


Why neural networks and deep learning hold the secret to your health

Your daily habits could soon be interrupted by connected systems that enable access to new processing paradigms. Information-processing systems inspired by biological nervous systems may change your diagnosis.

We’re supposed to eat less, work out more and use less salt. These goals rarely materialize into a productive pattern. Our behavior doesn’t change. Even with the incredible amount of information available, we choose not to change.

Artificial neural networks (ANNs) have the ability to influence medical diagnoses and change our behavior. Change is about more than what you should or shouldn’t do. How we connect data and squeeze information out of it also affects our ability to change.

Artificial neural networks forecasting our health

Artificial neural networks have a wide range of uses in science and technology, with applications across chemistry, physics and biology. The simulation of neural networks has been used to enhance group tactics for playing soccer, fight crime, accelerate facial image processing and expand nanotechnology.

Artificial neural networks can address nonlinear problems by mapping multidimensional data sets into two-dimensional spaces. They “learn” from inputs and outputs as information flows through the network, and that flow of information changes the network’s structure, its weights. These networks evolve on their own.
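
To make that idea concrete, here is a minimal sketch of a network whose structure changes as information flows through it: a tiny two-layer network trained on XOR with plain NumPy. The architecture, learning rate and epoch count are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs and targets: the XOR function, a classic nonlinear problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The network's "structure": weights and biases for one hidden layer.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: information flows through the network.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the error signal adjusts the weights, which is
    # the sense in which information flow changes the network's structure.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```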

Classifications of neural networks

Connecting things adds value. The TV doesn’t do much good without power. It’s helpful to know your iPhone data is backed up with cloud storage. Spending money can get you fed at a restaurant.

Usually, we think of combining physical things to create more value. However, the greatest value gains have nothing to do with physical objects; they have everything to do with combining data to form new information. This information has value and forms the utility of artificial neural networks. They combine data into information we’d otherwise never have created.

The key to understanding artificial neural networks begins with identifying the type of network we’re talking about. There are four main classifications of neural networks, within a field where more than 50 types exist:

  1. Dynamic neural network: networks whose structure or behavior changes over time; their connections may or may not form cycles.
  2. Static neural network: networks with no context memory.
  3. Memory network: networks with context memory.
  4. Other types of networks: networks that operate on neuronal states (mathematical functions) and synaptic states (the links between neurons), with the added feature that they incorporate the concept of time into their operating model.

The challenges facing scientists

Scientists and technologists have long had an interest in neural networks. Cognitive science, parallel processing, control theory, neurophysiology, physics, artificial intelligence and computer science all must merge to form the base of knowledge to design, construct and implement artificial neural networks. This field challenges scientists to address the following problems:

  1. Pattern classification: divide items into categories, then identify the patterns that distinguish them.
  2. Clustering and categorization: unsupervised pattern classification, with no training data, that groups items by similarity.
  3. Function approximation: estimate an unknown function or value from sample data, using various engineering and modeling techniques.
  4. Prediction and forecasting: use a time-series data set to predict future samples, supporting decisions in business or science.
  5. Optimization: identify a solution given a set of constraints, for problems in science, medicine or economics.
  6. Content-addressable memory: recall stored content from partial or noisy input rather than from a memory address (see the sketch after this list).
  7. Control: generate a control input so the system follows a desired trajectory based on a reference model.
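
As promised above, here is a minimal sketch of content-addressable memory: a toy Hopfield-style network that stores a single pattern with a Hebbian rule and recalls it from a partially corrupted probe. The pattern and the corruption level are arbitrary:

```python
import numpy as np

# One stored memory, encoded as a +1/-1 pattern.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])

# Hebbian storage: outer product of the pattern with itself, zero diagonal.
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0)

# Corrupt part of the input, then recall by content, not by address.
probe = pattern.copy()
probe[:3] = -probe[:3]

state = probe
for _ in range(5):  # iterate updates until the state stabilizes
    state = np.where(W @ state >= 0, 1, -1)

print(np.array_equal(state, pattern))  # True: the full memory is recalled
```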

Let’s make these problems more practical. Pattern classification could be used to identify abnormal EEG waveforms, or for character recognition, speech recognition or blood-cell classification. Clustering and categorization could identify high- versus low-risk populations based on blood or DNA samples. Function approximation may help a patient decide between surgery and nonsurgical options (decision support). Prediction and forecasting, using nonlinear regression techniques, could aid new drug discovery, identify regenerative-medicine candidates or determine what effect that shake of salt will have on your lifespan.

Optimization can be used to minimize an objective function, e.g. swarm intelligence or robots working as a team toward uniform interaction or movement. Content-addressable memory could be used for vision and pattern recognition, combined with a learning algorithm to identify how viruses may mutate, helping the early discovery of cures. Control could correct motion-control problems in industrial applications, vehicles and surgical robots.

Connectionist models of survival

These models are especially applicable when predicting survival after the diagnosis of a rare disease. Would it matter if you knew you had 20 years left and not two months? It would matter to me.

It’s science that transcends from the cliff of innovation to the plateau of practicality.

Using artificial neural networks combined with machine learning, survival analysis in medicine can be calculated. Diagnosis, treatment-response forecasting and outcome prediction are just a few of the capabilities neural networks have demonstrated. Artificial neural networks may soon help change your behavior.
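
What might the simplest connectionist survival model look like? Below is a minimal sketch: logistic regression (effectively a one-neuron network) trained by gradient descent on entirely made-up patient features to estimate a survival probability. Nothing here reflects real clinical data or a validated model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up patient features (say, age, a biomarker, a treatment flag) and
# made-up survival labels generated from a hidden linear rule.
X = rng.normal(size=(200, 3))
true_w = np.array([-1.5, 0.8, 1.2])
y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)

# Logistic regression: a single "neuron" trained by gradient descent.
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)    # log-loss gradient step

new_patient = np.array([0.2, 1.0, 1.0])
prob = 1.0 / (1.0 + np.exp(-(new_patient @ w)))
print(f"estimated survival probability: {prob:.2f}")
```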
