BRM Institute Recognizes Peter Nichol as a 2020 Top Business Relationship Manager (BRM)

Peter B. Nichol was recognized as a 2020 Top BRM on February 9, 2020 by the BRM Institute.

Each year’s Top BRM list is revealed during #BRMWeek in February. BRM Institute’s global BRM community recognizes the top BRMs who have achieved success through their BRM efforts, strengthened the global BRM community and BRM discipline, enriched lives through excellence in BRM within their organizations, and/or contributed to the community on a local, national, and global level.

The 2020 Top BRM awards were evaluated based on the following major criteria:

Overall Impact:

  • Explain the impact the BRM has had on others around the globe.
  • Share the outstanding accomplishments the BRM has delivered.
  • Highlight the amazing organizational accomplishments the BRM has delivered, including notable contributions, improvements, discoveries, and how they have demonstrated the BRM Code of Conduct.

Leading with Purpose:

  • Bring more personal purpose to the workplace.
  • Identify the convergence of personal and organizational purpose that leads to happier individuals, stronger relationships, and durable communities.
  • Demonstrate how the BRM satisfied their personal or organizational purpose through their work.

Delivering Value:

  • Articulate the value delivered by the BRM engagement.
  • Quantify the value realized through BRM organizational empowerment.
  • Explain the impact of the value delivered through the BRM’s efforts.

Peter Nichol is a highly respected BRM serving as the Director of Research and Development IT Portfolio Management at Regeneron Pharmaceuticals, a biotechnology company with a market capitalization of $40 billion.

As a BRMP®, CBRM®, MBRM®, and organizational change leader, he has fully embraced the BRM Institute’s approach for business partner value realization.

Peter led a department-wide initiative to enable the Resource and Demand Management capability, aligning demand with capacity. Previously, the Research and Development IT department of 404 employees and contractors had no way to match incoming work from business partners (demand) to the provider’s ability to deliver (capacity). This inability to anticipate, predict, and model demand created enormous resourcing and staffing problems throughout the year as the department enabled science-to-medicine across twenty-three scientific areas.

With strong support from a visionary executive team, Peter partnered with and educated six departmental BRMs, twenty-nine program and project managers, and eight functional managers, who together enabled the science of getting critical medicine to patients.

___

“Your efforts are solidifying the success and adding to the credibility of BRM as a discipline on a global scale. Being recognized as a top BRM recognizes your impact on people, BRMs, and organizations everywhere. Thank you for making a difference!” — Aaron Barnes, CEO BRM Institute

___

“As a Master BRM, I take enormous pride in shaping the future of the BRM discipline. It’s a rare experience when you’re arm-in-arm with fellow BRM practitioners collaborating to design the foundation of relationship management to be used for generations to come. I would be remiss if I didn’t send out a hearty thank you to my amazing team of leaders that day-over-day are enabling science through technology. Success is always shared. My organization’s executive leadership team and fellow BRMs have always been supportive and open to ideas that position us better for tomorrow but require change today. It’s amazing being part of #OneTeam. A big shout out to all the global leaders who have contributed to advancing the BRM profession this year! I look forward to seeing you all at the BRMConnect in Boston, Massachusetts and the BRMConnect in Amsterdam, Netherlands in 2020!” — Peter B. Nichol

References

BRM Institute. (2020). 2020 Top BRMs. https://brm.institute/2020-top-brms/

Measuring the value of data quality

Data intelligence, integrity, and integration are aspects of every CIO strategy in some fashion. It’s not possible to achieve data insights without first cleaning up your bad data. To do that, you must define the value of that data.

The effect that bad data has on our organization is personal. It’s time we put a value on data and, more specifically, a value on bad data. Let’s fix this together.

Wikibon, a community of open-source, advisory-sharing practitioners, estimates that the worldwide data market will grow at an amazing 11.4% CAGR between 2020 and 2027, reaching $103 billion by 2027. That’s the upside. The downside is that IBM estimates that bad data currently costs the US $3 trillion.

One trillion, three trillion, ten trillion—it all sounds like a lot. But do you understand the impact of $3 trillion? Likely not. What you do understand is the personal impact that bad-data issues have on your family. Yes, your family.

It’s 7 pm, and you’d told yourself you’d wrap up early for a change, but then that email comes in. It’s from your business partner, who just got around to opening the executive status report or those financials she finally had time to read through. Either way, a quick look at the data, and you already know it’s bad and can’t be real. So much for getting out early.

The real cost of bad data isn’t the cost to reacquire a data set that’s been canned or the cost to implement a new system to replace the old. Every executive knows the real cost is personal: time away from friends and family. This is true both for that person, who’s buffering a wave of irate executives, and for their team, which is triaging the root cause and adding weight to an already lopsided work-life balance.

The business impact of poor data

On the topic of data, the first thing that comes to mind is the dimensions of data: format, authority, credibility, relevance, readability, usefulness, reputation, involvement, completeness, consistency, timeliness, uniqueness, accuracy, validity, lineage, currency, precision, accessibility, and representation. These are valid and, at times, very important. They are not, however, the most important consideration. We need to frame the problem of bad data through the lens of our business partners.

Data should be listed as an asset or liability on the company balance sheet. Today, however, we don’t treat it as a hard asset, and we don’t depreciate purchased data. Until that happens, we need to look at the utility of the data and the value we expect to derive from it.

Let’s start by looking at the categories where poor data would inhibit business success. By viewing data errors within a classifying scheme, we begin to quantify the value of bad data. A simple approach is to use four general categories:

  1. Financial
  2. Confidence
  3. Productivity
  4. Risk

Financial primarily focuses on missed opportunities. This could be in the form of decreased revenues, reductions in cash flow, penalties, fines, or increased operating costs.

Confidence and satisfaction have an internal and external impact. Internal impacts on confidence would include employee engagement, decreased organizational trust, low forecasting confidence, or inconsistent operational or management reporting. External confidence addresses how customers or suppliers feel about delays or impacts to their business. Ultimately, bad data leads to bad or incorrect decisions.

Productivity impacts throughput. Poor data increases cycle time, which in turn drops product quality, limits throughput, and increases workloads. These are all aspects that involve outcomes.

Risk and compliance center on leakage. This leakage could be value leakage, investment leakage, competitive leakage, or, more seriously, data that directly affects a regulation, resulting in downstream financial and business impacts.

This framework helps to delineate between big-data issues that affect either “running the business” or “improving the business” and those issues that are inconvenient but largely insignificant.

Categorizing the impact

Before we tackle the business expectations for bad data and how poor data impacts our business, we need to create some subcategories to continue our assessment.

Using the four core categories above, let’s deconstruct these into more specific areas of focus:

Financial

  • Direct operating expense
  • Resource overhead
  • Fees
  • Revenue
  • Systems of production
  • Delivery and transport
  • Supplier services
  • Cost of goods sold
  • Demand management
  • Depreciation
  • Leakage
  • Capitalization

Confidence

  • Forecasting
  • Reporting
  • Decision-making
  • Satisfaction (internal and external)
  • Customer interaction
  • Supplier or collaborator trust

Productivity

  • Workloads
  • Throughput
  • Output quality
  • Staffing optimization
  • Asset optimization
  • Service-level alignment
  • Efficiency
  • Defects
  • Downstream

Risk

  • Financial
  • Legal
  • Market
  • Systems
  • Operational downtime
  • Reputation loss
  • Testing
  • Vulnerability remediation
  • Regulatory, compliance and audit
  • Security

Direct operating expenses involve direct labor and materials. Resource overhead accounts for additional staff to run the business such as recruiting or training personnel. Fees are penalties for mergers and acquisitions or other service charges. Revenue pulls in any missed customer opportunities like customer retention. The cost of goods sold is the standard product design and cost of inventory, etc. Depreciation covers inventory markdowns or decreases in property value. Leakage is mainly financial and involves fraud and collections; however, this can be extended to include value leakage such as the impact of not gaining the efficiency from a system that targets saving 250 team members 20% of their day through automation. Capitalization quantifies the value of equity.
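
To see how value leakage adds up, here is a back-of-the-envelope Python sketch of the automation example above. The blended rate and annual hours are illustrative assumptions, not figures from the original case.

    # Rough value-leakage estimate: 250 team members who each lose 20% of
    # their day because a planned automation never shipped.
    TEAM_SIZE = 250
    TIME_SAVED_PCT = 0.20      # 20% of each workday
    HOURS_PER_YEAR = 2080      # standard full-time hours (assumed)
    BLENDED_RATE = 75.0        # hypothetical $/hour blended rate

    annual_leakage = TEAM_SIZE * TIME_SAVED_PCT * HOURS_PER_YEAR * BLENDED_RATE
    print(f"Annual value leakage: ${annual_leakage:,.0f}")  # $7,800,000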

Forecasting captures the impact of bad decisions based on financial data, such as actuals vs. budget, or the resource-management cost of not proactively staffing. Decision-making is usually event-driven and links the time to make a decision with the quality of the decision. Satisfaction is largely the internal service relationship between providers and consumers; e.g., shadow IT, outsourcing, etc. Supplier or collaborator trust measures the maturity of the procurement process and the vendor’s confidence in the provider.

Workloads target an increase in work over a baseline. Throughput measures the volume of outputs, typically in cycles; for example, the time required to analyze a process or time taken to prepare data for input into a process. Output quality is about trust in published information; for example, is the data in the status report trusted by the leaders that receive it, or is that report dismissed due to mistrust? Efficiency looks at avoiding waste in people, processes, or technology. Defects highlights imperfections from the norm; this could be a process, system, or product. Downstream evaluates the next process in the chain and the delays experienced because of upstream data-quality issues.

Financial risk targets the bottom line in either hard or soft losses. Legal risk removes protections or increases exposure. Market risk could involve competitiveness or the loss of customer goodwill. System risk covers delays in deployments. Testing risk is the loss of functionality due to non-working components being released into production or released with defects. Regulatory and compliance risks often deal with reporting or, more importantly, the data and data quality that’s being officially reported. Security risk—a growing concern—addresses data that impacts internal customers (employees) or external customers (suppliers).

These aspects all align poor data quality with negative impacts on the business. Typically, organizations log quality issues in a tracking system. This is useful for quantifying the impact and for playing back the value of the data-management organization or the office of the chief data officer.
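
If your organization doesn’t already have such a tracking system, a minimal incident record is enough to start quantifying impact. Here’s a sketch assuming the four core categories above; the field names are hypothetical.

    # Minimal data-quality incident record, tagged with one of the four
    # core impact categories so costs can be rolled up later.
    from dataclasses import dataclass
    from datetime import date

    CATEGORIES = {"financial", "confidence", "productivity", "risk"}

    @dataclass
    class DataQualityIncident:
        opened: date
        category: str          # one of the four core categories
        subcategory: str       # e.g., "reporting", "throughput"
        description: str
        estimated_cost: float  # dollar impact, even if rough

        def __post_init__(self):
            if self.category not in CATEGORIES:
                raise ValueError(f"unknown category: {self.category}")

    incident = DataQualityIncident(
        opened=date(2020, 3, 2),
        category="confidence",
        subcategory="reporting",
        description="Executive status report showed stale financials",
        estimated_cost=12_000.0,
    )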

Ask questions

What makes poor-quality data a critical business problem? If the problems are merely inconvenient, no action needs to be taken. Use these questions to elicit ideas for quantifying the impact of bad data in your organization:

Financial

  • Which transformation efforts are on hold; e.g., data lake, analytics?
  • How much time is consumed by cleaning portfolio or financial data?
  • What decisions aren’t being made due to lack of vendor management benchmarking?

Confidence

  • How efficient is the vendor-onboarding cycle?
  • Is there organizational confidence in company-specific data; e.g., patients, genes, products, etc.?
  • How easily can suppliers’ or collaborators’ data be validated?

Productivity

  • Where can RPA be used to streamline data processing?
  • How many processes are repetitive and require minimal human intervention?
  • How do we remove waste from our process? What was good yesterday might not be good today.

Risk

  • Do events have lineage? If not, what’s the cost to compile that view on request per event?
  • What’s the cost of a single regulatory misfiling or violation?
  • How can an incorrect risk assessment put the company at risk?

Limitless categories and subcategories can quickly become all-encompassing. Start simple. Use the idea of an N-of-1 or one event. We might be talking about the history of a cell line in biotechnology or the history of changes to portfolio financials. For example, instead of estimating the cost of researching data lineage generally, consider one specific example. Collapse the steps into five core phases and put effort into each. Then identify the people involved at each step. From there, add in a blended rate and multiply the effort of each step by the blended rate. The product is the net cost for an N-of-1 event—the cost of a single event. This methodology is very powerful when addressing bad-data problems.
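
Here’s what that N-of-1 calculation might look like in Python. The five phases, effort estimates, and blended rate below are hypothetical placeholders for a lineage-research event.

    # N-of-1 costing: collapse one event into five core phases, estimate
    # the effort for each, and multiply by a blended rate.
    BLENDED_RATE = 85.0  # $/hour across the people involved (assumed)

    phases = {  # effort in hours per phase (illustrative)
        "identify source systems": 4,
        "trace transformations": 8,
        "interview data owners": 6,
        "reconcile discrepancies": 10,
        "document lineage": 4,
    }

    n_of_1_cost = sum(hours * BLENDED_RATE for hours in phases.values())
    print(f"Cost of a single lineage event: ${n_of_1_cost:,.0f}")  # $2,720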

The flip side

We understand the risk of poor data quality. It’s important to quantify the organizational financial impact of not correcting bad data. Putting standard data definitions or organizational data dictionaries in place builds consistent terminology. Implementing guidance on how data should be reported clarifies permitted values and exposes inaccurate data.

Similar to any other discipline, this process requires training and education. Take time to invest in your department and organization. Your team will thank you for it.

Quantifying the value of your enterprise data science initiative

Unlock the competitive potential of your data. Build a business plan for managing your data-as-an-asset. Put a value on your data.

Have you built the business case for data-as-an-asset in your organization? Most leaders still have this on their to-do list. It’s not one of those activities you can just crank out in an early-morning brainstorming session on Monday before your leadership team arrives.

Building the business case for data requires self-reflection and collaboration with fellow leaders who have already been there and done that—leaders who have found a way to articulate the value of data to their board. It requires communicating a simple, clear, and rational approach.

Data creates value for your company—or, at least, it should establish a foundation to create and capture value. By starting with the end in mind, you’ll have a better framework for communicating and sharing aspects of your data.

To properly analyze the value of enterprise data, information, knowledge, and wisdom, we need to build the business case for data. This business case has three dimensions:

  1. Cost of data
  2. Value of data
  3. Risk of data

Cost of data

Data is an asset, and it has a cost. Your house, car, boat, and your bigger dream boat are all assets. To better understand data science and organizational data enablement, we need to reframe how we envision data and how we relate to it.

Inside corporate America, we’ve all heard the phrase, “Spend company money like it’s your own.” The less common parallel of that is, “Maintain company data assets like they’re your own.” As we develop the business case for data science, we quickly hit upon what I call the foundation case for data, i.e., the cost of data. Five principles make up the total cost of data:

  1. Cost to acquire
  2. Cost to use and leverage
  3. Cost to replace
  4. Cost to maintain
  5. Cost of decisions

Similar to your new boat, data-as-an-asset isn’t cheap to acquire. With data, you generally have four acquisition options. First is collecting new data, and often this is the costliest. Second is converting or transforming legacy data. This isn’t a speedy process, but, if done correctly, it can yield useful results. Third is sharing or exchanging data. The sharing of data doesn’t only have to be with new collaborators or business partners. This very well could be accomplished by breaking down internal silos and opening up data sets to new internal partners. Purchasing data is the fourth option. If the desired data set is available, this can be the most economical option in many situations.

Before data is useful, it often needs to be manipulated. Data transformation helps to convert the data from one format to another. Typically, this other format is more useful for enterprise consumption. Reconfiguring the data to account for processes is important, as these workflows can transform or manipulate the data in ways that render it more valuable or useful. Quality control, validation, and the management of data can make the data more extensible across the enterprise and further aid in decision-making.

Often, critical data isn’t replaceable. However, some data the enterprise has acquired can be refreshed from the source. Data can be refreshed by reloading the data or purchasing an updated data set. Loss of data rights (patients revoke consent, for example), corruption of media (supplier impact), or data destruction (flood or another natural event) may serve as the driver to explore the cost of data replacement.

A boat needs its propeller and skeg checked for damage, grease points need to be lubricated, bolts must be retorqued, and the water-pump impeller replaced. Likewise, your data needs routine maintenance. The cost of data can include loading, storing, protecting, formatting, indexing, refreshing, and supporting the data over time. To retain the value of your data asset, it needs maintenance to prevent deterioration.

There’s also an impact or cost associated with decisions based on your enterprise data. What’s the cost to a hospital provider of using the wrong blood type for transfusion during a surgery? What’s the cost of exploring the wrong molecule for a biotechnology company? What’s the cost if a patient’s claim should have been approved, but it was denied by the health-insurance payer? Begin to quantify the types of decisions that are being made with data within your organization and the downstream cost of bad decisions.

Quantifying the cost to acquire, use and leverage, replace, maintain, and make decisions based on data establishes our foundational business case.

Value of data

We can place a value on your boat, and we can also put a value on your data. Establishing a value for your enterprise data, of course, is more complex. However, the same four principles apply:

  1. Time value
  2. Performance value
  3. Integration value
  4. Decision value

Data is most valuable when it’s created, after which it decreases in value over time. Unlike your boat, which gradually depreciates, the value of data can fall off a cliff. Consider the value of knowing the winner of a soccer match. Hours before the match, that data is hugely valuable. Yet one second after the game is over, its value drops to zero.
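
One way to picture this is a decay curve with a hard cliff. The sketch below is a toy model, assuming exponential decay and an arbitrary cliff; it isn’t a standard valuation formula.

    # Toy model of the time value of data: exponential decay from
    # creation, with an optional hard cliff (the soccer-match example).
    from typing import Optional

    def data_value(initial_value: float, age_hours: float,
                   half_life_hours: float = 24.0,
                   cliff_hours: Optional[float] = None) -> float:
        if cliff_hours is not None and age_hours >= cliff_hours:
            return 0.0  # one second after the final whistle
        return initial_value * 0.5 ** (age_hours / half_life_hours)

    print(data_value(1000.0, age_hours=2, cliff_hours=3))    # still valuable
    print(data_value(1000.0, age_hours=3.1, cliff_hours=3))  # 0.0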

People drive productivity. A primary business case for data is improved productivity—which means, essentially, making existing processes more efficient by optimizing them with data. It could be as simple as saving people time in the day, or as complex as shifting from low-quality, low-value work to high-quality, high-value work. One example is a biotechnology company saving scientists time: freed from mundane data entry, scientists have more opportunities to perform additional experiments.

By integrating our data, we can improve data relevance and applicability. Data integration allows us to pull from all relevant data sources and, from there, see overall trends among them. Helpdesk data can be rolled up from sites to regions to show spreading regional-usage patterns. A decrease in usage of critical business systems by field staff can be identified and cross-segmented by years on the job to better understand whether more experienced field staff have greater internal product adoption and usage. Also, data lineage is enhanced by combining data. Data lineage is the lifecycle that includes the data’s origins and where it moves over time.

The greatest value of data is its ability to be used to make insightful decisions. Everyone today wants to foster a data-driven culture, make data-driven decisions, and be perceived as a data-driven company. What’s your data worth?

Tableau, a data-visualization software company, was purchased by Salesforce for $15 billion in August 2019. CANVAS Technology, a robotics company focused on autonomous delivery of goods through AI, was purchased by Amazon for an undisclosed amount in April 2019. Data Artisans, a large-scale data-streaming company, was purchased by Alibaba for $103 million in January 2019. The data included in these acquisitions might not have been as significant as what Microsoft obtained when it acquired LinkedIn for $26.2 billion in 2016, but it’s highly relevant. Each of these companies builds core technologies that have value. Don’t kid yourself: their data was a big part of the decision to acquire and contributed to the value of each company.

Risk of data

If companies are valued on their information portfolios, what’s the financial impact if your enterprise loses its data? It’s almost incalculable.

Assessing the risk isn’t as straightforward as determining the cost of data or extrapolating on its value. Data risk is more nebulous. Generally, there are three principles for assessing risk:

  1. Improper-use risk
  2. Regulatory risk
  3. Bad-decision risk

Data misuse is the inappropriate use of data. GDPR does a good job of using corporate controls to protect personal data. Four recognized categories of improper data use extend the modern tort-law concept of invasion of privacy: intrusion upon seclusion, public disclosure of private facts, false light, and appropriation. This invasion could be personal, public, or corporate.

Regulatory risk varies by industry. Pharmaceutical companies that produce drugs must comply with Title 21 for reporting adverse events. Banks issuing loans are governed by the Housing and Economic Recovery Act of 2008. Utilities are regulated by the Public Utility Regulatory Policies Act of 1978. Each industry can assess risk based on non-compliance, fines, and brand harm.

A local imaging center mixes up a series of MRI scans for two patients. Patient A now has a clean bill of health. Patient B has a metastatic cervical tumor. What’s the improper-use risk of mixing up the patients’ CDs? We have the cancer mix-up case on one end of the spectrum and not approving a corporate expense report because of a receipt mix-up on the other. Regardless, making bad decisions with data is possible, and we need controls to limit these occurrences.

A good case for data management

Your Uber didn’t show up, and you need a plan B. While waiting for the bus, you sit down on the bus-stop bench and find a lotto ticket with a Post-it note attached that says, “I’m a winner!” Are you rich? Maybe, but more likely you’ll be at work tomorrow, just like last week. How much value did you assign to that ticket? How valuable is that piece of data? Probably not very.

The data in your enterprise is an asset. Manage it as an asset. Soon your organization will be making data-driven decisions, improving operating efficiencies, and expanding economic impact. This might even lead to you getting that dream boat.

Applying cognitive science to champion data science adoption

Business relationship managers today have new techniques to make data science stickier. Mix it up for greater data-enablement adoption.

The organization knows that data is the future. Data is required for making the best decisions. Data-driven organizations are more profitable. As a result, they can give back more socially by leveraging data to develop better insights. Then why is it that in our last meeting, data wasn’t used to make decisions? Because change is tough.

Great CIOs serve as evangelists for technology and innovation by identifying new, untapped opportunities to enable business objectives and leapfrog the competition.  We can’t, however, do that alone.

The role of the business relationship manager (BRM) has exploded over the last twenty years. The BRM has always been critical for successful convergence between IT visionaries and business partners, but it was only recently, in 2013, when Aaron Barnes and Vaughan Merlyn founded Business Relationship Management Institute (BRM Institute), that the concept of the BRM as a champion of our business partners started to take off.

BRMs are positioned to be champions for data science and enablement initiatives as well. We, as CIOs, need to empower BRMs, and we also need them to think differently.

Tilting the classic lens for change

What if we’re leading change all wrong? The book “Make It Stick: The Science of Successful Learning,” by Peter C. Brown, Henry L. Roediger III, and Mark A. McDaniel, highlights stories and techniques based on a decade of collaboration among eleven cognitive psychologists. The authors claim that we’re doing it all wrong. For example, we attempt to solve the problem before learning the techniques to do so successfully. Using the right techniques is one of the concepts the authors suggest makes learning stickier.

Rolling out data-management initiatives is complex and usually involves a cross-functional maze of communications, processes, technologies, and players. Our usual approach is to push information onto our business partners. Why? Well, of course, we know best. What if we changed that approach? This would be uncomfortable, but we are talking about getting other people to change, so maybe we should start with ourselves.

Business relationship managers stimulate, surface, and shape demand. They’re evangelists for IT and building organizational convergence to deliver greater value. There’s one primary method to accomplish this: collaboration.

The BRM should start with a series of data workshops with specific data-management problems to solve. Frame the data-management problems for the leadership teams into four categories:

  1. Data requirements
  2. Data-use cases
  3. Data modeling
  4. Data implementation

These categories offer a solid base from which to develop questions that business partners can validate from a scientific perspective. Business partners are building knowledge so they can ideate around existing problems to discover new opportunities.

Interleaving concepts to create texture and knowledge depth

The BRM is tasked with increasing awareness of data-management practices such as acquisition, cleansing, and modeling, or of data principles like data independence, integrity, and consistency. In either case, the information is often presented in chunks, or concepts that build on one another. As it turns out, this isn’t a great way to communicate a new concept.

Interleaving is a learning technique in which students mix, or interleave, multiple topics while they study in order to improve their learning. Blocked practice, by contrast, is what’s classically taught—study one concept, master it, and then—and only then—move on to the next. Research has shown that retention with the interleaving method lasts months, not days. Studying related skills in parallel improves retention.

The classical building approach is AAABBBCCC. First, we teach AAA. Second, we teach BBB. Third, we teach CCC. The problem is that, by the time we get to BBB, the concept is so boring we’ve already lost people. Interestingly enough, it’s not that the data-management concepts are too complex but rather the opposite—they’re straightforward and make sense.

Interleaving involves using the ABCABCABC approach. First, we cover each of the three ABC concepts. Second, we cover the ABC concepts again using different examples. Third, we cover the same concepts again, only this time using other data and examples.
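
As a toy illustration, the two schedules differ only in how the same nine sessions are ordered; the topic labels below are placeholders.

    # Blocked practice (AAABBBCCC) vs. interleaved practice (ABCABCABC).
    from itertools import chain

    topics = ["A", "B", "C"]
    sessions_per_topic = 3

    blocked = list(chain.from_iterable([t] * sessions_per_topic for t in topics))
    interleaved = topics * sessions_per_topic

    print("".join(blocked))      # AAABBBCCC
    print("".join(interleaved))  # ABCABCABC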

Applying this methodology to data science, the BRM exposes business partners to multiple versions of a problem, which varies the problem and its complexity. Wait, wouldn’t that confuse folks? Yes, you’d think it would. However, as it turns out, we hold their interest longer and, as a result, they internalize the concepts better. We’re no longer pushing concepts. Our business partners are pulling them from us.

Fluency isn’t the same as understanding

Speaking fluently about data-science transformation isn’t the same as executing on it. Be mindful of those players in your organization who have a lot to say about data science. They might be fluent in the language of data, yet, somehow, they still don’t get it. They have no history of executing and delivering data initiatives.

To be creative, we need a better understanding of the problem space in which we’re trying to find a solution. Being creative and being knowledgeable are both necessary. It’s difficult to be creative and present solutions to problems without the knowledge or a foundational understanding of the concepts.

Lean on the business relationship managers within your organization to champion change. Challenge them to teach the concepts of data science differently. By shifting from pushing information onto your business partners to having information pulled, you’ll change the conversation from, “Here’s some data you’ll find useful,” to “Where can I learn more about this data concept?”

CIOs are the evangelists for innovation. BRMs are the champions of change. To make your data science initiative sticky, you need both roles to think differently to enable continuous value delivery. How about starting from the concept that learning about data science can be fun? It’s not as crazy as it sounds. All you need is a little creativity.

Design success into the office of the CDO

Every obstacle, hurdle and misstep raises awareness and decreases the likelihood of a recurring event. Use experience and wisdom to avoid the mistakes of others and find success when designing and implementing an office of the CDO.

Your office of the CDO needs a vision. Success won’t sprout from a rock. Leveraging data as a strategic asset can’t be done without defining a strategic approach to that data. Building a data strategy doesn’t prevent you from being agile in your approach. At the beginning, your vision of the organization’s data strategy might be fuzzy. That’s okay.

As you develop your data vision, provide guidance on how to unify business and IT perspectives, and promote value metrics from a data-driven culture, remember that change is welcome. If you’re headed down a path and not garnering the necessary organizational buy-in, change your course. There’s a better path forward. You just need to discover it—without going through the difficult journey that Milton Hershey did.

4 failures to success

The sweetest city in the world is Hershey, Pennsylvania. However, it didn’t start out sweet for Milton Hershey, an American chocolatier, businessman, and philanthropist. Milton didn’t see much use for school and had only a fourth-grade education. At the age of 14, he started an apprenticeship with a local printer. That was short-lived; he was fired for dropping his straw hat into a machine. He was quickly placed in another apprenticeship with a confectioner named Joseph Royer, based in Lancaster, Pennsylvania.

During his four years with Royer, Hershey learned everything he could about the candy business. Then, at the age of nineteen, he moved to Philadelphia to start his confectionery company. Unfortunately, he couldn’t make it in Philadelphia because of heavy supplier debts, so he moved to Denver, New York, Chicago, and eventually New Orleans, but never made a success of his business in any of them.

Throughout his journey from location to location and with each failure, Hershey was learning. He discovered that fresh milk is vital to good candy. In 1886, at the age of twenty-nine, he was penniless. Fourteen years later, in 1900, he sold the Lancaster Caramel Company for $1 million and turned to chocolate, having founded the Hershey Chocolate Company in what would become Hershey, Pennsylvania.

Hershey became a successful businessman. In 1918, decades before his death, he signed over all his shares of the Hershey Chocolate Company, via a trust, to the Hershey Industrial School, an orphanage he had founded. The shares were valued at $60 million. For reference, Coca-Cola sold the following year for $25 million.

It’s fascinating what an individual can do with vision and passion. Milton was famous for saying, “The caramel business is a fad.” At the time he sold the Lancaster Caramel Company, profits were at all-time highs. Yet he sold the business and went into chocolate. Understand what makes your business successful.

4 pillars of success

To discover your company’s data-strategy vision, build around the four core aspects of the office of the CDO. These require careful set-up to enable your successful data office initiative:

  1. Governance
  2. Standards
  3. Architecture
  4. Talent and culture

Governance defines the process, establishes forums, and promotes strategic communication. Standards elaborate operating procedures, specify technical standards, and identify design precepts. Architecture creates the guardrails for reference architecture, identifies common lexicons, and develops an edge for the data platform. Talent and culture span education and training, skills, roles, and responsibilities of agile teams—which are the human aspect of change.

These four areas are the pillars of a successful office of the CDO.

Adjusting how we look at opportunities—including data—doesn’t happen overnight. We’re transforming an organization. To do this, we must lean on these four core pillars of a successful office of the CDO, which we’ll now discuss in detail.

Data governance: discovering better insights

Governance offers accountability for data, business agility, better compliance, IT agility, and stronger insights. Setting up seamless data governance facilitates stakeholder interactions and makes the decision-making process easy. Without this framework, decisions will spin, and participants will become frustrated that engagement isn’t consistent or uniform across business areas or divisions.

There are six areas of interest when we’re talking about data governance and the role of the office of the CDO:

  1. Enterprise data governance
  2. Data-quality management
  3. Master-data management
  4. Metadata management
  5. Data-protection management
  6. Data strategy and diagnostics

Enterprise data governance is the management of business data. This includes the overall management of the availability, usability, integrity, and security of enterprise data. Data quality management is a set of practices that aims at generating and maintaining high-quality data used for decision-making. The quality measures ensure that, throughout its lifecycle—from acquisition through distribution—the data is fit for use. Master-data management is a method for an enterprise to link critical data to a common point of reference. This discipline converges IT and business partners to ensure the uniformity, accuracy, stewardship, semantic consistency, and accountability of master-data assets. Metadata management is the administration of data that describes other data. Metadata involves any information that can be integrated, accessed, shared, linked, analyzed, and maintained. This organizational agreement describes enterprise information assets. Data-protection management enables data security across the enterprise—including automation, orchestration, and document management—to control the many data-protection activities required to run an enterprise. Data strategy and diagnostics is a guide for optimizing data, removing redundant data, and simplifying the lifecycle management of data.

There are many more areas we could add to this mix, including data modeling and design, data integration and interoperability, document and content management, data storage and operations… the list goes on. However, we want to start with the basics.

Together, these six elements build the foundation of a robust data-governance program.

Data standards: sharing data we trust

Data standards communicate enterprise data-sharing frameworks. The objective is to improve trust. We can measure the trust of data by measuring its credibility, reliability, intimacy, and self-orientation.

Addressing trust gets us to the primary value of establishing a standard for data sharing. Who should participate? How transparent is the data? Who can share data? How do we remove misaligned interests?

Producers and consumers of enterprise data must meet baseline standards. Additionally, trust frameworks must be tailored to producer and consumer needs. Combined, this approach raises the level of trust and decreases the risk associated with data production and consumption.
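
Those four components echo the classic “trust equation,” which one could adapt to score a data set’s trustworthiness. The sketch below is such an adaptation, with assumed 1-10 scales; it isn’t a standard from any data-governance body.

    # Adapted trust equation: (credibility + reliability + intimacy)
    # divided by self-orientation. Higher is better; self-orientation
    # (bias toward the producer's own interests) drags the score down.
    def data_trust_score(credibility: float, reliability: float,
                         intimacy: float, self_orientation: float) -> float:
        return (credibility + reliability + intimacy) / self_orientation

    print(data_trust_score(8, 7, 6, 2))  # 10.5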

Standards for data sharing describe the ways data can be shared. They also highlight how data can be restricted, or how access to it can be increased by removing restrictions, in the following sequence:

  1. No awareness of the data set
  2. Awareness of the data set
  3. Awareness of data scope and data dictionary
  4. Query highly aggregated, obfuscated, or perturbed data
  5. Query lightly aggregated, obfuscated, or perturbed data
  6. Access aggregated, obfuscated, or perturbed data
  7. Access to data
  8. Ability to share data

No awareness of the data set means the existence of the data set isn’t known. Awareness of the data set makes knowledge of the data set known. Awareness of data scope and data dictionary increases knowledge to include the scope and parameters of a data set—for example, knowledge or access to the data dictionary. Query highly aggregated, obfuscated, or perturbed data enables queries on data sets, but these are highly restricted—in this case, access to a division’s or a department’s data might be removed. Query lightly aggregated, obfuscated, or perturbed data slightly widens the endpoint of data access. For example, in this case, previous access may have been to a state-based population, and this opens access to multi-state searching. Access aggregated, obfuscated, or perturbed data enables the ability to run defined logical operations and pull de-identified data. This level of access provides access to an aggregated data set; however, access to the raw data set isn’t permitted. Access to data unlocks the technical restrictions of operations that may be performed with the data, although specific access rights are usually restricted to certain individuals. The ability to share data may allow sharing to one consumer but may restrict that consumer from sharing the data with another consumer.
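
Because the eight levels form a strict progression, they map naturally onto an ordered enum, so access checks become simple comparisons. The sketch below is illustrative; the names are abbreviated from the list above.

    # The eight data-sharing levels as an ordered enum.
    from enum import IntEnum

    class SharingLevel(IntEnum):
        NO_AWARENESS = 1
        AWARENESS = 2
        SCOPE_AND_DICTIONARY = 3
        QUERY_HIGHLY_AGGREGATED = 4
        QUERY_LIGHTLY_AGGREGATED = 5
        ACCESS_AGGREGATED = 6
        ACCESS_TO_DATA = 7
        SHARE_DATA = 8

    def can_query(level: SharingLevel) -> bool:
        # Queries are only possible from level 4 upward.
        return level >= SharingLevel.QUERY_HIGHLY_AGGREGATED

    print(can_query(SharingLevel.SCOPE_AND_DICTIONARY))  # False
    print(can_query(SharingLevel.ACCESS_TO_DATA))        # True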

Data trust is validated by the enterprise standards in place to share data.

Data architecture: integrating data investments with business strategy

Data architecture guides the integration of enterprise data assets. Architecture is focused on the abstraction of the system, not the system itself. As complexity within your enterprise increases, the need for data architecture, and the value it provides, only grows.

Data architecture has a broad reach and includes many components that build a single version of the corporate “truth.” These ten components of enterprise data architecture ensure that enterprise data assets are integrated:

  1. Business entities
  2. Business relationships
  3. Data attributes
  4. Business definitions
  5. Taxonomies
  6. Conceptual and logical views
  7. Business glossary
  8. Entity lifecycle and states
  9. Reference-data values
  10. Data-quality rules

Business entities refer to the various components of the business. Business relationships identify how those entities interact and ultimately share data. Data attributes tag and identify data elements using known classifications. Business definitions clarify the intent behind the data sets. Taxonomies establish schemes and classification systems for groups or attributes with similarities. Conceptual and logical views identify the high-level relationships between entities and how the data is physically represented in the database. The business glossary defines terms across domains and serves as the authoritative source for the data dictionary. The entity lifecycle and states specify where an entity is in its lifecycle, from acquisition to destruction. Reference-data values are standards that can be used by other data fields. Data-quality rules are the requirements that the business sets on its data.
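
Two of these components, reference-data values and data-quality rules, lend themselves to a direct sketch. The field names and rules below are hypothetical.

    # Reference-data values plus data-quality rules applied to a record.
    REFERENCE_COUNTRY_CODES = {"US", "DE", "JP"}  # reference-data values

    RULES = [  # data-quality rules the business sets on its data
        ("country is a known code", lambda r: r.get("country") in REFERENCE_COUNTRY_CODES),
        ("amount is non-negative", lambda r: r.get("amount", 0) >= 0),
        ("id is present", lambda r: bool(r.get("id"))),
    ]

    def validate(record: dict) -> list:
        # Return the names of the rules this record violates.
        return [name for name, check in RULES if not check(record)]

    print(validate({"id": "A-1", "country": "US", "amount": 10.0}))  # []
    print(validate({"id": "", "country": "XX", "amount": -5}))       # all three rules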

It’s also useful to conduct an information value-chain analysis. This process identifies matrix relationships among data, processes, organizations, roles, locations, objectives, applications, projects, and data platforms.

Talent and culture: moving people, not data

When we think of data, we often think of the bits and bytes. We should be thinking about people. Talent and culture are the most difficult aspects to get right when developing a successful office of the CDO.

There are seven main areas involved in driving culture and getting the right people into the right roles:

  1. Talent acquisition
  2. Performance management
  3. Competency management
  4. Learning and development
  5. Leadership development
  6. Career management
  7. Succession management

Talent acquisition is the process of finding and onboarding skilled data talent. Performance management ensures that individual activities and outputs meet organizational goals. Competency management is the process of developing the skill sets of individuals. Learning and development attempt to enhance individual performance by tuning and honing skills and knowledge. Leadership development helps to expand an individual’s capability to grow within the organization. Career management is the deliberate planning and coaching of an individual’s activities, engagements, and jobs over a lifetime. Succession management is the systematic process of developing, identifying, and grooming high-potential individuals for more aspirational roles.

By developing a talent management plan, we improve our odds of attracting, developing, motivating, and retaining high-performing employees.

Designing a world-class office of the CDO

Designing a world-class office of the CDO begins with a vision and a data strategy. We establish a strong organizational base for scalability and growth by using the pillars of governance, standards, architecture, and talent and culture.

Take time to understand the intrinsic value of data. Is your data accurate and complete? Determine the cost value of data. If you lost your data, what would it cost to replace it fully? Evaluate the business value of the data. How fit-for-purpose is this data to make data-driven decisions? Measure the performance value of your data. What parts of the data fuel key business drivers?

Designing the office of the CDO is an exciting process. Let’s hope your venture into the data business is less bumpy than Milton Hershey’s entrance into the chocolate business.

Assembling the right resources for the office of the chief data officer

Creating an office of the chief data officer is the first step in developing a data-driven culture and maximum business value.

We’ve come a long way from the first website, which was published on August 6, 1991. The Internet now has over 1.94 billion websites. Over seven billion search queries are conducted worldwide each day, and over 15% of those are entered into a search box for the first time. Meanwhile, 51.8% of internet traffic comes solely from machine bots; only the remaining 48.2% is human. Data is transforming how we do business and, more importantly, how we make business decisions.

From this ongoing surge of data has emerged the chief data officer role—and, more recently, to support that role, the office of the chief data officer.

Establishing the right structure can have a positive impact on organizational transformations to drive a data-driven culture. Let’s address four questions that clarify the value of the office of the CDO:

  • What’s its purpose?
  • What are the primary office functions?
  • What resources and skills are required?
  • What are the major duties of the office?

The purpose

The CDO is an executive responsible for enabling and championing value creation for the organization through the use of data assets internally and externally. This includes governance, planning, definition, capture, usage of, and access to data and information. Generally, the CDO has accountability in three areas: data management, analytics and technology.

  • Data management covers the care, protection, and governance of data, from establishing a strategy through designing and implementing governance policies.
  • Analytics includes any capabilities required to analyze data to transform it into useful insights.
  • Technology covers the data architecture, infrastructure, and services for the ingestion, movement, monitoring, and storage of data.

The CDO is accountable for capturing high-quality, timely data and making data assets available to stakeholders. To fulfill this mission, we need to understand the purpose of this role. The role of the office of the CDO is simple: create value from data. To frame the context of the role, we’ll dive into its functions.

The functions

There are 100 ways to build a good data office, but there are only a handful of ways to build a great team. The office of the CDO needs to envision, prototype, evangelize, implement, and support existing and new data platforms. There are two broad paths that organizations can take here.

The first path is to have the office of the CDO run IT data operations. This means the CDO assumes responsibility for all database administrators and any resources that support the creation or maintenance of data assets. This could be in the form of custom systems, SaaS solutions, or off-the-shelf solutions. The benefit of this approach is that the data increases in value while redundancy and cost decrease. The flip side is that day-to-day operational activities can crowd out the focus on developing strategic data assets.
 
The second path is to have the office of the CDO run the IT “asset” operation. Here, we’re specifically talking about managing existing data assets and leveraging new ones. The benefit of this approach is it facilitates greater collaboration and the ability to share data assets. The disadvantage is the lack of raw-data ownership, budget limitations, and the requirement of additional, cross-functional buy-in before significant transformation can occur. Sometimes this buy-in doesn’t occur, which stifles progressive ideas that push the boundary of normal.

The resources

The resource makeup of the office of the CDO varies greatly based on headcount and annual revenue, so this approach can take a number of forms. However, some common themes emerge. The variability means one company might need one of a particular resource while another might need 100. Use your judgment to scale the primary functions based on your business demand.

Next, we’ll cover the following primary roles and the skills required:

  • Chief data officer
  • Data scientist
  • Data modeler
  • Data architect
  • Data analyst
  • Front-end designer/developer
  • Database administrator
  • Portfolio manager
  • Project manager
  • Business relationship manager

Chief data officers provide leadership on maximizing the value of data assets enterprise-wide. This role is responsible for leading the transformational change to position the organization so it’s data-driven. Driving the use of the right data at the right time, creating a data-driven culture, and leading analytics are vital. However, the most important aspect of the role is establishing and fostering organizational buy-in for the office of the CDO function as well as the future role data will have in the organization. Few leaders will argue that data is transforming business decisions and that business models are changing; the challenge is that those same leaders might not believe that your office of the CDO is the right team to do that. This is why establishing collaborations and building trust outside of IT is essential.

Data scientists help to identify opportunities to improve organizational outcomes by utilizing data, developing predictive models, and sharing stories that present new insights. There are seven major areas of significance to data scientists: data collection (web scraping, HTML, CSS), data ingestion (SQL APIs, JSON, XML), data cleansing (multiple data types), data visualizations (D3, Tableau, Spotfire), basic analysis (R, Python), data mining (variance analysis, measuring bias, feature normalization, feature selection, feature extraction, clustering analysis, association analysis) and predictive modeling (data modeler+, graph analysis, bootstrap or bagging modeling, ensemble models, Bayesian analysis, neural networks, deep learning). An effective data scientist can apply sample and survey methods, determine statistical significance, conduct outlier analysis and make data-driven decisions to identify new data-science opportunities previously undiscovered.
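
As a small, self-contained example of the outlier analysis mentioned above, here’s a z-score check using only the standard library; the threshold and readings are illustrative.

    # Simple z-score outlier test.
    from statistics import mean, stdev

    def zscore_outliers(values, threshold=3.0):
        # Return values more than `threshold` standard deviations from the mean.
        mu, sigma = mean(values), stdev(values)
        return [v for v in values if abs(v - mu) / sigma > threshold]

    readings = [9.8, 10.1, 10.0, 9.9, 10.2, 42.0]  # one suspicious reading
    print(zscore_outliers(readings, threshold=2.0))  # [42.0]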

Data modelers use a variety of data types to build and design predictive models. To understand sampling methods and measure statistical significance, data modelers need much of the same experience as data scientists. For example, data visualization, basic analysis, data mining, and predictive modeling are key skills for this role.

Data architects develop linkages between systems. They need experience with multiple architectures and with implementing complex database policies and standards. This background allows them to develop complete solutions to validate, clean up, and map data. Ensuring end-to-end data quality requires integrating data from unrelated sources. Internal knowledge of the organization’s domains is a crucial element.

Data analysts facilitate data collection and aid in data cleansing with basic analysis skills. Often this role is the initial drafter of organizational policies, standards, and procedures before more experienced resources assume ownership. These resources are likely familiar with R, Excel, and SQL at a high level but quickly hit limits when applying those skills to SQL APIs, JSON, or XML.

Front-end designers and developers mainly focus on client-side development using technologies such as HTML, CSS, JavaScript, jQuery and RESTful service APIs. This code is executed inside the user’s browser and can extend into the UI/UX experience for users.

Database administrators specialize in software to store and organize data. Usually, this role includes capacity planning, installations, configuration, database design, data migration, performance monitoring of data, security, backup and recovery, and basic troubleshooting. This role is hands-on regarding data and, as a result, needs to be carefully managed with segregation of duties.

Portfolio managers focus on value realization from products, services, interactions, assets, and capabilities. This includes making investment decisions to balance objectives, asset allocation, and risk for optimal performance. This role aligns strategy with the bottom line to optimize delivery orchestration across the data portfolio of investments, projects, programs or activities.

Project managers lead data-related project initiatives and provide contract support to align with corporate policies. These resources work with multidisciplinary teams like legal, cloud, finance, operations and various business functions to lead projects and get them over the finish line.

Business relationship managers stimulate, surface, and shape business demand to define the full business value envisioned. This involves building credibility for the office of the CDO, establishing partnerships outside of IT to increase awareness of existing capabilities in house, and introducing new data capabilities that have force-multiplier effects for business partners.

There are likely dozens of resources that could be pulled into a CDO team to align with organizational needs. The foremost that come to mind are subject-matter data experts who have specific, deep domain knowledge of how your business operates.

Now that you know the critical roles to establish the office of the CDO, spend your time finding the best resources to staff your office. These resources are in high demand, so you must assume it will take longer than planned to recruit the team.

The duties

The primary responsibilities of the office of the CDO used to be focused on data governance, data quality, and compliance drivers. Today, the focus of this office is to enable a data-driven culture and maximum business value.

To exploit data to achieve a competitive advantage and establish the office as a strategic advisor, the responsibilities need to be communicated across the organization.

Leading change and championing a data-driven culture can be enabled with the following defined responsibilities:

  • Envision, design, and communicate a collaborative, enterprise-wide data strategy.
  • Establish a governance structure for managing data assets using a repeatable process and standardized frameworks.
  • Define, implement, and manage organizational data principles, data policies, data standards, and data guidelines.
  • Decrease the cost of collecting, managing, and sharing data while increasing the value.
  • Enable data-as-a-service for enterprise-wide adoption using a data-service strategy.
  • Develop data-quality measures and practices to improve organizational trust in data.
  • Manage the data portfolio to coordinate the investment prioritization of enterprise-wide data initiatives.
  • Identify opportunities for the organization to more fully leverage data for a strategic advantage.
  • Champion organizational change management for a data-driven culture.
  • Advance how enterprise-wide data assets are managed to provide deeper insights.
  • Establish policies and programs for data stewardship and custodianship for stakeholder engagement.

The office of the CDO gets the business of data onto the minds of your organization’s executives. It’s the first major step toward developing a data-driven culture. Data enablement is a change that requires shifting organizational strategies, processes, procedures, technologies, and culture. Use these four tips when introducing organization-wide change, transformational change, personnel change, unplanned change, or remedial change: Make it clear. Make it real. Make it happen. Make it stick.

Why RPA is a CIO priority

Cognitive automation technologies are changing our business. RPA is the first step in that evolution. Be part of the business-value realization with RPA.

Robotic process automation (RPA) is the game-changer your organization doesn’t know about. Only a few leaders in your organization fully appreciate the potential of RPA. The hype about RPA reminds me of the hype about the Internet in the mid-1990s: we knew it was going to take off, but we didn’t know where or how the idea of knowledge sharing would be adopted.

RPA applies AI and machine-learning capabilities to perform repeatable tasks that previously required humans.

As with blockchain, a large part of this technology’s slow adoption comes down to education. Once you’ve internalized the power of RPA, you’ll quickly apply RPA-type concepts throughout your organization.

RPA isn’t a physical robot. It won’t deliver your FedEx package with a smile. It’s also not going to deliver your Amazon package in an air taxi on Sunday. The beauty of RPA is that it can automate rules-based activities and relieve your team of the burden of performing manual processes. Processes that are manual, repetitive, and error-prone are where RPA excels.

RPA does three things well. It reduces cost, improves quality, and improves operational controls. It doesn’t matter whether you’re using Blue Prism, WorkFusion, Kryon Systems, UiPath, Automation Anywhere, or NICE. Each of these tools can help you realize better business outcomes.

Let me guess. You need to improve business outcomes quantifiably. You’re searching for that 10x game-changer for next year. You already promised business leaders some magic, and you have no idea where that magic powder will come from. Not to fear.

RPA has some fascinating applications for the next-generation CIO.

RPA for advanced analytics

Building a data lake? RPA can help. Starting a data-enablement initiative? RPA can help. RPA streamlines and automates time-consuming, high-volume, and repetitive activities. Big data requires data aggregation, curation, data cleansing, normalization, data wrangling, and tagging of metadata.

RPA offers amazing benefits to enable advanced analytics:

  • Removing the need to rekey data sets manually
  • Migrating data
  • Validating data
  • Producing accurate reports from your data
  • Providing the foundation for an action framework: “good to know” (within tolerances), “interesting” (better than expected), or “need to act” (action required)
  • Aggregating social media statistics
  • Applying process-mining technology to visualize the actual process
  • Ingesting data from acquired sources
  • Linking systems to systems (APIs)
  • Enforcing data rules for accuracy, consistency, validity, timeliness, and accessibility
  • Deduplicating data
  • Scraping data from websites
  • Performing vendor master-file updates
  • Extracting data
  • Applying advanced-processing algorithms
  • Formatting

RPA can handle even the most complex environments. If you’re able to record and play back the activities, RPA can be a welcome operational improvement.
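
To make this concrete, here’s a minimal sketch of the kind of rules-based cleanup pass an RPA bot might run over incoming records, combining the deduplication, validation, and action-framework items above. Everything here (field names, the 5 percent tolerance) is an illustrative assumption, not a specific vendor’s API:

    # Deduplicate records, then classify a metric into the action framework.
    # Field names, values, and the tolerance are illustrative assumptions.
    def dedupe(records, key="id"):
        """Drop records whose key field has already been seen."""
        seen, unique = set(), []
        for record in records:
            if record[key] not in seen:
                seen.add(record[key])
                unique.append(record)
        return unique

    def classify(actual, expected, tolerance=0.05):
        """Map variance to 'good to know', 'interesting', or 'need to act'."""
        variance = (actual - expected) / expected
        if abs(variance) <= tolerance:
            return "good to know"    # within tolerances
        if variance > 0:
            return "interesting"     # better than expected
        return "need to act"         # action required

    records = [{"id": 1, "value": 104.0}, {"id": 1, "value": 104.0}, {"id": 2, "value": 82.0}]
    for record in dedupe(records):
        print(record["id"], classify(record["value"], expected=100.0))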

RPA for business-process waste removal

Data integration is the initiative that never gets finished. Somewhere along that last mile to fully automating the integration of those systems, there’s either no budget available or no interest. RPA can pick up and connect that last mile, removing waste in the process. RPA will lead the next wave of increased productivity, and it can help tackle the eight major types of transaction-processing waste:

  • Defects; e.g., highlighting missed deadlines or overspend
  • Overproduction; e.g., extending reporting based on the severity
  • Waiting; e.g., waiting for approvals
  • Non-utilized talent; e.g., issuing and tagging training to employees when necessary based on events
  • Transportation; e.g., facilitating handoff between functions—like when an approval system isn’t talking to the PO and invoicing system
  • Inventory; e.g., processing data for entry into a larger system
  • Motion; e.g., removing repetitive keystrokes when switching between applications
  • Extra processing; e.g., formatting reports, adding details

RPA is disrupting digital transformation and operational excellence. RPA’s fast and inexpensive approach to automation saves labor, extends capacity, increases speed, and improves accuracy.

RPA for project, program, and portfolio management

Haven’t heard of RPA and project management in the same sentence? News flash: RPA isn’t going to replace the need for human project managers.

Managing project budgets, monitoring risks, and balancing resource capacity are all functions central to the role of program and project managers. RPA can raise the standard of what’s deemed “good” in IT portfolio management. There are multiple ways in which automation minimizes risks and can streamline portfolio-management activities. Here are the big hitters:

  • Create multi-thread, digital approvals for statements of work
  • Generate contracts using the company’s “gold standard”
  • Automate the creation and distribution of portfolio reports
  • Generate documents
  • Push communication of project variances
  • Balance resources; e.g., reporting on utilization and reallocating resources
  • Reduce the dependence on spreadsheets to manage information
  • Answer the question, “Are we on track?”
  • Collect and disseminate project-specific information
  • Screen, filter, and track candidates for the recruitment process
  • Create financial-scenario modeling based on thresholds
  • Automate data ingestion for dashboards; e.g., PowerBI or Clarity PPM
  • Provide sensors to continuously identify progress wins and capture value delivered
  • Forecast based on historical data
  • Assure PMO policy adherence; e.g., process documentation and project audits
  • Automate project and program SDLC process-step progression

RPA can play an important role in your IT portfolio ecosystem. The short duration (1-2 months) and low investment cost ($50-100k) make an RPA pilot an easy win for your organization. RPA makes quantifying improvements easy, and this metric-driven approach simplifies business-partner discussions when outcomes are immediate and visible.

RPA for IT asset management

Do you know when your licenses are due for renewal? RPA and AI are going to transform IT asset management (ITAM). The nature of IT asset management is repetitive and standardized, which puts it squarely in RPA’s sweet spot. Several applications exist for RPA within IT asset management. Here are the most impactful:

  • Automate software audits
  • Compare licenses purchased to licenses contracted
  • Manage source-code control
  • Oversee vendor and resource on-boarding and off-boarding; e.g., delete domain users or modify distribution lists
  • Provide reporting and analysis
  • Manage incident resolution; e.g., server restarts, password resets, etc.
  • Self-heal; e.g., system health checks, automating backups, and event monitoring
  • Automate fulfillment processes; e.g., IT asset requests

RPA won’t fix your broken workflows, but it can help automate them to ensure human errors are removed and process-cycle time is reduced. You’ll still need to spend time fixing the gaps in the process, but intelligent automation can integrate data extremely well from disparate data systems.
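
As one example, the “compare licenses purchased to licenses contracted” audit above reduces to a simple reconciliation once both counts are extracted. The vendors, counts, and data sources below are hypothetical:

    # Reconcile license entitlements against discovered deployments.
    # Vendor names and counts are hypothetical.
    purchased = {"VendorA": 250, "VendorB": 120}   # from procurement records
    deployed = {"VendorA": 274, "VendorB": 96}     # from a discovery scan

    for vendor, entitled in purchased.items():
        in_use = deployed.get(vendor, 0)
        if in_use > entitled:
            print(f"{vendor}: over-deployed by {in_use - entitled} licenses (audit risk)")
        else:
            print(f"{vendor}: {entitled - in_use} unused licenses (potential savings)")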

RPA for financial management (procurement-to-payment)

Financial processes are riddled with searching, transferring, sweeping, copying, pasting, sorting, and filtering. Financial-process automation will improve relationships with your suppliers and internal partners, as well as improve efficiencies within the finance department. RPA can be used to validate contract terms against invoices and to verify that standard data, such as address and billing information, hasn’t changed in recent invoices.

While the most obvious benefits are around financial risk and controls, cutting down on manual processes often delivers even broader organizational benefits. Freeing up overworked and overcapacity financial leaders can improve morale. By shifting resources from mundane, tactical activities to strategic, high-value-added activities like analysis and predictive modeling, RPA can become a force multiplier for financial-management teams. Let’s look at a few of the many use cases for RPA in financial management:

  • Supporting the quarterly close
  • Calculating and anticipating accruals based on real, invoiced (and what’s not invoiced) data
  • Moving data from Excel to readable reports
  • Uploading transaction data from various financial systems
  • Generating standard journal entries
  • Identifying atypical and exception spending
  • Calculating and processing annual vendor rebates
  • Communicating to vendors missing or late invoices
  • Tracking vendor adherence to billing policies and best practices
  • Autoloading quarterly forecasts into the financial system of record
  • Reconciling forecast to actuals for departmental category spend
  • Monitoring CapEx and OpEx forecast to actuals variance

Operational financial and accounting processes are great examples of where RPA can shine. These processes often are repetitive and typically result in human error of some kind. Financial review prep, interdepartmental reconciliation, and financial planning and analysis all present opportunities for automation.

RPA for security

Protecting an organization from cyberattacks is a 24/7 job. The problem is that humans need sleep. RPA can humanize the role of the CISO and, almost more importantly, the role of cybersecurity managers. By leveraging RPA, we allow the function of the CISO to “get to human” by being visible in projects, developing organizational relationships, and inspiring new leadership. This isn’t possible when resources are consumed by cyber-threat prevention and mitigation at all hours of the night. RPA can strengthen and simplify your security operations in multiple ways:

  • Deploy security orchestration, automation, and response to improve security management
  • Shut down unauthorized privileged access
  • Encrypt robotic security and password configurations so they can’t be accessed by company personnel
  • Identify and prevent zero-day attacks
  • Filter cyber antispam (non-threat-driven spam)
  • Identify cyber threats, create bots, and cleanse threats
  • Filter out false-positive threats
  • Issue consistent credentials enterprise-wide
  • Automate password rotation
  • Review 100% of access violations in near real-time
  • Improve security and auditing of data
  • Implement intelligent automation using artificial intelligence; e.g., creating tickets in ServiceNow based on threat analysis and immediately shutting down that risk
  • Identify atypical user and machine actions based on behavioral analysis
  • Lower the cost to detect and respond to breaches
  • Rapidly detect, analyze, and defend against cyberattacks
  • Identify behaviors that are unlikely to represent human actions

AI-enabled cybersecurity is increasingly necessary. The volumes of end-point data are exploding, and our budgets are not. Organizations need to turn to RPA as threats overwhelm security analysts. Detection, prediction, and response can all benefit from applying RPA to transform your organization’s cyber defense.

Intelligent automation shares the workload

Becoming more strategic starts when you stop spending all your time on tactical activities. That’s a difficult process when these tactical activities are required to keep our businesses running day-to-day.

Step beyond simply establishing an RPA center of excellence (COE). First, find the early adopters (the believers). These are the folks who push normal off the table and continually challenge what worked okay yesterday. Second, be collaborative. Seek out cross-functional leaders whom you can educate to be champions in the pilot. Third, identify a problem. Focus on a specific business challenge and articulate a business case where RPA is fit-for-purpose. Quickly move from a proof-of-concept (POC) that takes 2-3 weeks to a proof-of-value (POV) that takes 6 weeks.

Blockchain, cognitive analytics, augmented reality, and robotics all present huge and largely untapped opportunities for organizations. Your business model is changing. Be part of the change. Ideate together. Adopt quickly. Embrace the total value of ownership and apply RPA to accelerate business value.

How analytics help justify the role of the CIO

CIOs require unified intelligence for data-driven insights leading to actionable organizational decision making.

Executive attitudes, application portfolios, and suppliers each affect the perception of the value that the CIO brings to the table.

Initially, the CIO was a functional unit head, making promises of rapid delivery. Next, the role progressed to the CIO as a strategic partner that enabled business partner convergence. Subsequently, it advanced into a business visionary role that drove strategy. Today, the role of the CIO is centered around business transformation and building a data-driven culture.

The CIO is a change agent. All the IT functions that support business operations must work smoothly, or not much transformational change can be realized.

To change much, you need information and intelligence. Here’s how we CIOs get that intel.

A long way from there

The role has come a long way from the VP of IT of the mid-1980s, which focused on IT infrastructure and getting technology deployed into the business. Then came the international push of the late 1990s, when our global business knowledge of expansion strategies was tested. We started decentralizing and introducing business-process standardization, and technology was aligned with business models.

By the 2000s, CIOs were technology integrators. We had global system integrations, and it was vital that technology integrated with business solutions horizontally across our organizations. As we moved into the 2010s, we shifted to designing the architecture of business. Relationship orchestration and solution integration dominated most meeting discussions. Our focus was getting technology to provide on-demand business services. As we open the door to the 2020s, the role of the CIO is all of those responsibilities and more. Today, we’re conductors of value and leaders of transformational change.

Before you, as a CIO, can change much, you need intel—insights, analytics, and predictions of what will be and a reminder of what was. How do we obtain those insights to change the course of organizational analytics?

A simple and powerful method to gain actionable intel is an organizational analytics dashboard. Why? It provides self-service access to real-time information on the health and well-being of your organization.

Separate the data from the visualization of that data. Both may come from the same system or from different ones; it’s less important that they share a system and more important that data presented as information is clean and actionable. In my experience, data entry and data visualization often require different systems.

Here are the six essential components of an actionable CIO dashboard:

1. Business capabilities

Business capabilities articulate how an organization plans to integrate and standardize to enable the organizational business model.

By aligning all projects, initiatives, and work to a business capability, it becomes clear how that element further enables the organizational goals and, more specifically, the business strategy.

There are three general views that help illustrate business capabilities:

First, Gartner’s Run, Grow, and Transform model quickly helps identify whether you’re maintaining core systems (just keeping the lights on) or truly transforming the business model of your organization. This model is useful for building the classic picture of technical debt increasing over time while spend on innovation decreases. It’s typically visualized as left-to-right chevrons with dollars of spend listed annually by category.

Second, the Center for Information Systems Research (CISR) out of MIT looks at four different investment classes: strategic, informational, transactional, and infrastructure. This is portrayed in the visual form of a triangle. Strategic focuses on innovation, major change, facilitation, high-value adds, and deep customer interaction. Informational concentrates on increased control, better information, better integration, improved quality, and overall faster cycle times. Transactional is simply cost out and increased throughput. Infrastructure is foundational and covers business integration, business flexibility, reduced marginal cost of IT business units, reduced business cost, and IT standardization.

Third, we have the classes of business capabilities and enabling technical capabilities. Business capabilities highlight major functions that enable the organizational mission and advance initiatives; e.g., in healthcare, this could be claims processing or revenue cycle. Technical capabilities, such as analytics or cloud computing, enable those business capabilities.

These three views provide CIOs actionable intel on organizational initiatives and their linkage to the business strategy.
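
As a sketch of how the first view might be computed, the Run, Grow, and Transform rollup is a simple aggregation once every initiative carries a category tag. The projects, tags, and dollar figures below are illustrative assumptions:

    # Roll annual project spend up into Run/Grow/Transform categories.
    # Project names, tags, and dollars are illustrative.
    portfolio = [
        ("Email platform upgrade", "Run", 400_000),
        ("CRM expansion", "Grow", 650_000),
        ("ML-driven diagnostics", "Transform", 900_000),
        ("Server refresh", "Run", 300_000),
    ]

    totals = {}
    for _, category, spend in portfolio:
        totals[category] = totals.get(category, 0) + spend

    grand_total = sum(totals.values())
    for category in ("Run", "Grow", "Transform"):
        print(f"{category}: ${totals[category]:,} ({totals[category] / grand_total:.0%})")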

Questions this component of the dashboard can answer:

  1. Is technical debt impacting our ability to innovate?
  2. Do we have the right balance between foundational and transformational initiatives?
  3. Which are the top business capabilities we’re accelerating this year?

2. Balanced scorecard

This is a concept we’ve all heard about, but no one has seen. Well, maybe we’ve seen it, but it’s more like a sighting of a Dodo bird—more myth than reality.

Regardless, the balanced scorecard concept is sound. The idea is that a balanced scorecard connects the business strategy to key elements such as mission (our purpose), vision (our aspirations), core values (our beliefs), strategic focus (our goals), objectives (our activities), measures (our KPIs or strategic performance), targets (our desired performance), and initiatives (our projects).

This connection between strategic objectives, high-level strategy, measures, and strategic initiatives provides insights into how effectively we’re executing on the organizational mission.

Key areas to cover in a scorecard that focuses on portfolio delivery might include the following:

  • Forecast vs. budget variance
  • Schedule confidence for active vs. committed projects
  • Data validation issues per active project
  • Statements of work in progress for active projects
  • Average project size per project manager or business relationship manager (BRM)
  • Average spend per project (for the current year)

Together, these elements provide you, as CIO, with information to better guide the organization.

Questions this view can answer:

  1. How accurate is our ability to forecast spend?
  2. Are projects rolled up into programs for maximum cost and management efficiency?
  3. How confident are we that we’ll meet our business commitments (risk in delivery)?

3. Executive portfolio summary

The executive portfolio summary is the heart of portfolio management—how we manage strategic organizational investments.

Generally, there are three components to portfolio management: application portfolio, infrastructure portfolio, and project portfolio. For simplicity, we’ll wrap these into a single and unified view in our dashboard.

There’s an endless amount of information that can be included in this view. However, these are some of the most important elements:

  • Projects – active workstreams
  • Project managers – who’s leading what?
  • Project health – cost, schedule, scope, benefits, and quality
  • Value – are we improving outcomes quantifiably?

Usually, this component of the dashboard includes a lower, more detailed view with specific project information including business case, sponsor, status, budget, weekly status comments, etc.

Questions this view can answer:

  1. Are we spending wisely?
  2. Are projects moving through the pipeline smoothly, or do we have bottlenecks?
  3. Are we on target to deliver initiatives for the current year?

This dashboard component is often the most referenced view by leaders and team members alike.

4. Resource management and capability planning

The objective of resource management is to improve the on-time execution of initiatives, view resource availability, and analyze utilization.

The capability planning of resources isn’t only about accurately forecasting true resource demand and accounting for future resource needs. Oh sure, that’s part of it, but we’re talking about building the credibility of IT: Say what you’ll do, and do what you say. It’s expectation management. Taking the time to develop a resource-management and capability program that can be visualized as a dashboard for insights and action is one of the more significant business transformations a CIO can make.

There’s a lot to present, but let’s highlight the most useful areas:

  • Allocations and stage
  • Allocations and status
  • Named resource utilization
  • Allocations by project
  • Allocations by resource role
  • Allocations by resource team
  • View by business relationship manager (BRM)
  • View by functional manager
  • View by project manager
  • Dimensional pivots by committed work, year, project name, resource, cost, or utilization percentage.

This information empowers your organizational leaders and enables intelligent decisions. It also helps to shift the organization into a data-driven mindset and ensures the information floating up is accurate and grounded in hard data.
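
A minimal sketch of the named-resource utilization view, assuming allocations can be exported as (resource, project, FTE share) rows; the names and numbers are hypothetical:

    # Sum FTE allocations per resource and flag anyone over 100 percent.
    # Resource names, projects, and shares are hypothetical.
    allocations = [
        ("A. Rivera", "Data lake pilot", 0.6),
        ("A. Rivera", "Vendor portal", 0.5),
        ("B. Chen", "ERP consolidation", 0.8),
    ]

    utilization = {}
    for resource, _, share in allocations:
        utilization[resource] = utilization.get(resource, 0.0) + share

    for resource, total in sorted(utilization.items()):
        flag = "overallocated" if total > 1.0 else "ok"
        print(f"{resource}: {total:.0%} ({flag})")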

Questions this view can answer:

  1. Do we have a justification for headcount increases?
  2. Are organizational roles continually overallocated?
  3. Which functional managers require assistance in managing their business demands?

As a CIO, I find the resource view one of the most powerful. We know we’re only as good as our worst organizational link. This view helps us identify where we have weak links in the chain.

5. Financials

Financial management is integral to responsibly managing portfolio investments. Financial planning is often a quarterly cycle that’s all-consuming for an organization. Project managers are scrambling to true-up project forecasts. Functional managers are balancing demand with capacity. Leadership is anticipating delivery roadblocks.

Financials are a lagging indicator, but they still offer huge insights for a CIO. If financial spend is trailing forecast, it’s a good indicator that we have influences pushing against our ability to deliver organizationally. These influences could be political, environmental, social, technological, economic, or legal.

When presenting financials, less is more. However, we do need the ability to drill down deep. The Six Sigma 5-Whys come to mind here. Why is the variance off? Why is that project the major contributor to our variance? Why is that vendor not responsive? Why are we paying when we’re not receiving delivery? Why do we have a contract that doesn’t tie deliverables to payments?

Here are key areas of concern for visualizations:

  • Budget
  • Forecast
  • Actuals and accruals
  • Variance and absolute variance
  • Variance by project
  • Variance by project manager
  • Variance by functional manager
  • Spend by vendor
  • Projects by vendors impacted
  • Vendors with projects impacted
  • Contracts by status
  • Contracts by vendor
  • Contracts by functional manager
  • Contracts by business unit by vendor

Accurate financials can be a game-changer when it comes to managing business expectations. Financial accuracy is a discipline and a mindset that needs to be cultivated and rewarded internally.
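
As a sketch of how the variance views can trigger the 5-Whys, a drill-down can start from a flat export of budget lines; the projects, figures, and 5 percent threshold below are illustrative assumptions:

    # Flag projects whose actuals-to-forecast variance exceeds a tolerance.
    # Project names, figures, and the 5 percent threshold are illustrative.
    budget_lines = [                 # (project, forecast, actuals)
        ("ERP consolidation", 1_200_000, 1_310_000),
        ("Vendor portal", 400_000, 385_000),
        ("Lab system upgrade", 750_000, 690_000),
    ]

    for project, forecast, actuals in budget_lines:
        variance = actuals - forecast
        pct = variance / forecast
        flag = "investigate (start the 5-Whys)" if abs(pct) > 0.05 else "within tolerance"
        print(f"{project}: {variance:+,} ({pct:+.1%}) -> {flag}")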

Questions this view can answer:

  1. Which projects have a greater financial risk?
  2. Do we have contracts about to expire?
  3. Has our CapEx-to-OpEx ratio changed from our initial estimates?

Solid financial management is required to scale or grow any business. A variance of 5% on $50 million is $2.5 million. However, a variance of 5% on $5 billion is $250 million. The cost of lost opportunity could be the innovation game-changer your business partners are looking for. Take the time to put the people in place, establish the processes, and invest in technology to improve financial accuracy.

6. Investments by area (total cost of ownership)

“What have you done for me lately?” Investments by area help to immediately quiet those business partners who feel they never get a fair shake when it comes to funding. Who was actually allocated what dollars has no relevance in their reality. These difficult business partners find a way to consume time that would be better spent on other areas. In their defense, they’re often the most steadfast defenders of value for their business. We, as CIOs, need to meet them halfway.

The investment-by-area view tracks financial performance for each business unit. Project-cost accounting tracks the financial performance of resources (employees, non-staff augmented resources, and vendors), software, hardware, projects, licensing, infrastructure, vendors, run (keeping the lights on), product development (small enhancements), equipment, professional services, and more. Everything the business consumes financially is tracked in the investment-by-area view.

Ideally, this is project-cost accounting for each business unit rolled up to its functional head or executive leader. Not all organizations are ready for project-cost accounting. In that case, a “show back” approach is useful and serves as a step toward adopting project-cost accounting. This approach shows business partners their investments by area. “Investments by area” sounds more strategic—like we’re investing in innovation—whereas “total cost of ownership” sounds like we’re paying off debt.

Questions this view can answer:

  1. How much value is IT providing to the business?
  2. What’s the financial commitment from IT to our business partners?
  3. Is spending on each category reasonable based on our strategy?

This approach should illustrate the end-to-end cost. This includes the cost from inception (plan, build, run) to destruction (sunsetting the technology). This complete, multiple-year view helps business partners to understand how their decisions have a direct impact on committed spend in future years.

Is your role secure? Do you feel you have organizational and business-partner support to drive your agenda and enable the strategic mission? You might. You might not. Establishing a CIO dashboard provides hyper-transparency into IT’s ability to meet expectations. A CIO dashboard offers insightful and actionable intelligence that’s vital to justify the role of the CIO. Are you justifying your role?

First, define “good” to scale your organization

Reposition your organization for success by connecting your vision to behaviors for organizational growth.

Next year will be here soon. As the new year begins, your leadership team will shoulder the same challenges. Team gaps that slowed progress this year will still be present unless the team structure, department, and organization change.

Have you recently been asked to step in and help transform an underperforming team? What’s the first thing you do? Maybe you quickly identify low-hanging fruit (immediate gaps) that could be solved in under 30 days. Next, you might explore tactical areas (short-term wins) that could be addressed in 30-90 days. Finally, you may look at strategic areas that would take more than 90 days to solve (game changers). There’s just one problem. You’re taking the same approach and applying the same tools as your predecessor.

Stop asking your team what activities they’re doing. That approach doesn’t work. What does work is defining “good” up front.

Use influence to define “good”

Constructed for the 1889 World’s Fair, the Eiffel Tower remains one of the most-visited paid monuments in the world. The tower stands 324 meters tall—about the height of an 81-story building. Maurice Koechlin and Émile Nouguier were co-designers of the Eiffel Tower and published the first designs of the tower in 1884. However, credit for the tower’s final design and engineering is usually attributed to Gustave Eiffel.

Eiffel had a vision of what good looked like, based on the influence of existing architectural excellence. He could have modeled the tower’s structure on the copper Statue of Liberty, designed by French sculptor Frédéric Auguste Bartholdi. His vision could have mirrored the Washington Monument, designed originally by the architect Robert Mills and constructed of marble, granite, and bluestone gneiss; it was also the tallest structure in the world at the time. Eiffel might have leveraged the Cologne Cathedral as a model of architectural brilliance, completed in 1880 faithful to its original medieval plan and constructed of stone with wooden internal features.

Despite the more than 5,300 plans and drawings that had been created for the tower, Eiffel took an alternative approach. He first defined good and then took advantage of progressive materials available at the time—specifically, puddled iron, a precursor to construction steel.

Focus on the right stuff

You’re not alone in your struggle. Each day in the office, you’re putting out fires. Business relationship managers (BRMs) are fighting to move from tactical order-takers to trusted strategic partners. You strive to have that strategic discussion, pontificating on a mythical day, nine months down the road, when there’s team stability and consistent value delivery.

Inevitably, the harsh reality of today’s organizational dysfunction brings you back to address recurring problems—problems thought to have been previously solved. Do any of these sound familiar?

  • The license for one of your development products expired.
  • A vendor was working at risk after a statement-of-work expired.
  • Resources had been working hard—just on the wrong stuff.
  • Work appeared to be happening, but the outcomes weren’t being realized.
  • The team size was perceived as bloated, yet delivery continued to waffle.
  • A hero complex was embraced when issues arose, and root-cause analysis was an afterthought.
  • Operational urgencies had bled out strategic energy.

Often, leaders fail to define good before taking action. When absorbing a new team that’s struggling to perform or recharging an existing team, don’t ask everyone on the team what activities they perform. At best, you’ll document the current state of activities that defined low performance; at worst, you’ll miss the real activities not being performed that directly contribute to the dysfunction of the team. You must first define good.

A blueprint for organizational redesign

Let’s take a lesson from Eiffel about his approach to building the Eiffel Tower. A similar approach can be used to shift organizations from low-performing to high-performing.

The Applied Convergence for Organizational Excellence approach (ACOE, pronounced “ace”) is a technique I’ve modeled over the years. The process has 10 steps to help you define good for your organization or team.

  1. Apply frameworks for influence
  2. Elaborate specialized roles
  3. Align capabilities to roles
  4. Assess the capabilities
  5. Clarify the roles
  6. Define good
  7. Map individuals to roles
  8. Interview for role misalignment
  9. Reset job responsibilities
  10. Share the vision

Complete these 10 steps, and you’ll have defined good. Let’s walk through each.

Illuminating the journey to good

Apply frameworks for influence means basing your capabilities on one or more frameworks such as Control Objectives for Information and Related Technology (COBIT), Information Technology Infrastructure Library (ITIL), IT Service Management (ITSM), Skills Framework for the Information Age (SFIA), or IT Capability Maturity Framework (IT-CMF).

Elaborate specialized roles means you expand each capability or function into roles that would be applicable for the most mature and specialized organization.

Align capabilities to roles means defining twelve core roles and aligning every capability to one of those twelve roles.

Assess the capabilities involves reviewing each capability to determine the maturity level of that capability (0% non-existent, 25% developing, 50% defined, 75% managed, or 100% optimized).

Clarify the roles means capturing individual feedback and ensuring that resources participate and provide feedback on the roles.

Define good is a step that sets the stage for what good looks like in terms of organizational capabilities and roles.

Map individuals to roles refers to linking existing individuals to the newly defined roles.

Interview for role misalignment involves asking each individual in a role today to identify the responsibilities they believe are within their job description. It doesn’t matter whether the resource says, “Yes, it’s in my role” or “No, that’s outside my responsibility.” Your goal is to identify the gaps not being addressed today; i.e., the gap between poor performance and your definition of good. A simple application of this step is to put the role responsibilities in a two-column PowerPoint slide and then highlight in blue any responsibility the resource feels is outside the scope of their work. The responsibilities in blue reveal the organizational gaps, by role, on the way to good.

Reset job responsibilities means creating a new job baseline for the resources and establishing clear ownership for all capabilities and functions.

Finally, Share the vision is a process that communicates the converged vision of the new capabilities and roles across your organization.

At this point, you’ve defined good. The current organization’s responsibilities are clear, and the gaps to advance the organization to good are identified by role.

Champagne at the top

It’s important to consider the influence of great organizations when designing organizational structures to foster high-performance teams. However, like Eiffel, be sure to apply new materials, ideas, tools, and techniques in your approach. The creators of existing organizations didn’t have the luxury of incorporating these new patterns and designs into their models. If they did, maybe they’d be on top of the Eiffel Tower, toasting at the champagne bar. It’s hard to beat a good glass of bubbly.

Your peers are going to suggest you survey individuals to understand organizational gaps by taking an inventory to capture all the aspects of poor performance. Think differently—play chess while your competitors are playing checkers—and begin by defining good.

Portfolio velocity: the new measure of business value realized

Welcome to the last time you report on the number of projects your team delivered.

Are you still talking about how many projects your team delivered last year? Please don’t. You’re misrepresenting what your team, department, and organization delivered. Stop talking about projects completed and start talking about portfolio velocity.

At the beginning of each year, CIOs need to commit to deliver work. This could refer to a group of critical projects, features, buckets of work, or even just finishing certain carryover projects. Each year, we make these commitments, and each year we tell ourselves there must be a better way.

The fallacy of using projects as a metric for capacity

How does your team estimate their capacity for the year? The most common measure is the simplest: the number of projects completed. On the surface, this sounds reasonable. We have critical projects. If those projects are completed, we declare victory. There’s just one minor problem: this method is flawed.

Organizational growth presents huge opportunities, and it carries equal challenges. Living and dying by the number of projects your department delivers doesn’t work. Here’s an example of why:

Year One went great. There were 100 projects completed, and the team was perceived as really delivering. In Year Two, the team struggled to deliver 50 projects. And by Year Three, the team delivered only 25 projects. What happened? The budgets for the projects were doubled each year. Staff was added. There’s no excuse. The peppering of reasons—ranging from too much work to not enough time—falls on deaf ears.

This isn’t the full story. We need to add context. When the team was delivering 100 projects, those projects weren’t complex, averaging less than two months with budgets under $50K. Companies mature as they grow. Year Two required additional roles beyond the original technical lead and developer. An architect modeled the solution. The security lead addressed authentication and authorization. The quality lead tested external interfaces, which didn’t exist in Year One. A project manager managed the vendor relationship and procurement process.

In Year Three, complexity, effort, and budget all increased. Projects required more rigor to succeed: working across time zones, engaging third parties, and managing greater business and technology complexity. Projects involved multiple departments, which required shared services—e.g., legal, procurement, infrastructure, service desk, etc.—and demanded an average of four to nine months of effort to deliver. Projects also required more funding given the increased number of participants.

We know that going from 100 to 50 to 25 projects delivered per year isn’t a story we want to tell. Our team has worked harder each year. We’re building a team of more competent staff. The organization has provided greater funding each year. We need to rewrite this story.

Agile velocity

Velocity is a great tool to measure the performance of a team. A team’s velocity is the amount of work the team delivers over a fixed period. In Scrum, velocity is commonly measured in “points,” also known as story points. Points measure the effort for a given story, i.e., a specific piece of work.

Consider three factors when estimating points: complexity, effort, and doubt.

Using a scale of 1 to 5, 1 is the least complex and 5 is the most complex. What matters is the relative value of the points; we could have chosen 1,000 to 5,000 or 100,000 to 500,000. The key is that a 2 is twice the value of a 1. Agile teams can have trouble with absolute values, and the concept of triangulation helps ensure consistency: periodically during the estimating process, compare a 1-point and a 2-point story and discuss whether the 2-point story is really twice as complicated.

The estimation fallacy

In his book, The Black Swan: The Impact of the Highly Improbable, Nassim Taleb describes several lessons that impact how we estimate:

  1. Epistemic arrogance: people claim they saw it coming; e.g., we believe we estimate well in general
  2. Narrative fallacy: people bend stories to fit their current understanding, missing true cause and effect; e.g., “this project should be like that one”
  3. Asymmetry: an unequal upside and downside; e.g., missing an estimate by a factor of 2 on the late side is easy

Michael Bolton provides a great example of project-estimation fallacies with Black Swan-like events. In his example, a project is estimated to take 100 hours, with each task requiring 1 hour. He added some basic assumptions:

  • 50 percent of tasks complete in 30 minutes, not 1 hour; i.e., the task finishes early
  • 35 percent of tasks complete on time
  • 15 percent of tasks experience bad luck

The 15 percent of “bad luck” tasks break out into the following assumptions:

  • Little slips: 8 tasks slip to 2 hours
  • Wasted mornings: 4 tasks slip to 4 hours
  • Lost days: 2 tasks (one in 50) slip to 8 hours
  • Black cygnets (baby black swans): 1 task slips to 16 hours; i.e., 1 in 100 tasks takes two days

Using this simple example, the probabilistic average project would come in 24 percent late. A project committed for Q4 would be delivered in Q1.
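
The arithmetic is easy to verify under the assumptions above:

    # Expected duration of the 100-task, 100-hour project under the
    # "bad luck" distribution described above.
    hours = (
        50 * 0.5    # half the tasks finish early, in 30 minutes
        + 35 * 1    # 35 tasks finish on time
        + 8 * 2     # little slips
        + 4 * 4     # wasted mornings
        + 2 * 8     # lost days
        + 1 * 16    # one black cygnet takes two days
    )
    print(hours, (hours - 100) / 100)   # 124.0 hours, i.e., 24 percent late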

Introducing portfolio velocity

Our goal is to estimate the departmental capacity to deliver value. How do we measure this? Portfolio velocity.

Accuracy doesn’t matter, but consistency does. We can apply this new measure to estimate our departmental capacity. To calculate your portfolio velocity, perform the following steps:

  1. Select a project
  2. Estimate the complexity on a scale of 1 to 5, with 5 being the most complex
  3. Estimate the effort as the number of months to complete the project, ranging from 1 to 12
  4. Calculate the product by multiplying the complexity by the effort to determine the project velocity
  5. Sum the velocity for all projects to determine the portfolio velocity
  6. Calculate historical portfolio velocity for past years
  7. Determine reasonable growth rates on top of last year’s velocity
  8. Rank the projects in the portfolio 1 through n
  9. Draw the line between above-the-line and below-the-line projects based on your new forecasted velocity

After completing step 5, you have the forecasted demand in hand. To determine whether this velocity is reasonable, conduct a brief exercise to discover the historical portfolio velocity: look at prior years and score those projects the same way.

Let’s assume we determined that the velocity for Year One was 200 points, Year Two was 300 points, and Year Three is forecasted at 1200 points. Already our story is changing for the better. However, it’s clear our capacity isn’t going to increase four-fold.

Assuming moderate growth of 10 percent, we can forecast our Year Three capacity: 330 = 300 x (100 percent + 10 percent). Using the ranked list, we can determine which projects will be delivered in Year Three while keeping us at or slightly below our target portfolio velocity.
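
Pulled together, the whole calculation reduces to a few lines. Here’s a minimal sketch using the 300-point Year Two baseline; the project names and scores are hypothetical, and the ranked list is assumed to be in priority order:

    # Portfolio velocity: complexity x effort per project, summed, then cut
    # against a growth-adjusted capacity target. All data is hypothetical.
    ranked_projects = [              # (name, complexity 1-5, effort in months 1-12)
        ("ERP consolidation", 5, 12),
        ("Data lake build-out", 5, 11),
        ("CRM replacement", 4, 12),
        ("Security uplift", 5, 9),
        ("Vendor portal", 5, 8),
        ("Analytics dashboards", 4, 9),
        ("Lab system upgrade", 5, 6),
        ("Reporting refresh", 3, 8),
    ]

    demand = sum(c * e for _, c, e in ranked_projects)   # steps 4-5: forecasted demand
    target = 300 * 1.10                                  # steps 6-7: 330 points of capacity

    committed, total = [], 0
    for name, c, e in ranked_projects:                   # steps 8-9: walk the ranked list
        if total + c * e > target:
            break                                        # the line: the rest is deferred
        committed.append(name)
        total += c * e

    print(f"demand={demand}, capacity={target:.0f}, "
          f"committed={len(committed)} of {len(ranked_projects)} projects")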

Estimation exploratory options

The 1-to-5 complexity scale works well. Alternatively, a base-2 sequence (0, 1, 2, 4, 8, 16, 32, 64, 128) or a Fibonacci-like sequence (0, 1, 2, 3, 5, 8, 13, 20, 40, 100) can help make the difference between projects more distinct.

However, humans are notoriously poor estimators, and I’ve found that teams more effectively estimate based on common orders of magnitude such as 1 to 5 or 1 to 10. It’s easier for a team to estimate the size of a mouse at 1 and an elephant at 5 versus a mouse at 1 and an elephant at 8 or a cargo ship at 64.

Measure what’s valuable

Too often, we get swept away by maintaining historical measures. Being accountable is good. Ensure you’re holding your team, department, and organization accountable for the right measures.

Make this year different—with hard commitments—by determining your portfolio velocity.