Spend more time playing and less time working with this productivity hack

Are you trying to figure out the next productivity hack? Interested in discovering a way you can improve? Curious how to get more out of the time you have every day?

I want to share something that I’ve used for many years. This method is the main reason I’m so productive: it’s why I can write more every day, create more content, and ultimately spend time doing the things I love, like sailing, flying planes, and scuba diving.

Hi, I’m Peter Nichol, Data Science CIO. Today we’re going to talk about different approaches to improve your productivity. In 1918, Charles M. Schwab was trying to come up with ideas to help his management team. At the time, he was president of Bethlehem Steel in Bethlehem, Pennsylvania, a massive facility producing steel for the military and many other operations.

He was trying to figure out that next productivity hack. While brainstorming, he decided to reach out to his old friend, John D. Rockefeller Sr., and ask for suggestions. Rockefeller referred him to a gentleman named Ivy Ledbetter Lee. Lee was known mostly as a public relations specialist, but he had also built a reputation as a productivity expert. Schwab asked Lee to come to the Pennsylvania plant and work with his management team. Lee interviewed Schwab’s executives, spending only 15 minutes with each team member. Once Lee was done, he provided each executive with a single tool. Later, Schwab discovered that Lee had introduced each team member to what would later be known as the Ivy Lee Method: an approach for staying hyper-focused on your top priorities.

The concept was beautifully simple: write down your top six priorities and work on them every day. If you could only get six items done each day, what would they be? This dovetails with Tim Ferriss’s question: “What would you do at work if you only had two hours a day?”

Lee suggested each executive work their list top-down (1 through 6) in order of priority, completing the most important item first, then the next most important, and so on. Distractions are a reality in any environment, and the higher your seniority, the more frequently unexpected, critical fire drills pop up. If you didn’t complete your list of six items by the end of the day, those items roll forward onto tomorrow’s list. If you did complete all six, you create a new list of six for the following day. This method keeps the focus on the items that add the most impact to your day.
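For readers who like to see the mechanics, here’s a minimal Python sketch of that roll-forward rule. The task names and the end_of_day helper are purely illustrative, not part of Lee’s original method:

```python
def end_of_day(today, done, new_ideas, limit=6):
    """Ivy Lee roll-forward: unfinished items carry over, then top the list back up."""
    carried = [task for task in today if task not in done]       # roll forward
    refill = [task for task in new_ideas if task not in carried]
    return (carried + refill)[:limit]                            # never more than six

today = ["budget review", "1:1 with lead", "vendor call",
         "draft roadmap", "metrics review", "hiring plan"]
done = {"budget review", "1:1 with lead"}
print(end_of_day(today, done, ["team offsite", "OKR check-in"]))
# ['vendor call', 'draft roadmap', 'metrics review', 'hiring plan',
#  'team offsite', 'OKR check-in']
```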

The method allows leaders to focus on the most critical aspects of their work while spending less time on the least essential elements. We often apply the Pareto 80:20 rule, where 20% of the activities provide 80% of the value. Unfortunately, we tend to focus on the other 80% of activities because they’re the low-hanging fruit: typically more tactical and usually completable independently, which makes them easy to finish. Yet they don’t provide the most value. This is where the Ivy Lee Method comes into play.

If you apply this method to your daily activities, you’ll find that you slowly increase productivity. Of course, dramatic transformations don’t occur overnight, but over months and years, it’s unbelievable how much more productive you’ll be. As you think about what you want to accomplish this week to stay on track for your quarterly goals, consider using the Ivy Lee Method. Begin with the six most important items, and if you get those completed, great: work on the following six.

Hi, I’m Peter Nichol, Data Science CIO. Have a great week!

Applying predictive analytics to lock in year-end success

Can you answer the following questions?

  1. Do you know your quarterly results?
  2. Do you know how your team is performing?
  3. Which aspects of your plan are working out great and which aren’t exactly running according to your goals? Is your team getting it done?
  4. How is financial performance trending for your organization or department?
  5. Are business issues being resolved, or is that debt growing?
  6. What does your year-end look like based on your performance?

I’m going to share some insights to help you answer these questions.

Hi, I’m Peter Nichol, Data Science CIO.

Today we’re going to talk about predictive analytics, but this might not be what you think it is. We’re not going to cover digital asset twins, where we try to understand a physical asset’s properties through digital data. Operational digital twins are out of scope in this article, too; they have to do with digitizing the supply chain. Future digital twins are also not part of our discussion; those usually involve real-time streaming information and computational models such as neural networks.

What we are going to talk about is a simple, predictive-analytics example applied to day-to-day business operations. Let’s shift away from model discovery, asset management, and risk management into a practical design that works. We’ll explore a model I’ve been using for years that’s highly impactful. Here’s how it works.

Every day and every quarter, we try to predict future outcomes. We try to use predictive analytics to offer insights into future events based on previous events. We know our results from yesterday. We know our results today. In theory, can’t we predict our performance for tomorrow, next week, or next month? Yes, we can. That’s precisely what we’re going to do in this discussion.

Here’s the good news. You don’t need Minitab. You’re not going to need the IBM SPSS Statistics package, RStudio, JMP, or even OriginPro. Excel will work just fine (although all of those tools are compelling when applied to a suitable problem space).

This simple application takes the form of a side-by-side bar chart with a cumulative trend line. We’ll build two examples: the first models financial variance, and the second predicts trending rates and identifies abnormalities.

Financial variance model

Our problem statement is that we’re attempting to model our future financial variance based on historical information. While this is far from a perfect model, the principle is that historical performance will be an indicator of future performance. Essentially, we want to answer this question: If we stay on the current path, what will be our financial variance at year-end?

We start with a model of our monthly planned spend and roll that up to our annual forecast or budget. Hopefully, this isn’t a flatline model (i.e., $1 million a month, every month), as a flatline assumes every month is equivalent, which is rarely the case. We need to consider environmental variances, the cyclical nature of payment schedules, and other inherent business-operations challenges, so we look at each month separately. Once you have that number for each month, it becomes your forecasted spend.

Every month, the books will eventually close, and we’ll have actuals for that prior month—the actual spend. We now can chart these two values per month in a side-by-side bar chart to visually depict the variation.

On top of these bars, we’ll overlay a trend line representing the cumulative variance over time. For example, let’s say our January forecasted spend was $1MM and our actual spend was $800k; we have a variance of $200k that goes into the trend line. In February, if our forecast is $1.2MM and our actual spend is $1.0MM, we have another $200k variance. In this model, the February trend line would be $400k ($200k + $200k), and so on. This visualization immediately shows whether we’re heading toward our objective of low variance or whether our variance is growing, allowing us, as executives, to take corrective action before the year-end numbers are locked down. Similarly, we can use this same model to chart variation and adherence, and to see how close we’ll be to target at year-end, for multiple variables, including defects, disputes, sprints, features, enhancements, product launches, etc.
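If you’d rather script the arithmetic than build it by hand, here’s a minimal pandas sketch using the January and February figures above (in $k). It assumes pandas and matplotlib are installed; Excel gets you the same chart:

```python
import pandas as pd

# Forecast vs. actual spend per month, in $k (the example figures above).
df = pd.DataFrame({
    "month": ["Jan", "Feb"],
    "forecast": [1000, 1200],
    "actual": [800, 1000],
})
df["variance"] = df["forecast"] - df["actual"]  # monthly variance: 200, 200
df["cumulative"] = df["variance"].cumsum()      # the trend line: 200, then 400

# Side-by-side bars with the cumulative trend overlaid on a secondary axis.
ax = df.plot.bar(x="month", y=["forecast", "actual"])
df["cumulative"].plot(ax=ax, secondary_y=True)
```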

Defect variance model

We can use our same model to chart variation and adherence for defects. The problem statement, in this case, is: How many defects will we have at year-end? We use a forecasted model to estimate how many defects we think will be created per month; one bar will be defects created. We’ll also track how many defects were actually resolved each month and label these defects resolved. As in the model above, we’ll chart these two values monthly, side by side. Then we’ll use the cumulative delta (i.e., defects not closed, or additional defects burned down beyond our target) as our trend line. This trend line now becomes our backlog.
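The backlog trend line is the same cumulative-delta arithmetic as the financial model; here’s a short sketch with illustrative defect counts:

```python
import pandas as pd

# Defects created vs. resolved per month (illustrative numbers).
df = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar"],
    "created": [120, 90, 100],
    "resolved": [100, 110, 95],
})
df["backlog"] = (df["created"] - df["resolved"]).cumsum()  # trend line = running backlog
print(df[["month", "backlog"]])  # Jan: 20, Feb: 0, Mar: 5
```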

Whether we’re accumulating 2,300 issues a month, 50 issues a month, or whatever it is, we can quickly identify a trend; it’s straightforward to see because the trend line is either increasing or decreasing. The result is that we’re able to predict the difference in future outcomes. Are we heading in the right direction, or are we not?

By applying these simple techniques, you’ll have a powerful tool to anticipate the business and operational changes required to ensure that your department posts outstanding year-end results.

Hi, I’m Peter Nichol, Data Science CIO. Have a great week! Cheers.

Chart 1.0 Prediction Trend

Chart 2.0 Team Velocity

Chart 3.0 Open Issues by Age

Chart 4.0 Open Issues by Age with Priority

Breaking into data science

Are you an executive who’s trying to break into the data-science space? Are you a project or program manager running a large operation and attempting to figure out how to get more involved in data science? I’m going to provide some insights.

Hi, I’m Peter Nichol, Data Science CIO.

Today, we’re going to focus on what data science is, the typical areas of data science, and how to get involved in untapped opportunities. Let’s get into it.

What is data science? Data science uses the scientific method, algorithms, and statistical inference, applied to structured and unstructured data, to generate insights. We’re talking about data and trying to get insights from that data.

Conventionally, there are five significant areas in data science:

  1. Computer science
  2. Data technology
  3. Data visualization
  4. Statistics
  5. Mathematics

First, we have computer science, the study of computation and information. Second is data technology, which focuses on data derived from machines and humans. Third, data visualization offers graphical representations of data to provide insights. Fourth is statistics, the practice of summarizing information to draw conclusions from data. Fifth, we have mathematics, the study of logic and the reasoning behind many data-based decisions.

You might be thinking, “I don’t really have a background in dimensionality reduction or P values,” or maybe you can’t recall Bayes’ theorem off the top of your head. Perhaps your sampling techniques aren’t on the bleeding edge of technology innovation. That’s okay. Much of data science has nothing to do with those aspects but is just as critical.

One area of data science that’s rarely discussed is vendor management. When I speak to many world leaders—from those pioneering logistics to those running biotechnology companies—there’s one underlying theme that’s always present: Nobody has the money to hire an army of folks to go out and generate all these new insights. Hiring data scientists and data analytics leaders is expensive. Many companies don’t even have a formal Chief Data Officer. In many cases, data-science resources—even if you do have the funding for them—are difficult to source and hire because they’re niche to specific domains.

Even knowledgeable data resources on your teams today likely don’t have the deep expertise required to make sustainable data-analytics transformations. So, what happens? As executives, we have to engage other types of leaders and partner with other companies to supplement these internal capabilities. In short, we contract out capability needs, usually in the form of fixed-price contracts. This model introduces new skills where we have additional gaps.

We require leaders who understand vendor management. Specifically, we need resources with general vendor- and contract-management experience as well as expertise in contract terms, sourcing, and category management. We also need experience contracting and leading the procurement activities of data-specific initiatives. These questions might help determine whether you have that necessary experience:

  1. Do you have experience negotiating vendor contracts?
  2. Are you comfortable identifying players in a niche domain space; e.g., vendors that do data ingestion and data governance, vendors that integrate with Snowflake or other data-lake solutions, etc.?
  3. Can you facilitate discussions to help narrow down potential vendors based on limited requirements?
  4. Do you have experience with data-driven contracts?
  5. Have you contracted for cloud compute and cloud storage services provided by third-party vendors? Are you comfortable with the terms and conditions of these types of contracts?
  6. Are you familiar with cloud-based data-transfer costs and approaches to control or limit cost overruns?
  7. Have you previously drafted a contract for data services or data-enablement initiatives?

We require resources that can internalize and enable organizational capabilities around data science, data insights, or data analytics. If you have a background in vendor management and are curious about data science, explore the untapped area that many leaders don’t want to talk about—vendor management.

Resources that are able to join teams, quickly come up to speed, and build capabilities by leveraging third-party partners are priceless. You might even have experience working with consulting companies or data-specific vendors with deep expertise. This works too.

Typically, executives don’t have the luxury of fat budgets to bring in 20 resources permanently to address data demands. To fulfill the growing asks that are pushing against our data enablement teams, we need to expand our capabilities. Usually, this means creative contracting.

To meet our business partners’ enormous data demands, we need to get creative, strategize, and find data-literate partners. The best tip I can offer is that, if you’re interested in learning more about data science or data-science capabilities, focus on the business’s vendor-management side. This is a crucial shared priority for the CIO and the CFO. Data-vendor management is here to stay and has vast untapped potential.

Hi, I’m Peter Nichol, Data Science CIO. Have a great day!

Discover the untapped potential of DataOps

Is DataOps the same as DevOps? It’s not, and I’m going to explain why.

Hi, I’m Peter Nichol, Data Science CIO.

One of the challenges we see with data operations (“DataOps”) is that folks don’t understand what it means. The term is becoming more and more popular, so I thought I’d take a minute to explain it. DataOps is a lot different from DevOps. DataOps focuses on automation: it’s a process-oriented methodology that data-science teams use to simplify and streamline a data-science workflow, typically to generate analytics. Where did the concept of DataOps originate?

It came from W. Edwards Deming, an American engineer, statistician, professor, author, lecturer, and management consultant, also referred to as “the father of quality.” In general, he focused on quality and production control and, more specifically, on statistical process control. If we look at DataOps’ roots, specifically its relationship to DevOps, we start to understand what DataOps is all about.

Let’s break down what DataOps means. The foundation of DataOps has three main principles.

First, like DevOps, DataOps focuses on the delivery and development of some entity. Second, it’s based on agile, commonly applied to software development, and on the Theory of Constraints, which centers on removing obstacles and taking the shortest path to an outcome. Third, DataOps draws on lean manufacturing principles. Lean manufacturing emphasizes quality and production efficiency and uses statistical process control to monitor and control process variance or deviations.

Okay, now that you understand the foundation, where’s this going? When we start to think about value and how it comes into the picture, we can add a data-factory or data-orchestration concept. The benefit of data operations is that it’s not only about the outcome we’re looking for (a validated analytical model with visuals); we’re also making sure the data goes into the process correctly. We focus on the throughput as well: what’s happening with the data and information as they move through the model.
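To make the throughput idea concrete, here’s a deliberately simplified sketch (not any particular DataOps tool) of a statistical-process-control check: it flags a pipeline run whose record count drifts outside three standard deviations of recent history, so a bad load is caught before analytics are published. The numbers and the spc_check helper are hypothetical:

```python
import statistics

def spc_check(history, current, sigmas=3.0):
    """Return True if the current run's volume sits within mean +/- sigmas * stdev."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return mean - sigmas * stdev <= current <= mean + sigmas * stdev

# Row counts from previous daily loads (illustrative numbers).
history = [10_120, 9_980, 10_240, 10_060, 9_900, 10_150]
print(spc_check(history, 10_030))  # True  -> within control limits
print(spc_check(history, 4_500))   # False -> investigate before publishing analytics
```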

Lastly, who’s using data operations? This is the big difference between DataOps and DevOps: the users. DevOps is typically staffed by software engineers who know multiple languages, all dedicated to streamlining the orchestration of technical delivery. In DataOps, we have other roles. We add data scientists and, typically, data analysts who don’t care about all the technical-orchestration details. They want to make sure their models can be simulated and executed, and that the data, the outcomes, and ultimately the insights are usable and can be transformed into some benefit for their business users.

As you think about DataOps versus DevOps, reflect for a minute. There might be an opportunity in your environment to emphasize DataOps to orchestrate and streamline the workflow of how your team generates quality analytics for your business partners.

Hi, I’m Peter Nichol, Data Science CIO. Have a great day!

Citizen development offers solutions for data analytics resource constraints

What if I said you could add 15 developers to your team, and it wouldn’t cost you anything?

Hi, I’m Peter Nichol, Data Science CIO.

Today we’re going to talk about citizen development. Most of our days are spent trying to put out fires and rationalizing rogue applications developed outside of standard IT governance and protocols.

As fun as that can be, what if there was a way that we could enable our users to drive data-science initiatives? Historically, we talked about power users or evangelists when resources were low and business partners demanded that they develop business-intelligence reports. Our goal at the time was to enable our business partners to be more effective and self-sufficient.

The idea behind citizen development is to provide your business partners with the capabilities to function independently and in parallel with IT. The theory is to introduce business users to low-code or no-code environments. Through this process, we can accelerate development and lower the barrier to entry for developing data insights. As it turns out, 25% of citizen developers have no coding background at all, and almost 70% are able to build functional applications in less than three months. Not bad for nearly zero coding experience.

What does this mean for you? You already have resources at your disposal in your company that are idle. These resources are dormant; they need to be discovered and activated. Using this approach, if we can inspire and activate those individuals, we increase our resource capacity. We now have additional resources we previously couldn’t leverage, and, here’s the kicker, the cost is nearly zero.

The theory is that by providing the right development environments, we can enable these resources to do their own development and build their own applications. IDEs like BlueJ, Eclipse, AWS Cloud9, Code::Blocks, and several other tools exist to provide business partners with the capabilities and functionality to develop their applications seamlessly.

It sounds like business users can now create anything they want. They can—with only one minor exception. Once those applications are developed, they must be run through a conventional IT governance process for compliance, security, and enterprise-standards adherence. Instead of business partners bringing in their own IT teams to do things better, faster, and cheaper, they converge with internal IT teams. Business partners no longer work on rogue IT initiatives that were developed in isolation. We’re collaborating as one team. We’re starting to converge.

The added benefit is that business partners get firsthand exposure to the obstacles IT faces while working hard to deliver our organization’s functionality. Convergence is the whole point of citizen development. It allows our business leaders to experiment and accelerate as fast as possible on their own, with guardrails. This creates different value opportunities and inspires leaders to develop and build functionality that supports our business operations.

Does your department have a backlog of issues that have been requested over the last, say, six or nine months? Your team has a limited number of developers, data analysts, data scientists, and capable programmers. Your capacity isn’t infinite, yet the demand for new requests seems almost endless. By leveraging citizen development, we now have tools and techniques to help burn down that backlog and converge as one unified team.

Hi, I’m Peter Nichol, Data Science CIO. Have a great day!

Why team norms transform team dynamics

Do you know any leaders who accepted new opportunities, changed jobs, or joined an organization as a new leader this year? Was that leader you? Great. I have some excellent tips for you.

Hi, I’m Peter Nichol, Data Science CIO.

Today we’re going to talk about team norms. Every team has informal and formal ways to integrate, collaborate, and support the behaviors it encourages. The concept of team norms is to start with principles the team explicitly generates itself, defining the behaviors the team promotes. Doing this helps focus the team on cooperative, accepted group behaviors.

Sure, as the leading executive, you can make a demand and say, “This is what we’re going to do.” Or you can announce the ideals, mantra, or ideology the team will follow. But this approach doesn’t build joint ownership of the behaviors you just communicated. If you want co-ownership, you need collaboration in developing those norms. Ultimately, you need input from your team and from the leaders who’ll perform these behaviors and maintain these norms day to day. So how does this work?

Typically, we start with a roundtable session in which you discuss exemplary behavior. In a roundtable fashion, the group slowly develops a list of norms they’ll accept. By doing this, we establish how our team will perform and interact. This defines the dynamics of that team.

Starting in an organization as a new leader, you have an opportunity to change team interactions. Some of these interactions may be positive, and some may be negative. What’s relevant here is that we’re redefining how we want this team to interact for optimal performance. There are, of course, certain situations in which joining an existing high-performing team and offering up new norms doesn’t add a lot of utility.

However, when you pick a new team internally or join a new team externally, defining team norms is an excellent way to unify and establish a team performance baseline. Let’s discuss some examples of team norms. Examples of norms might be, “Treat others fairly,” or, “Be present,” or, “Be here now.” These norms are focused on engagement. If the team is in a meeting, electronic devices are put down, and the team stays focused. Suppose someone does need to respond to an email or check that voicemail. Not a problem—they step out of the meeting and check that email or take that call. When that individual can fully engage and give other team members the respect they deserve, they rejoin the discussion physically and mentally.

Another norm might be, “Respond to email promptly.” What does this mean? Is that 24 hours? Is it 72 hours? We need to define those norms clearly. What if you receive a voicemail? Should you pick that up within 24 hours, or is a couple of days okay? Each team norm defines standards for baseline performance.

Team norms help the team understand what’s good performance and what’s not. They also allow the team to challenge the leader or executive in a healthy way. Another norm might be, “You’re always able to challenge decisions respectfully.” So yes, we want different and alternative opinions. Not every opinion will be accepted every time, but we want a complete set of data when making difficult executive decisions.

Here are examples of team norms I find useful:

  • Listen to understand.
  • Trust is everything.
  • Foster relationships.
  • Practice being open-minded.
  • Give colleagues the benefit of the doubt.
  • Respect the time and convenience of others.
  • Treat each other with dignity and respect.
  • Avoid hidden agendas.
  • Support each other; don’t throw each other under the bus.
  • It’s okay to ask for help; in fact, it’s encouraged.
  • Have direct conversations.
  • Be self-aware.
  • Celebrate accomplishments.
  • Raise each other up.
  • Think in terms of scale.
  • Take ownership.
  • Limit technology during meetings.
  • If you join, engage.
  • Don’t be afraid to challenge others respectfully.
  • One person talks at a time—no side discussions.
  • The goal isn’t to agree, it’s to build a group that has a diversity of experience; listen to alternatives.
  • If you’ll be absent, identify coverage (if possible).
  • Respond to email within 48 hours.
  • Respond to voicemail within 24 hours.

Establishing team norms can provide significant benefits to team dynamics.

Hi, I’m Peter Nichol, Data Science CIO. Have a great day!

Biotech taking ownership of the whole supply chain

Usually, there is a lot of energy and money dedicated to speed to market, service effectiveness, and driving new business models. Typically, this is represented as breaking down silos, which might include:

  • Demand forecasting
  • Supply planning
  • Production manufacturing
  • Logistics planning
  • Sales and operations

We’ll start with demand forecasting in a traditional supply chain. However, we could just as easily be talking about biotechnology manufacturing or hospital demand for vaccines on hand at any given location. Typically, we extrapolate demand. Then we observe the relationships between independent and dependent variables, such as previous sales and future demand. Next, we apply internal data, past trends, and maybe customer signals (inputs). There’s just one problem: COVID came and crippled this conventional demand-forecasting process.
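As a sketch of that conventional extrapolation step, here’s a one-variable least-squares fit of next-period demand on prior-period sales. The numbers are synthetic and numpy is assumed; real forecasts layer in seasonality and many more inputs:

```python
import numpy as np

# Prior-period sales (independent) vs. next-period demand (dependent), synthetic units.
prior_sales = np.array([100, 120, 130, 150, 170, 180])
next_demand = np.array([110, 125, 140, 155, 175, 190])

slope, intercept = np.polyfit(prior_sales, next_demand, deg=1)  # least-squares line
forecast = slope * 200 + intercept  # extrapolate demand if sales reach 200
print(f"{forecast:.0f}")  # ~207 under this toy fit
```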

Today, these areas are becoming more efficient by using sophisticated analytical approaches, with collaborations across multiple functions, channels, and suppliers. But guess what? It’s still not working. Healthcare and biotechnology companies required a new strategy. First, they tried the more traditional approaches to maximize value, including:

  • Focusing on employee safety and operational continuity
  • Improving productivity and performance management
  • Increasing asset utilization and efficiency (whether it’s bed turnover or sterilizing medical equipment and instruments)
  • Improving quality in day-to-day operations
  • Optimizing workforce management

These approaches saw some thin benefits but not the 10x returns that were expected. Then they took a page from the Amazon and Apple playbook. Amazon has methodically taken control of its whole supply chain.

  • Consumer side: Amazon.com, Amazon Fresh, Kindle, Whole Foods
  • Seller side: Amazon.com Marketplace, Amazon Shipping
  • Enterprise side: Amazon Web Services, EC2 (compute), Amazon Cloud Drive, AWS Marketplace, S3 (storage)
  • Entrepreneur side: Amazon Publishing, Amazon Flex (use your vehicle to deliver packages), Amazon Studios (new film series)
  • Amazon Prime Air: drones delivering 5-30 lb packages in 30 minutes or less; on January 6, Amazon bought another 11 Boeing 767-300 planes
  • Amazon’s transportation fleet: now over 30,000 Amazon-branded vehicles and another 20,000 branded trailers

Amazon wants to own it all. The ability to control demand is intriguing, but the desire to manage the entire supply chain is even more interesting. This concept also played out with Apple. Apple recently dropped Imagination Technologies; previously, Imagination was responsible for designing graphics processing units, the type used in Apple products like the iPhone. Apple plans to build this capability internally within the next 15 months.

Biotechnology companies started to pay attention and have been leveraging this to control their supply chains, concentrating on production manufacturing for scale.

  • Pfizer and its German partner BioNTech made 120 million doses in the U.S. within the first quarter
  • Aurinia Pharmaceuticals and Lonza have expanded their exclusive manufacturing relationship.
  • L’Oréal signed a leasing deal with Micreo, a biotech firm specializing in bacteria
  • Estée Lauder Companies announced a collaboration with biotech firm Atropos Therapeutics to explore lab-made ingredients

These examples highlight a strategic movement toward taking ownership of traditional supply chains that were once solely supplier operated and controlled.

Biotech investments point to the future

A great way to better understand innovation in healthcare and life sciences is to follow the money. Where are the venture capital deals? Where did top indications secure Series A funding? Who are the movers in these deals? We’ll focus on 2019 for data consistency; let’s get into it!

Where the top minds believe value lives (investments):

  1. Biopharma: $15.6B
  2. Health tech: $7.5B
  3. Dx tools (medical diagnostics, combining next-generation sequencing, artificial intelligence, and computational resources): $4.4B
  4. Medical devices: $4.9B

What are the top indications to be funded?

  1. Oncology
  2. Platform
  3. Neurology
  4. Orphan/rare disease
  5. Anti-infectives: drugs to prevent or treat infections, including antibacterials, antivirals, antifungals, and antiparasitic medications

In biopharma, who’s the most active (U.S. and Europe, 2018-2019)?

  • Alexandria, 42 deals
  • G.V. Corporate, 17 deals
  • Novartis, 16 deals
  • Pfizer, 13 deals
  • J&J, 12 deals
  • AbbVie, 9 deals
  • Merck, 7 deals

Which indication areas had the highest valuations?

  1. Cardiovascular
  2. Orthopedic
  3. Imaging
  4. Neurology
  5. Ophthalmology

Who’s the most active corporate healthcare investor?

  • BlueCross BlueShield
  • Alphabet
  • Echo
  • Philips
  • Merck
  • Tempus

We also saw a lot of movement in Series A deals. I enjoy looking at these acquisition and takeover targets to better understand how these innovative companies view themselves in the bigger picture. Where are they investing and, likewise, where are they divesting? Here are several of the movers that caught my eye.

  • Alizé Pharma, France, $82M in funding, specializing in the development of innovative biopharmaceutical drugs, proteins, and peptides for the treatment of metabolic diseases and cancer
  • Anthos Therapeutics, Cambridge, MA, in Phase II trials for innovative therapies for high-risk cardiovascular patients
  • ArsenalBio, South San Francisco, CA, cell therapy, picked up $85M in its Series A. Interestingly, even the University of California is a named investor in the round.
  • ElevateBio, MA, cell and gene therapy; two years old and just raised $170M. They focus on developing and manufacturing these therapies.
  • Talaris Therapeutics, Louisville, KY, and Boston, MA. In 2019, they picked up $100M, then another $115M in October 2020, crossing into Phase III trials with their lead drug, FCR001, which aims to prevent rejection of a donated organ while reducing infections.
  • Passage Bio, Philadelphia, PA, focusing on genetic medicines for rare monogenic central nervous system disorders. They picked up more than $200M in 2019 and recently went public, now with a market cap of $1.15B.
  • Mirum Pharmaceuticals, Foster City, CA, developing drugs for rare liver diseases. Founded in 2018, it’s now listed on Nasdaq with a $604M market cap.

Digital twins land on your doorstep

Patient value, intelligent systems, empowered workforces, and digital twins are starting to draw energy from the early adopters in life sciences.

Patient value is being pushed to improve healthcare outcomes and patient experiences. Intelligent systems leverage contextual, real-time, secure patient data, moving from systems of record to systems of insight and engagement. The empowered workforce uses digital experiences to drive culture and change employee and patient behavior.

The most interesting is the digital twin, a concept first coined by Dr. Michael Grieves in 2002. Interestingly enough, NASA first used the concept in space exploration. It was among the first concepts to connect the physical and digital worlds.
A digital twin is the generation of digital data to represent a physical object. It sounds very 2030; it’s not. Allow me to explain: the concept, while not named, has been around for a while.

  • Construction: used to model bridge structures and conduct force modeling
  • Energy: leverages this technology to simulate wind conditions for wind turbines
  • Offshore rigging: combines platform models with wave-energy data to model the motion of floats in wave-energy harvester systems
  • Manufacturing: has been applying automotive car simulations and transforming them from models into real cars for what seems like forever

It’s only recently that healthcare has been able to get on the bandwagon and start applying this technology:

  1. Representing the action of a therapy
  2. Modeling longitudinal biomarkers
  3. Patient-specific behavior predictions
  4. Simulation and optimization of drug-dosing regimens
  5. Replacement of bench and animal tests
  6. Optimization of drug product design, manufacturing, and even packaging
  7. Population-specific predictions of cardiotoxicity for arrhythmia, anticoagulant, and heart-failure medications
  8. Novel drug delivery mechanisms for biologics
  9. Medical device designs, e.g., minimally invasive heart valves
  10. Precision electrical neuro-stimulation therapies

The challenge is that biotechnology companies are often limited to simple 1D models at the cellular level, which can make translating these principles to a whole human body difficult.

Combining digital twins with prediction models has high-potential implications for patients. Predicting the outcomes of prescription drugs by applying machine learning to model lifestyle choices makes the case for medication adherence and a healthy lifestyle obvious to patients. It’s not just the technologies that are ready for this innovation; patients are ready too. Let’s move away from the one-size-fits-all dosing approach and pivot to a new paradigm of stratifying patients into groups with similar genetic, environmental, and physiological factors. Using digital twins, we can get there.

The U.S. stimulus package, under the Health Information Technology for Economic and Clinical Health (HITECH) Act, earmarked US$1.2 billion for the development of healthcare technology across the U.S.
Maybe digital twins should live only within the manufacturing world? Or perhaps there are just a few of us believers who think innovation can span industries.

Driving innovation for patients with medical research

Medical research is pushing new ideas into healthcare fast.

These ideas need a landing place where they can be discussed, designed, modeled, and experimented with before reaching patients. This is where innovation labs start to become significant. There are hundreds of outstanding examples of world-class incubators dedicated to enabling patients. I pulled together some of the most exciting innovation labs on the fringe, connecting medical research with patient outcomes.

  • A.I. Innovation Lab: Novartis and Microsoft are combining data sets and A.I. solutions. They tackle the most demanding computational challenges in life sciences and use machine learning to personalize treatments for patients with age-related macular degeneration (an eye disease) to prevent irreversible vision loss.
  • Health2047: the innovation arm of the American Medical Association (AMA), focused on chronic care, healthcare value, and physician productivity. Two impressive projects are First Mile Care and Akiri. First Mile Care offers personalized support for prediabetes, with access to coaching, tools, and resources for living healthier. Akiri, a network-as-a-service health-data platform (previously called Health2047 Switch), focuses on safe access to information for patients, physicians, providers, pharma, and other healthcare enterprises.
  • VSP Global Innovation Center: founded in 2020 and dedicated to getting eye care and eyewear to the world. They promise to deliver eye care to a membership that has reached nearly 90 million in their global network. The innovation center has delivered two fascinating projects: blue-light-reducing eyewear to address digital eye strain, and a sustainability effort to bring new materials to the market.
  • GSK Warren Innovation Lab Suites and Digitisation Lab: GSK, based in the U.K., runs multiple innovation labs. The Warren innovation lab, located in New Jersey, has consumer panelists testing products and performing shopper science. Open Lab, based in Tres Cantos, Spain, brings together scientists and academics to tackle topics like malaria and TB, leveraging GSK’s resources and networks. At the GSK Stevenage Bioscience Catalyst campus, they offer space to early-stage biotech startups and partner with them to commercialize their technology. They also run an R&D Open Lab for African NCDs (non-communicable diseases) like cancer and diabetes.
  • Center for Innovation (CFI): located at the Mayo Clinic and started in 2008. I love their tagline, “Think big, start small, move fast.” The Innovation Exchange is a Mayo Clinic platform for members to access scientists, facilities, and their network, focusing on greater patient access.
  • FUSE: led by Cardinal Health, integrating engineers with medical professionals and digital-design experts. MedSync Advantage is a web-based, in-house-developed tool that helps pharmacists synchronize medications to simplify patient prescription fulfillment.
  • The Innovation Lab: the child of AARP, with a passion for health, wealth, and self. The Embleema platform allows rare-disease patients to own and share their data. Ianacare, based in Boston, is a platform that enables caregiver coordination. Community Connections, launched in March 2020, addresses the social isolation facing seniors amid the COVID-19 pandemic; this powerful platform helps community members receive emotional, financial, and physical support.

After reading all the fantastic innovation lab stories and learning of the new partnerships happening today, I came to a single determination: we still care about the patient.