
[Interview] Shaping European AI Regulation To Secure Global Leadership In Healthcare

   by Andrii Buvailo

Disclaimer: All opinions expressed by Contributors are their own and do not represent those of their employers, or BiopharmaTrend.com.
Contributors are fully responsible for ensuring they hold any required copyright for any content they submit to BiopharmaTrend.com. This website and its owners shall not be liable for the information and content submitted for publication by Contributors, nor for its accuracy.

While the history of the Artificial Intelligence (AI) field dates back to the 1950s, its practical value remained limited until the emergence of powerful hardware (GPUs) in the late 1990s. Other complementary technologies also played an important role in AI's progress: new data storage capabilities, cheap cloud infrastructure, and advanced deep learning algorithms all became reality only in the 21st century, setting the AI field on a trajectory of exponential development and practical commercial utility.

Today, AI technology has matured to the extent that it has become a strategic factor and a competitive differentiator, not only for individual companies but for whole industries and countries. Needless to say, healthcare, one of the major industries, is an important end-user of AI technologies. Countries that adopt AI in their healthcare strategies today will enjoy major competitive benefits for public health and safety tomorrow.

However, there is a serious concern that the European Union's regulatory environment will become a barrier to AI innovation, leading to delays in AI adoption that could make the European healthcare sector less competitive globally.

To address such risks, the European Commission published three whitepapers proposing an overarching regulatory framework for the widespread application of Artificial Intelligence across the European ecosystem. TietoEVRY, a global IT and software company, recently published its position on the proposed AI regulation in an effort to augment it with a more pragmatic, entrepreneurial, and innovative view.

I reached out to Dr. Christian Guttmann, VP and Head of Global AI & Data at TietoEVRY, Executive Director of the Nordic AI Institute, and an established thought leader in industrial AI adoption, to learn more about the new regulatory initiative by the European Commission, TietoEVRY's view of it, and the impact the regulation will have on the European healthcare industry.


 

Andrii: What’s the current role of AI in the European Life Sciences and Healthcare industries, and how would you describe its near-term adoption trajectory?

 

Christian: AI applications are only beginning to create a major impact on the Life Sciences and Healthcare industries. All aspects of the end-to-end lifecycle will be influenced or changed. Take the pharmaceutical domain as an example: the identification of molecules effective against pathogens can be drastically accelerated (saving a lot of time), all the way through to the effective assessment of real-world evidence using data from online patient platforms and wearable devices.

The near-term adoption of AI in a company is a function of at least two factors:

  • a regulatory framework that supports a company’s innovative and entrepreneurial ambitions, and 

  • a culture of “AI and Data thinking” that is fostered in a company. 

Regarding regulation: in Europe, national governments and the EU Commission are responsible for creating a framework that fulfills a number of criteria. It must enable European businesses to be competitive in the age of AI and create a sustainable economic framework (in the past, this meant creating jobs; today, governments have to create functional and meaningful activities for citizens). This is not an easy obligation for a government even in established industrial areas such as automotive or telecommunications, and it is an even more difficult task for an area such as AI. We could easily fill another interview on the topic of creating a culture of “AI and Data thinking”.

 

Andrii: What are the major issues with the existing EU regulatory framework that might be slowing down the progress of AI adoption in the European healthcare industry? 

 

Christian: The main issue with the current draft is that several elements are still unclear or irrelevant. Predictable legislation is important for the Life Sciences and Healthcare industries because it makes investments in new products and services more certain. That is, a company is more likely to invest in a new cancer or rare-disease drug using AI if it can calculate the return on investment with more certainty; unclear regulation creates confused investors, who naturally pull out and seek more certain investment areas. One example of such uncertainty is that the legislation proposes dividing AI applications into high risk and low risk. However, some unaddressed questions are: Who decides into which category an AI application falls? Are entire industry sectors (e.g. healthcare) to be treated as high risk or low risk?

 

Andrii: What will this new regulatory framework, proposed by the European Commission, mean for European AI strategy globally, and for the European healthcare industry in particular?

 

Christian: There are several considerations. Some argue that such a regulation acts as a protective barrier against non-European industries. I do not think that this is pursued, or should be pursued, as it could really backfire. Europe currently does not have a significant AI industry, or even a significant tech and IT industry, so a protective barrier could perhaps give us some time to catch up before international competitors dominate our market. Another path forward is for Europe to lead the way in building citizens' trust in AI, and there are several proponents of this suggestion. However, trust is usually created in practice, that is, when AI technology does what it is meant to do. At the moment, we have far too many generalised scenarios extrapolated from isolated practical incidents. For example, a chatbot trained on Twitter data about how humans engage with each other then produced tweets that were not politically correct. That is, of course, not how chatbots are built in many scenarios. For the life science industry, this means there is oversight of the way AI applications are applied to selecting cohorts, identifying molecules that are the most likely candidates for success, and so on.

 

Andrii: TietoEVRY just published its official position on the proposed new regulation. Can you outline the key modifications to the regulation that you believe would empower European AI leadership globally?

 

Christian: Absolutely. Here are a few highlights.

  • Any regulation on AI should be heavily informed by applications and use cases. An attempt to create a top-down preemptive regulatory framework would likely jeopardize innovative opportunities. 

  • The EU’s regulatory framework should facilitate a culture of utilizing and understanding the value of AI and data as well as create incentives for companies to innovate and effectively share AI and data. 

  • The right balance should be struck between the protection of privacy and enabling innovation in sectors where the use of anonymized data presents significant opportunities for European society. 

  • The EU should weigh the high risks of any application against the high opportunities and benefits that it presents for society.  

  • The main pitfall of the proposed risk-based approach is the uncertainty (from investors’ point of view) as to which AI applications will eventually fall into the heavily regulated high-risk category. 

  • The Commission’s decision to single out biometric identification and not to propose any regulatory solutions in this regard increases the uncertainty and risks from an investment point of view. TietoEVRY encourages the EU to adopt a clear position. 

  • TietoEVRY encourages the Commission to integrate into the upcoming regulatory framework the use of regulatory sandboxes, through which companies can test innovative AI solutions. 

 

Andrii: On a side note, how is the ongoing coronavirus pandemic affecting AI adoption in the European healthcare industry? Do you see this situation as a major problem or as a lucrative opportunity for accelerated change?

 

Christian: I have mixed experiences. As the corona crisis has been unfolding, many government entities, e.g. health and infectious disease authorities, clearly realised that they would have benefited from a higher degree of digitalisation and AI adoption. Take, for example, the benefit we would all have experienced from effectively exchanging critical information across pathology units, hospitals, and infectious disease authorities. This was not possible, as the processes were not digitally connected and were still largely manual. AI applications enable real-time automated trend analysis and decisions on where and when to distribute protective equipment across Europe. Some of these realisations led to several immediate requests from authorities to deploy new AI solutions quickly. However, many were still not adopted because the regulations and frameworks were ambiguous or incomplete; that is, they were not ready enough for the healthcare industries and the public sector to fully utilize.

