Highlights of Generis and fme’s “Data-centricity in RIM” Webinar

In October, fme’s Director of Business Consulting David Gwyn was a featured contributor in an informative webinar with the Generis CARA Life Sciences team. He was able to share his rich experience and perspective on the value of a data-centric approach to document and information management, and outline some of the benefits that can be realized across an organization.

Generis also provided a comprehensive demo of their CARA Life Sciences Platform, showing how it can improve quality, efficiency, consistency, and scalability across any organization.

Below is a summary of David’s introduction, an outline of the webinar, and a highlight video of the presentation. View the full webinar on the Generis site, and contact us with any questions you have about data-centricity or the CARA Life Sciences Platform.


Summary of Data-Centricity Introduction

David Gwyn: I’d like to speak for a few minutes on the essential concept of data-centricity. What I mean by that is how we can reshape our thinking about documents and the other entities we’re managing beyond traditional paper. Right now we all have an opportunity, and I will argue a necessity, to change the way we’re thinking about the information we’re managing and move forward into a more data-centric and data-focused approach.

I’m sure you remember the days when we would produce reams and reams of paper that we’d stack on a conference room table and ask the CEO to come in and sign the 356H. Eventually we said, “Let’s digitize this,” so we took the Word documents that we used to print out and turned them into PDFs. While this was a step forward, we really just took pictures of all those documents and obfuscated all the value. All the data that was there was buried in a digital version of the paper process.

There are much better solutions now that eliminate traditional challenges, and provide extensive improvement to quality, efficiency, consistency, and scalability across your entire organization. Let’s look at what’s possible.

Data-Centricity Webinar Outline

  • Overview of a document-centric process
  • Impacts of document focus
  • Brief history of medicinal product submissions
  • What is triggering the transition to digitalization of the process?
    • Regulations, data standards, compliance
    • Improve quality, efficiency, consistency
    • Enable scalability, promote quality, endure changing landscapes
  • Characteristics of a data-driven approach
  • Benefits of data-centric process
  • Questions to ask to prepare for a transition to a data-centric approach
  • Detailed demo of Generis CARA Life Sciences Platform

Watch the Webinar Highlights


For more information, please complete this contact form and we’ll put you in touch with our experts.


Why wait for a big bang consolidation project to connect teams to the data and content they need?

When companies merge, it can take several years to consolidate IT systems and phase out duplicate, outdated or incompatible applications. And, in the meantime, business must continue as usual, which means that teams (existing, blended, or new) will need timely access to widely dispersed information and documents that pre-date the acquisition, alongside any new data and content.

One relatively painless and inexpensive way to enable this is through a cloud-hosted, subscription-based integration service, which makes it possible to form reliable interconnections and data/content exchange pathways between old and current systems.

Instead of having to log into multiple different systems to gain a clear, up-to-date picture of a current scenario, business users can simply go into their primary application, which gives them a view of the connected data and documents they need.

Out-of-the-box connectivity

The key to such a facility is a series of web services/system connectors, plus low-code configuration as needed, to link to and exchange data with other platforms. At fme, we have ready-to-go connectors for all of the popular content repositories, from OpenText Documentum and SharePoint generically, to CARA and Veeva at a more specialist level (e.g. for Life Sciences Regulatory information or Quality management). This shortens the time to information access considerably.

This easy integration proposition is a great option for any situation where the clock is ticking on a transitional service agreement. It allows the relevant datasets to be ‘lifted and shifted’ when there just isn’t the bandwidth to engage in large-scale system consolidation or data migration initiatives before the cord is cut with the old set-up.

As long as there is an application programming interface (API), we can build a connector into any retiring system, allowing data and content to be pushed and pulled between this and the new target system(s), whether on a timed schedule or on demand – even without the SaaS option.
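To make the pattern concrete, below is a minimal sketch of such a connector in Python. It illustrates the pull/push exchange on a timed schedule described above; it is not fme’s product code, and the endpoint paths, payload fields, and bearer-token authentication are all hypothetical placeholders.

```python
# Minimal sketch of the connector pattern described above -- not fme's
# actual product code. Endpoint paths, payload fields, and the bearer
# token auth scheme are hypothetical placeholders.
import time
from dataclasses import dataclass

import requests


@dataclass
class Connector:
    """Exchanges documents between a retiring system and a target system."""
    source_base: str  # API root of the legacy repository (hypothetical)
    target_base: str  # API root of the new platform (hypothetical)
    api_token: str

    def _headers(self) -> dict:
        return {"Authorization": f"Bearer {self.api_token}"}

    def pull_changes(self, since: str) -> list[dict]:
        """Fetch documents modified in the source since the given timestamp."""
        resp = requests.get(
            f"{self.source_base}/documents",
            params={"modified_since": since},
            headers=self._headers(),
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["items"]

    def push(self, documents: list[dict]) -> None:
        """Write pulled documents (metadata plus content references) to the target."""
        for doc in documents:
            resp = requests.post(
                f"{self.target_base}/documents",
                json=doc,
                headers=self._headers(),
                timeout=30,
            )
            resp.raise_for_status()


def run_on_schedule(connector: Connector, interval_seconds: int = 3600) -> None:
    """Timed exchange; an on-demand sync is just one pull/push cycle."""
    last_sync = "1970-01-01T00:00:00Z"
    while True:
        connector.push(connector.pull_changes(since=last_sync))
        last_sync = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
        time.sleep(interval_seconds)
```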

Buying your company vital time

One of the main benefits of taking an integration approach as an interim solution to system consolidation/replacement is that it buys a company time before it has to untangle a multitude of systems or put in something new, such as an all-singing, all-dancing RIM system. While it evaluates its best options, the company can continue operating at full capacity, knowing that users have access to live and accurate information to support their current tasks.

fme’s integration-center can be deployed in a client’s own cloud environment, or provided as a hosted, subscription-based microservice.

This deployment model, coupled with our deep and extensive expertise in enterprise content management (ECM) system implementations and our pre-packaged, technology-aware accelerators (connectors), means we can often get users up and running with a fully connected environment within just a couple of months. (An environment that can be quickly switched off again once any bigger and more permanent project has been deployed.)

It’s easy to become overwhelmed by all of the various activities, with the result that companies do not know where to start. fme’s Integration Consultants can provide invaluable advice here, helping teams determine which data they most need to access in the short term and prioritize the content that is really important.


For more information, please complete this contact form and we’ll put you in touch with our experts.


Lessons learnt from life sciences content migrations

3 common pain points and how to avoid them.

As I explored recently, digital transformation can surface a series of perplexing data challenges for most organizations, but particularly those operating in life sciences. Such has been the sector’s historical inertia around system modernization that, when a project is finally approved and chartered, stakeholders often rush toward the benefits without looking back. This can result in disappointment and frustration when the ‘as is’ data migrated to the new set-up turns out to be in a poor state and largely unusable.

In this blog, I want to focus on 3 particular takeaways from such experiences which companies can extrapolate from to avoid making the same mistakes.

1. Technology is just one piece of the puzzle in a successful system migration

The successful execution of any project involves equal focus on people, process, and technology, so that everything is aligned to deliver the expected outcomes. Certainly, it’s important to line up sufficient resources to plan and manage the transition, to build engagement and momentum among all stakeholders and users, and to provide any new skills and resources they might need.

But another element that’s often neglected is the data the new system will draw on to deliver the expected process streamlining and improved visibility. However fast and feature-rich the new technology platform, if it depends on old, inadequate data it won’t be able to fulfill its promise. If the new system can’t fulfill future-state governance or meet new standards for regulatory compliance, the ‘retro-fitting’ burden could be immense as teams try to massage and improve the data after the fact. Unplanned data enrichment is hugely time-consuming.

The lesson here is about scoping system projects thoroughly and planning them early enough so that every dimension is well catered for in plenty of time.

If delivery is scheduled close to a deadline for adhering to new health authority submission standards, companies will want to be sure that the data coming across is in good shape and fit for purpose, so that the deadline can be confidently met. Filling CMC Module 3 in eCTD submissions is already a hot spot, highlighting where existing system data typically falls short. Companies can learn from this by doing their due diligence and performing a detailed data assessment up front, to help them mitigate these kinds of risks.
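As a rough illustration of what such an up-front assessment can look like, here is a short Python sketch that profiles completeness and duplication in an exported legacy dataset. The required field names are invented for the example, not a prescribed schema.

```python
# Illustrative sketch of an up-front data quality assessment. The
# required field names are invented examples, not a prescribed schema.
import pandas as pd

REQUIRED_FIELDS = ["product_name", "dosage_form", "strength", "market"]


def assess(records: pd.DataFrame) -> dict:
    """Report duplication and per-field completeness ahead of migration."""
    total = max(len(records), 1)  # avoid division by zero on empty exports
    report = {
        "total_records": len(records),
        "duplicate_rows": int(records.duplicated().sum()),
    }
    for field in REQUIRED_FIELDS:
        if field not in records.columns:
            report[field] = "column missing entirely"
            continue
        empty = int(records[field].isna().sum())
        report[field] = f"{empty} empty values ({empty / total:.1%})"
    return report


# Example usage against an exported legacy dataset:
# df = pd.read_csv("legacy_export.csv")
# print(assess(df))
```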

2. Time & resources are critical success factors

Unless a project has a sufficient runway leading up to it, pressures will mount, and good intentions are likely to give way to compromise and the cutting of corners. Even if they plan to bring in external help, the key project stakeholders will need to set aside their own people’s time to do the vital groundwork: understanding old and new data governance parameters, so that any data preparation and the actual data migration are in line with current and emerging requirements. Without those parameters, and a clear picture of the ‘as is’ status, project teams risk investing their time and budgets in the wrong places and setting themselves up for a big clean-up operation post-migration.

So, even before performing a data quality assessment, it’s a good idea to seek a bit of preliminary strategy advice from a trusted expert – almost as a Phase 0 – to understand the bigger picture and how everything needs to align to deliver against it.

This isn’t about engaging specialists prematurely, but rather about making sure that any investment that follows (and any external help brought in) is well targeted, and so delivers maximum value.

3. It’s important to allow and plan for failure

Despite the best intentions, projects can go awry due to the many moving parts that make up the whole. So, it’s important to factor ‘the unexpected’ into all planning.

This includes allowing for a certain number of iterations, based on the findings of data quality assessments, to get the data to fit the required data governance standards going forward. If the data coming across is in a disastrous state, a planned migration phase could quickly turn into a material schedule delay. Underestimating the work involved is very common; I have seen this in many client projects. For example, where the ‘happy path’, in which everything goes to plan, was expected to take 10-12 months, the real-life route took 18 months – so, to de-risk the project, allow for contingency. If in doubt, take the forecast number of days and double it.

All of the preparatory work recommended above should help contain delays and protect against timescale or budget shocks, but it’s better to plan properly so that the journey goes as smoothly as it can. Although, ultimately, the client company is responsible for the quality and integrity of its own data, technology vendors and service providers will need to plan their involvement too and ensure they have the right skilled resources available at the right time.

Our experts can provide guidance on all of the above. Ultimately, this is about setting the right expectations and a realistic schedule, and resourcing projects so that they optimize the time to value. A little foresight can go a long way.

To discuss your own data migration journey, please fill out this contact form and we’ll put you in touch with our experts.


Digital transformation in life sciences: ensuring data is fit for future purpose

In life sciences, considerations of whether data is fit for future purpose are often underestimated. Organizations are so ready to untether themselves from the complexity and constraints of old legacy systems that they can become distracted from other factors which need to be considered to get the most out of their new investment.

Priority goals may include transforming the way they manage and work with regulatory information, to drive greater efficiency, accuracy, and visibility, as well as compliance with the evolving demands of regulators. But the scope of even the most dynamic new platform or system will be dependent on, and limited by, the business data available. If that data has material gaps, contains significant duplication and/or errors, or is not aligned with the fields and formats required for target/future use cases under the data governance strategy, even the best-planned project will not deliver effectively.

Start by considering what’s possible

As life sciences organizations form their digital transformation strategies and reset their goals, it’s important that they understand the potential opportunity to streamline and improve associated processes – and the way that these new or reframed processes will harness data to deliver step changes in execution and output.

One opportunity, for example, could be to transform the way companies manage their product portfolios – via a more dynamic, finer-grained definition of that portfolio and an end-to-end view of its change management, registration/licensing and commercial status in every market globally.

Another is to harness regulatory information (RIM data) to streamline the way a whole host of other functions plan and operate. There’s a lot of interest now in flowing this core data more fluidly into processes beyond Regulatory Affairs – such as Clinical, Manufacturing, Quality, Safety, and Pharmacovigilance. Rather than each function deploying and managing its own applications and data set to serve a single purpose, as has been largely the case up to now, the growing trend is to take a cross-functional platform approach to data, change, and knowledge management. This means that each team can draw on the same definitive and live information set to fulfil their business need.

All of this is much more efficient, as well as less error prone – because similar or overlapping data is not being input many different times, in slightly different ways. This, in turn, will expose companies to much lower risk as regulators like EMA start to require simultaneous data-and-document based submissions for marketing authorizations and variations/updates, which inevitably will see them implement formal cross-checks to ensure information is properly synchronized and consistent.

There are no shortcuts to rich, reliable data

The process transformation opportunities linked to all the above are considerable, and they are exciting. However, they rely on the respective teams understanding and harnessing that potential through advanced, proactive planning. By agreeing, collectively, on the scope for greater efficiency, and on the strategic advantages that are made possible through access to more holistic intelligence and insights, teams can start to move together toward a plan that will benefit everyone.

Practically, this will require an investment of time and thought: considering the state and location of current data, and what will need to happen to it to ensure that it is of sufficient quality, completeness, and multi-purpose reusability to support improved processes in the future state. Unquestionably, this will also require a considerable amount of targeted work to ensure that existing data is aligned and of high quality; that it uses agreed vocabularies; and that it consistently adheres to standardized/regulated formatting, data governance, and naming conventions.
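For a flavor of what the ‘agreed vocabularies’ work involves in practice, here is a small, hypothetical Python sketch that maps free-text legacy values onto a controlled term list. The terms and synonyms shown are invented examples, not EMA’s actual dictionaries.

```python
# Illustrative sketch of vocabulary alignment. The controlled terms and
# synonym lists below are invented examples, not EMA's actual dictionaries.
CONTROLLED_DOSAGE_FORMS = {
    "film-coated tablet": {"fct", "film coated tablet", "tablet, film-coated"},
    "oral solution": {"solution, oral", "oral soln"},
}


def normalize(value: str) -> str | None:
    """Map a free-text legacy value onto the agreed controlled vocabulary."""
    cleaned = value.strip().lower()
    for term, synonyms in CONTROLLED_DOSAGE_FORMS.items():
        if cleaned == term or cleaned in synonyms:
            return term
    return None  # unmapped values are routed to a manual review queue


assert normalize("  FCT ") == "film-coated tablet"
assert normalize("oral soln") == "oral solution"
print(normalize("unknown form"))  # None -> flag for review
```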

Source expert help as needed

All of this may sound like a lot of “heavy lifting”, but it is exactly the kind of activity our experts can advise on. We can start by helping life sciences companies put together a strategy based on how data will ideally be used in the future state, and what needs to happen to it to prepare it for migration.

Working alongside the various business subject-matter experts (e.g. the people closest to the product portfolios and the processes involved in managing these), we’ll help scope the work involved and the resources that will be required. We can also help to determine the historical, current, and future role of the respective data, so that only active data is prioritized for preparation (refactoring/clean-up/enrichment) ahead of migration to the new system or platform.

Forewarned is forearmed, as they say. Although preparing data so that it’s migration-ready may sound like an onerous undertaking, it is far better to know this and be able to do something about it ahead of time than to be caught out once a critical technology project is already well advanced – by which point it may be too late for fundamental data transformation.

To sound out our experts about the data preparations needed for an upcoming new systems project, please get in touch – I’m happy to support you.

Get in touch with us


Don’t be sidetracked by EMA’s DADI curveball: data is still the goal

Yes, the initial plans have altered to buy everyone a bit more time, but the broader plan is still on track – to make product data submissions at least as important as electronic documents (and ultimately the priority/default) in all communications with the regional Regulator.

In other words, whatever tweaks companies and software vendors make to their roadmap and immediate capabilities over the next few months, these should not detract from – nor dilute – the overarching plan to get regulated product data in complete, consistent and up-to-date order.

That’s so that when it is time to migrate to – and go live – with an optimized new regulatory information/submissions management platform or system, there is comprehensive, compliant, high-quality data ready to run across it.

And of course, the foundational work done now will stand companies in good stead for when other regions across the world implement their own take on IDMP – given that dynamic data exchange is where all of this will go, globally, in due course.

What’s changed, and how does it affect you?

To recap what’s changed in the interim: EMA’s Digital Application Dataset Integration (DADI) interface – a project that has been evolving alongside IDMP and SPOR (see https://esubmission.ema.europa.eu/ for more information) – will, in the short term from April 2023, serve as the means of completing electronic application forms via interactive web forms.

This will enable companies to pull data from the PMS (data that has been migrated from xEVMPD and other EMA databases) and to generate both a human-readable PDF and a machine-readable XML format. Both formats will be submitted along with the eCTD; this part of the process will not change.

This latest development is an important step toward the IDMP goal of reusing data from PMS, and the first step toward the IDMP standardization of data. EMA will support this approach for variation forms only at this point, extending it to initial applications and renewals later.

Ultimately, EMA’s plan is for standardized medicinal product data currently held in the xEVMPD database to be enriched for the PMS, where fuller, more granular, IDMP-compliant medicinal product detail will be kept and updated over time.

There are some practical challenges still to be worked out, such as how IDMP detail that is currently missing will be added to the PMS, and how company RIM systems and EMA’s PMS database will get to the point of being able to exchange data seamlessly, without manual re-entry. But for now, this is a chance for companies to update and correct their data against EMA’s dictionary through the familiar xEVMPD process. (Ultimately, FHIR – the global industry standard for passing healthcare data between systems – will support more dynamic data exchange/syncing between companies’ RIM systems and EMA’s PMS.)
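For a sense of what FHIR-based exchange looks like mechanically, here is a minimal, hypothetical Python sketch that reads a medicinal product resource over FHIR’s REST interface. The base URL is a placeholder, and EMA’s PMS API details were not final at the time of writing.

```python
# Hypothetical sketch of a FHIR read. The base URL is a placeholder and
# EMA's PMS exchange details were not finalized at the time of writing.
import requests

FHIR_BASE = "https://fhir.example.org/r5"  # placeholder endpoint


def fetch_product(product_id: str) -> dict:
    """Read a MedicinalProductDefinition resource -- the FHIR R5 resource
    type designed to carry IDMP-style medicinal product data."""
    resp = requests.get(
        f"{FHIR_BASE}/MedicinalProductDefinition/{product_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


# product = fetch_product("example-id")
# print(product["resourceType"])  # -> "MedicinalProductDefinition"
```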

Rather than play for time, here are 4 opportunities that the interim DADI move makes possible, as well as 5 next steps that companies should take to stay on track with their data preparations:

4 benefits to exploit

  1. The use of the DADI interface for getting data from the EMA PMS allows life sciences companies and software providers to take a breath as they prepare for full-scale IDMP implementation and compliance. FHIR-based submissions via API have been pushed back for now (this will still happen, just not within the next year).
  2. The industry is now less dependent on immediate technology changes. There is no need for their RIM systems to support DADI, as at this point data won’t flow directly between RIM records and EMA’s PMS.
  3. The EMA’s roadmap allows for implementation to happen in manageable chunks. The ‘DADI first’ approach enables Product (PMS) data re-use, and covers the largest proportion of regulatory submissions.
  4. This is a chance to reset or adapt IDMP/regulatory data strategies, catch up, and prepare to deliver maximum benefits and efficiencies from the preparations (e.g. by doing sufficient groundwork to enable a confident system migration, when the time comes).

5 things to do next, for pharma companies

  1. Set or re-set your strategy and position around regulatory, structured data.
  2. Collect and assess product data and prepare this for compliance (scoping and getting stuck into any data enrichment now) – so that it addresses the granularity of IDMP requirements and maps to EMA’s dictionaries/vocabularies.
  3. Prepare to support xEVMPD e-submissions based on the new data model and all of the levels of detail that are expected, to be ready for the future and to enable a rapid transition to IDMP.
  4. Improve your ability to respond and adapt quickly to further changes to regulatory requirements. EMA’s switch to using DADI to submit data to the PMS highlights just how swiftly the roadmap can change, and why an Agile approach to project management is so important.
  5. Start to migrate your content into the new target system as soon as possible. If you have started by collecting data in local Excel files, that data could become outdated if not maintained. Don’t leave thoughts of migration until the last minute; plan for it now, as part of your overall scoping work.

To get the most from your IDMP system migration, or to discuss your best route to IDMP data preparation as you plan for this, please fill out the contact form below and we’ll put you in touch with our experts.

Get in touch with us


Beware the inflated promises of AI in accelerating data migration

In many cases, meeting these regulatory data demands means updating or establishing new systems and migrating huge volumes of content across to the new environment – supplementing or enriching the data in the process so that it better meets ongoing needs and aligns with IDMP controlled value lists. Effective migration is likely to involve locating and transferring information from hundreds of thousands of content files currently residing in rudimentary file shares, where a lot of potentially valuable data exists in unstructured form within single-use documents.

Given the scale of the task before them, and the scarcity of spare capacity to oversee the work manually, it is easy to appreciate why Regulatory teams and supporting IT departments might look to artificial intelligence (AI) as a means of expediting the data extraction and enrichment process, as companies look to convert unstructured information into searchable and re-usable data assets in the new target system.

Managing expectations

Certainly, specialist AI tool and service providers have made some pretty lofty promises about the technology’s potential, accuracy, and scope. With training, they say, machine learning solutions can hit 95 per cent accuracy in finding, identifying, tagging, and lifting the information that is needed from commonly used documents and other unstructured content sources. To an overstretched RA team drowning in an ocean of material, spanning metaphorical warehouses and continents in its product coverage, this promise of reliable task automation is undeniably appealing.

BUT – and there is a huge caveat here – 95 per cent accuracy, even if attainable, is still too risky for validated use cases such as regulatory submissions preparation and management. The trouble with monitoring AI algorithm performance is that it is all based on statistics and trends: details of where the model is doing well or less well are much vaguer. In other words, while 95 per cent overall accuracy might sound impressive, the margin of error remains all too great if no one can be quite sure where the gaps or errors are arising. On a migration of, say, 200,000 documents, a 5 per cent error rate means 10,000 records potentially mis-tagged, with no reliable way of knowing which ones. And if humans have to go through everything to check, any time and labor saving to this point will have been for nothing.

Don’t despair: it’s not a case of all or nothing

This needn’t be cause for outright disillusionment, though. For one thing, there are rules-based processes that provide more predictability than AI, and these can instead take on much of the legwork while retaining the assurance of human quality control.
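To illustrate the contrast, here is a small Python sketch of a deterministic, rules-based extraction step. The patterns and field names are invented for the example; the point is that every match is traceable to an explicit rule, so human quality control knows exactly what was and wasn’t captured.

```python
# Illustrative sketch of a deterministic, rules-based extraction step.
# The patterns and field names are invented examples. Unlike a trained
# model, every match here is traceable to an explicit rule.
import re

RULES = {
    "approval_number": re.compile(r"Approval No\.?\s*([A-Z]{2}-\d{6})"),
    "effective_date": re.compile(r"Effective Date:\s*(\d{4}-\d{2}-\d{2})"),
}


def extract(text: str) -> dict:
    """Apply each rule; fields with no match stay None for manual review."""
    return {
        field: (m.group(1) if (m := pattern.search(text)) else None)
        for field, pattern in RULES.items()
    }


sample = "Approval No. DE-123456 ... Effective Date: 2022-11-01"
print(extract(sample))
# {'approval_number': 'DE-123456', 'effective_date': '2022-11-01'}
```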

Meanwhile, AI tools and techniques can play a useful part in non-validated content management – for example, for enriching/adding metadata to archived content which is no longer used in live submissions, but which has to be retained (e.g. for anything from 10 to 25 years) for compliance reasons. Here, smart automation offers a way to breathe new life and value into legacy records, rendering them more immediately searchable and useful. If, as part of an AI-driven data enrichment/meta-tagging exercise, 5% of the content is missed or indexed incorrectly, someone can perform a manual search or manual checks without any risk to submissions performance, marketing authorization status, or patient safety.

As ever, it’s a case of horses for courses, and for now AI promises more than it can deliver for validated regulatory content migration purposes. But that doesn’t mean there isn’t an alternative to sheer manual graft, and you can count on fme to harness the most effective tools and processes for each project.

Get in touch with us