Just because a current content/data-based process works doesn’t mean it’s efficient

New or consolidated systems should lead to better outcomes, so content migration pre-assessments are important to maximize the ROI.

Whether the goal is digital transformation, system consolidation or moving to a new content management system – if you’re going to spend a lot of money on a new IT project it should be with a view to delivering something tangibly better.

Too often, however, departmental teams have become so adept at process workarounds for assembling or managing content that they lose sight of what's possible. As a result, when they are asked to give an overview of their current systems and ways of working, they tend to be overly optimistic about the caliber and integrity of the content that will need to be transferred to the new system.

This creates risk, as content migration projects are scoped, planned and costed on the back of these insights.

It’s quite odd, when you think about it, that such pivotal projects – which may involve critical Regulatory, Clinical or Quality systems – should be left to chance in this way. No airline or pilot would embark on a transatlantic flight without first checking for expected weather events, happy to simply react and make adjustments once hurricane conditions present themselves. And yet companies fix budgets and set deadlines for projects that have been scoped with only partial knowledge of the conditions that will be encountered. They prepare for a smooth ride, yet in nine cases out of 10 experience something altogether more turbulent.

Apples & oranges

In the aftermath of a merger/acquisition, it's expected that blending systems will throw up some issues if the technology platforms differ, the object models don't match, or the receiving/lead company does not have direct insight into the scale and integrity of the incoming systems of record.

But even within one company, there are likely to be corrupt, inaccurate, incomplete or out-of-date files, or differences in data model, which will continue to cause issues if migrated without remediation to a new platform or system.

And it is far better to understand the scope and scale of such issues before a content migration project takes form. The danger, otherwise, is that an already sizable undertaking will multiply as change order after change order is pushed through, with the result that ‘best case’ deadlines and budgets are far exceeded.

Warning signs

So how can you tell if you are likely to encounter such issues?

Clues to a sub-optimal starting point might include:

  • Over-reliance on highly manual or protracted processes, often involving multiple people, to prepare and submit a document;
  • Dependence on file shares or non-managed systems to locate information;
  • The need to regularly plug gaps in content by chasing down additional detail; and/or
  • Uncertainty about the actual number of documents that will be required in the new system.

Don’t rely on guesswork

The only reliable way to scope content migration work is to engage the specialists ahead of time. Giving them an opportunity to look over the data themselves, ask the right questions, determine the true number of documents in scope, and conduct a gap analysis between the data models of the old and new systems will ensure that the formal migration project is scoped and designed optimally.
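By way of illustration, such a data-model gap analysis can start as simply as comparing the attribute sets of the legacy and target models. The sketch below, in Python with invented attribute names, surfaces which attributes will need to be sourced or enriched before migration and which legacy attributes have no direct home in the new model:

    # Illustrative sketch only: compare a legacy and a target data model to
    # surface gaps that a pre-assessment would need to address.
    legacy_model = {
        "document_type": "string",
        "product_name": "string",
        "country": "string",
        "approval_date": "date",
    }
    target_model = {
        "document_type": "string",
        "product_name": "string",
        "country_code": "ISO 3166 code",   # renamed/reformatted attribute
        "approval_date": "date",
        "dossier_id": "string",            # new mandatory attribute
    }

    missing_in_legacy = sorted(set(target_model) - set(legacy_model))
    unmapped_in_target = sorted(set(legacy_model) - set(target_model))

    print("Attributes to source or enrich before migration:", missing_in_legacy)
    print("Legacy attributes with no direct target:", unmapped_in_target)

In practice the comparison covers value formats, controlled vocabularies and mandatory/optional rules as well as attribute names, but even this simple view helps size the remediation effort.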

Armed with all of this knowledge, and with a clearer idea of how content is typically organized, those who will later be tasked with performing the migration will be able to architect the best approach – both tactically and strategically.

Considerations include:

  • How much data/content is earmarked to be migrated, and which data/content is beyond the scope of this project (see the inventory sketch after this list)?
  • Where is the data/content coming from, and where is it going to?
  • Which data models are involved in the old and new state?
  • How many data/content attributes exist in the old and new system?
  • What are the risks associated with a poor or badly scoped migration?
  • Where are the gaps/differences between the old and new models, and what will be needed to address them?
  • Given all of the known parameters, will a phased, or ‘big bang’ approach work best?
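To illustrate the first of these considerations, here is a minimal sketch of how a rough document inventory might be taken from a legacy file share to support scoping – assuming a Python environment, and with the source path and file types invented for the example:

    # Illustrative sketch only: count documents earmarked for migration on a
    # file share, grouped by file type, to support early volume estimates.
    from pathlib import Path
    from collections import Counter

    source_root = Path("/data/legacy_file_share")        # invented path
    in_scope_suffixes = {".pdf", ".docx", ".xml"}         # invented scope

    counts = Counter(
        path.suffix.lower()
        for path in source_root.rglob("*")
        if path.is_file() and path.suffix.lower() in in_scope_suffixes
    )

    total = sum(counts.values())
    print(f"{total} documents in scope")
    for suffix, n in counts.most_common():
        print(f"  {suffix}: {n}")

Even a rough count like this, broken down by source, gives the migration team something concrete to plan against instead of a guess.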

Forewarned is forearmed

Strategic pre-assessments, which can also be thought of as Phase 0 in the fuller context of a system migration, are an investment in a tight, focused, and hopefully expedited main project.

As a rule of thumb, we recommend allowing 6-8 weeks ahead of the core undertaking. During this time a project manager, migration lead, and business analyst will conduct a thorough analysis of all of the variables and propose a migration approach that will deliver maximum value.

This pre-assessment can be conducted entirely remotely.

Involving the execution team ahead of time also starts to build a strong relationship and understanding of the context of the migration, setting expectations on both sides. All of which should contribute to and build confidence in a smooth project delivery.

To discuss your own data migration journey, please fill out this contact form and we’ll put you in touch with our experts.

Lessons learnt from life sciences content migrations

3 common pain points and how to avoid them

As I explored recently, digital transformation can surface a series of perplexing data challenges for most organizations, but particularly for those operating in life sciences. Such has been the sector's past inertia around system modernization that, when a project is finally approved and chartered, stakeholders often rush toward the benefits without looking back. This can result in disappointment and frustration when the 'as is' data migrated to the new set-up turns out to be in a poor state and largely unusable.

In this blog, I want to focus on 3 particular takeaways from such experiences, which companies can learn from to avoid making the same mistakes.

1.     Technology is just one piece of the puzzle in a successful system migration

The successful execution of any project involves equal focus on people, process, and technology, so that everything is aligned to deliver the expected outcomes. Certainly, it's important to line up sufficient resources to plan and manage the transition, to build engagement and momentum among all stakeholders and users, and to provide any new skills and resources they might need.

But another element that's often neglected is the data the new system will draw on to deliver the expected process streamlining and improved visibility. However fast and feature-rich the new technology platform, if it depends on old data of inadequate quality it won't be able to fulfill its promise. If the new system can't fulfill future-state governance or meet new standards for regulatory compliance, then the 'retro-fitting' burden could be immense as teams try to massage and improve the data after the fact. Unplanned data enrichment is hugely time-consuming.

The lesson here is about scoping system projects thoroughly and planning them early enough so that every dimension is well catered for in plenty of time.

If delivery is scheduled close to a deadline for adhering to new health authority submission standards, companies will want to be sure that the data coming across is in good shape and fit for purpose, so that the deadline can be met with confidence. Populating Module 3 (CMC) of eCTD submissions is already a hot spot that highlights where existing system data typically falls short. Companies can learn from this by doing their due diligence and performing a detailed data assessment up front, to help them mitigate these kinds of risks.
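As a purely illustrative example, a first-pass completeness check over an exported legacy data set might look like the sketch below; the export file and the required field names are invented for the example:

    # Illustrative sketch only: report how completely the required fields are
    # populated in a legacy export, before the migration is scoped.
    import csv

    required_fields = ["product_name", "dosage_form", "strength", "country"]  # invented

    with open("legacy_export.csv", newline="", encoding="utf-8") as f:        # invented file
        rows = list(csv.DictReader(f))

    total = len(rows)
    for field in required_fields:
        filled = sum(1 for row in rows if (row.get(field) or "").strip())
        share = filled / total if total else 0
        print(f"{field}: {filled}/{total} populated ({share:.0%})")

Results like these make the enrichment workload visible early, rather than leaving it to emerge mid-project.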

2.     Time & resources are critical success factors

Unless a project has a sufficient runway leading up to it, pressures will mount, and good intentions are likely to give way to compromise and the cutting of corners. Even if they plan to bring in external help, the key project stakeholders will need to set aside their own people's time to do the vital groundwork: understanding old and new data governance parameters, so that any data preparation and the actual data migration are in line with current and emerging requirements. Without those parameters, and a clear picture of the 'as is' status, project teams risk investing their time and budgets in the wrong places and setting themselves up for a big post-migration clean-up operation.

So, even before performing a data quality assessment, it’s a good idea to seek a bit of preliminary strategy advice from a trusted expert – almost as a Phase 0 – to understand the bigger picture and how everything needs to align to deliver against it.

This isn’t about engaging specialists prematurely, but rather about making sure that any investment that follows (and any external help brought in) is well targeted and delivers maximum value.

3.     It’s important to allow and plan for failure

Despite the best intentions, projects can go awry due to the many moving parts that make up the whole. So, it’s important to factor ‘the unexpected’ into all planning.

This includes allowing for a certain number of iterations, based on the findings of data quality assessments, to bring the data up to the required data governance standards going forward. If the data coming across is in a disastrous state, a planned migration phase could quickly turn into a material schedule delay. Underestimating the work involved is very common; I have seen this in many client projects. For example, where the ‘happy path’, in which everything goes to plan, was expected to take 10-12 months, the real-life route took 18 months – so, to de-risk the project, allow for contingency. If in doubt, take the forecast number of days and double it.

All of the preparatory work recommended above should help contain delays and protect against timescale or budget shocks, but it’s better to plan properly so that the journey goes as smoothly as it can. Although, ultimately, the client company is responsible for the quality and integrity of its own data, technology vendors and service providers will need to plan their involvement too and ensure they have the right skilled resources available at the right time.

Our experts can provide guidance on all of the above. Ultimately, this is about setting the right expectations and a realistic schedule, and resourcing projects so that they optimize the time to value. A little foresight can go a long way.

To discuss your own data migration journey, please fill out this contact form and we’ll put you in touch with our experts.


Digital transformation in life sciences: ensuring data is fit for future purpose

In life sciences, the question of whether data will be fit for future purpose is often underestimated. Organizations are so ready to untether themselves from the complexity and constraints of old legacy systems that they can become distracted from other factors which need to be considered to get the most out of their new investment.

Priority goals may include transforming the way regulatory information is managed, to drive greater efficiency, accuracy and visibility, as well as compliance with the evolving demands of regulators. But the scope of even the most dynamic new platform or system will be dependent on, and limited by, the business data available. If that data has material gaps in it, contains significant duplication and/or errors, or is not aligned with the fields and formats required for target/future use cases and the data governance strategy, even the best-planned project will not deliver effectively.

Start by considering what’s possible

As life sciences organizations form their digital transformation strategies and reset their goals, it’s important that they understand the potential opportunity to streamline and improve associated processes – and the way that these new or reframed processes will harness data to deliver step changes in execution and output.

One opportunity, for example, could be to transform the way companies manage their product portfolios – via a more dynamic, finer-grained definition of that portfolio and an end-to-end view of its change management, registration/licensing and commercial status in every market globally.

Another is to harness regulatory information (RIM data) to streamline the way a whole host of other functions plan and operate. There’s a lot of interest now in flowing this core data more fluidly into processes beyond Regulatory Affairs – such as Clinical, Manufacturing, Quality, Safety, and Pharmacovigilance. Rather than each function deploying and managing its own applications and data set to serve a single purpose, as has been largely the case up to now, the growing trend is to take a cross-functional platform approach to data, change, and knowledge management. This means that each team can draw on the same definitive, live information set to fulfill its business needs.

All of this is much more efficient, as well as less error prone – because similar or overlapping data is not being input many different times, in slightly different ways. This, in turn, will expose companies to much lower risk as regulators like EMA start to require simultaneous data-and-document based submissions for marketing authorizations and variations/updates, which inevitably will see them implement formal cross-checks to ensure information is properly synchronized and consistent.

There are no shortcuts to rich, reliable data

The process transformation opportunities linked to all the above are considerable, and they are exciting. However, they rely on the respective teams understanding and harnessing that potential through advanced, proactive planning. By agreeing, collectively, on the scope for greater efficiency, and on the strategic advantages that are made possible through access to more holistic intelligence and insights, teams can start to move together toward a plan that will benefit everyone.

Practically, this will require an investment of time and thought, considering the state and location of current data, and what will need to happen to it to ensure that it is of sufficient quality, completeness and multi-purpose reusability to support improved processes in the future state. Unquestionably, this will also require a considerable amount of targeted work to ensure existing data is aligned and of high quality; that it uses agreed vocabularies; and consistently adheres to standardized/regulated formatting, data governance, and naming conventions.
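As a small illustration of what that alignment work can involve, the sketch below (with invented terms and synonym mappings) maps free-text values to an agreed controlled vocabulary and flags anything that cannot be mapped for manual review:

    # Illustrative sketch only: normalize free-text values against an agreed
    # controlled vocabulary, flagging unmappable values for human review.
    controlled_vocabulary = {"tablet", "capsule", "oral solution"}       # invented terms
    synonyms = {"tab": "tablet", "tabs": "tablet", "caps": "capsule"}    # invented mapping

    def normalize(value):
        cleaned = value.strip().lower()
        cleaned = synonyms.get(cleaned, cleaned)
        return cleaned if cleaned in controlled_vocabulary else None

    for raw in ["Tab", "Oral solution", "lozenge"]:
        mapped = normalize(raw)
        print(raw, "->", mapped if mapped else "NOT MAPPED - needs review")

The real exercise is of course larger – agreed vocabularies per attribute, ownership of the mappings, and a review loop for the exceptions – but the principle is the same.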

Source expert help as needed

All of this may sound like a lot of “heavy lifting”, but it is exactly the kind of activity our experts can advise on. We can start by helping life sciences companies put together a strategy based on how data will ideally be used in the future state, and what needs to happen to it to prepare it for migration.

Working alongside the various business subject-matter experts (e.g. the people closest to the product portfolios and the processes involved in managing these), we’ll help scope the work involved and the resources that will be required. We can also help to determine the historical, current, and future role of the respective data, so that only active data is prioritized for preparation (refactoring/clean-up/enrichment) and migration to the new system or platform.

Forewarned is forearmed, as they say. Although preparing data so that it’s migration-ready may sound like an onerous undertaking, it is far better to know this and be able to do something about it ahead of time than to be caught out once a critical technology project is already well advanced – by which time it may be too late to address fundamental data transformation considerations.

To sound out our experts about the data preparations needed for an upcoming new systems project, please get in touch – I’m happy to support you.

Get in touch with us


Don’t be sidetracked by EMA’s DADI curveball: data is still the goal

Yes, the initial plans have altered to buy everyone a bit more time, but the broader plan is still on track – to make product data submissions at least as important as electronic documents (and ultimately the priority/default) in all communications with the regional Regulator.

In other words, whatever tweaks companies and software vendors make to their roadmap and immediate capabilities over the next few months, these should not detract from – nor dilute – the overarching plan to get regulated product data in complete, consistent and up-to-date order.

That’s so that when it is time to migrate to – and go live – with an optimized new regulatory information/submissions management platform or system, there is comprehensive, compliant, high-quality data ready to run across it.

And of course, the foundational work done now will stand companies in good stead for when other regions across the world implement their own take on IDMP – given that dynamic data exchange is where all of this will go, globally, in due course.

What’s changed, and how does it affect you?

To recap what’s changed in the interim, EMA’s Digital Application Dataset Integration (DADI) interface – a project that has been evolving alongside IDMP and SPOR (see https://esubmission.ema.europa.eu/ for more information) – will in the short term, from April 2023, serve as the means for entering electronic application forms through interactive web forms.

This will enable companies to pull data from the PMS (migrated there from xEVMPD and other EMA databases) and to generate both a human-readable PDF and a machine-readable XML format. Both formats will be submitted along with the eCTD; this part of the process will not change.

This latest development is an important step toward the IDMP goal of reusing data from PMS, and the first step toward the IDMP standardization of data. EMA will support this approach for variation forms only at this point, extending it to initial applications and renewals later.

Ultimately, EMA’s plan is for standardized medicinal product data currently held in the xEVMPD database to be enriched for the PMS, where fuller, more granular, IDMP-compliant medicinal product detail will be kept and updated over time.

There are some practical challenges still to be worked out, such as how IDMP detail that is currently missing will be added to PMS, and how internal company RIM systems and EMA’s PMS database will get to a point of being able to exchange data more seamlessly without requiring manual data re-entry. But for now, this is a chance for companies to update and correct their data with EMA’s dictionary through the familiar XEVMPD process. (Ultimately, FHIR – the global industry standard for passing healthcare data between systems – will support more dynamic data exchange/sync’ing between companies’ RIM systems and EMA’s PMS.)
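For illustration only, the kind of structured product record that this exchange is intended to carry might be sketched as below. The identifier system and product name are invented, the structure is heavily simplified, and the current HL7 FHIR specification remains the authoritative reference for the actual resource definitions:

    # Illustrative sketch only: a simplified, FHIR-style product record of the
    # kind a RIM system might exchange with EMA's PMS in future.
    import json

    product_record = {
        "resourceType": "MedicinalProductDefinition",
        "identifier": [{"system": "urn:example:company-rim", "value": "PRD-000123"}],  # invented
        "name": [{"productName": "Examplomab 100 mg solution for injection"}],         # invented
    }

    print(json.dumps(product_record, indent=2))

The point is less the exact field names than the direction of travel: structured, granular product data that can be exchanged and reused, rather than re-keyed.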

Rather than play for time, here are 4 opportunities that the interim DADI move makes possible, as well as 5 next steps that companies should take to stay on track with their data preparations:

4 benefits to exploit

  1. The use of the DADI interface for getting data from the EMA PMS allows life sciences companies and software providers to take a breath as they prepare for full-scale IDMP implementation and compliance. FHIR-based submissions via API have been pushed back for now (this will still happen, just not within the next year).
  2. The industry is now less dependent on immediate technology changes. There is no need for their RIM systems to support DADI, as at this point data won’t flow directly between RIM records and EMA’s PMS.
  3. EMA’s roadmap allows implementation to happen in manageable chunks. The ‘DADI first’ approach enables Product (PMS) data re-use, starting with variation forms, which account for the largest proportion of regulatory submissions.
  4. This is a chance to reset or adapt IDMP/regulatory data strategies, catch up, and prepare to deliver maximum benefits and efficiencies from the preparations (e.g. by doing sufficient groundwork to enable a confident system migration, when the time comes).

5 things to do next, for pharma companies

  1. Set or re-set your strategy and position around regulatory, structured data.
  2. Collect and assess product data and prepare this for compliance (scoping and getting stuck into any data enrichment now) – so that it addresses the granularity of IDMP requirements and maps to EMA’s dictionaries/vocabularies.
  3. Prepare to support xEVMPD e-submissions based on the new data model and all of the levels of detail that are expected, to be ready for the future and to enable a rapid transition to IDMP.
  4. Improve your ability to respond and adapt quickly to further changes to regulatory requirements. EMA’s switch to using DADI to submit data to the PMS highlights just how swiftly the roadmap can change, and why an Agile approach to project management is so important.
  5. Start to migrate your content into the new target system as soon as possible. If you have started by collecting data in local Excel files, this data could become outdated if it is not maintained. Don’t leave thoughts of migration until the last minute – plan for this now, as part of your overall scoping work.

To maximize your IDMP system migration, or to discuss your best route to IDMP data preparation as you plan for this, please fill out the contact form below and we’ll put you in touch with our experts.

Get in touch with us


Beware the inflated promises of AI in accelerating data migration

In many cases, this means updating or establishing new systems and migrating huge volumes of content across to the new environment – supplementing or enriching the data in the process so that it better meets ongoing needs and is aligned with IDMP controlled value lists. Effective migration is likely to involve locating and transferring information from hundreds of thousands of content files currently residing in rudimentary file shares, where a lot of potentially valuable data exists in unstructured form within single-use documents.

Given the scale of the task before them, and the scarcity of spare capacity to oversee the work manually, it is easy to appreciate why Regulatory teams and supporting IT departments might look to artificial intelligence (AI) as a means of expediting the data extraction and enrichment process, as companies look to convert unstructured information into searchable and re-usable data assets in the new target system.

Managing expectations

Certainly, AI specialist tool and service providers have made some pretty lofty promises about the technology’s potential, accuracy, and scope. With training, they say, machine learning solutions can hit 95 per cent accuracy in finding, identifying, tagging and lifting the information that is needed from commonly-used documents and other unstructured content sources. To an overstretched RA team drowning in an ocean of material, spanning metaphorical warehouses and continents in its product coverage, this promise of reliable task automation is undeniably appealing.

BUT – and there is a huge caveat here – 95 per cent accuracy, even if attainable, is still too risky for validated use cases, such as regulatory submissions preparation and management. The trouble with monitoring AI algorithm performance is that it is all based on statistics and trends: details of where the algorithm is doing well or less well are much vaguer. In other words, while 95 per cent overall accuracy might sound impressive, the margin of error remains all too great if no one can be quite sure where any gaps or errors are arising. And if humans have to go through everything to check, any time and labor saving up to this point will have been for nothing.

Don’t despair: it’s not a case of all or nothing

This needn’t be cause for outright disillusionment, though. For one thing, there are rules-based processes that provide more predictability than AI and can instead be used to take on much of the legwork while retaining the assurance of human quality control.
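As a simple illustration of that rules-based alternative, the sketch below parses an invented filename convention and routes anything that doesn't match to a human review queue, rather than tagging it silently and possibly incorrectly:

    # Illustrative sketch only: rules-based metadata extraction from a predictable
    # filename convention, with everything unmatched routed to manual review.
    import re

    # Invented convention: <product>_<doctype>_<country>_<yyyymmdd>.pdf
    pattern = re.compile(
        r"^(?P<product>[A-Za-z0-9]+)_(?P<doctype>[A-Za-z]+)_(?P<country>[A-Z]{2})_(?P<date>\d{8})\.pdf$"
    )

    filenames = ["examplomab_SPC_DE_20230115.pdf", "scan0001.pdf"]

    for name in filenames:
        match = pattern.match(name)
        if match:
            print(name, "->", match.groupdict())
        else:
            print(name, "-> sent to manual review queue")

Because the rules are explicit, it is always clear why a given document was tagged the way it was – which is exactly the traceability that validated processes demand.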

Meanwhile, AI tools and techniques can play a useful part in non-validated content management – for example, for enriching/adding metadata to archived content which is no longer used in live submissions, but which has to be retained (e.g. for anything from 10 to 25 years) for compliance reasons. Here, smart automation offers a way to breathe new life and value into legacy records, rendering them more immediately searchable and useful. If, as part of an AI-driven data enrichment/meta-tagging exercise, 5% of the content is missed or indexed incorrectly, someone can perform a manual search or manual checks without any risk to submissions performance, marketing authorization status, or patient safety.

As ever, it’s a case of horses for courses, and for now AI promises more than it can deliver for validated regulatory content migration purposes. But that doesn’t mean there isn’t an alternative to sheer manual graft, and you can count on fme to harness the most effective tools and processes for each project.

Get in touch with us


Content consolidation: what do you really need from a unified DMS?

It could be the accelerating pace of technology change, which has created a risk of systems being unsupported and left behind. Or perhaps it’s the external market and its growing demand for agility and deftness – which is hard to achieve when critical content is strewn across the global organization – that is fueling the desire for transformation.

The chances are, it’s a combination of these scenarios that’s triggering the change now being considered.

Calculate what you have now – and where you want to get to

A successful DMS consolidation or content-based digital transformation project starts with understanding the primary business objectives and the strategic emphasis of the initiative.

To help scope the project, think carefully about – and/or seek professional help to determine – what you’re trying to achieve and why, and therefore who needs to be involved in any decision-making.

This process will also help with defining the project parameters, and in identifying and excluding data and content that doesn’t need to be moved across to the new, modern system or platform. In other words, it will help sift out the information and ‘paperwork’ that can be deleted, archived in a cheaper system, or left where it is. Trying to move everything across to the new set-up could waste valuable time and budget, without delivering any benefit – especially if that content has little in the way of metadata/smart tagging to aid future filing and rediscovery.

Categorize your content

Typically we group a company’s documentation into four main categories:

  1. Operational documents (e.g. system-supporting documentation and project documentation);
  2. Organizational documents (e.g. policies, procedures and SOPs);
  3. Historical documents (which could span categories 1 and 2 – documents that are no longer current and are being retained primarily for compliance reasons); and
  4. ‘Unknown’ documentation (falling under any of the above categories, potentially the result of documents having been incorrectly stored or labelled, or inherited as part of a company acquisition).
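For illustration, a coarse, rules-based first pass at this categorization might look like the sketch below; the document types and rules are invented, and anything unrecognized defaults to 'unknown' for human review:

    # Illustrative sketch only: sort documents into the four categories described
    # above, with anything unrecognized defaulting to 'unknown'.
    def categorize(doc_type, is_current):
        operational = {"system documentation", "project documentation"}
        organizational = {"policy", "procedure", "sop"}
        if not is_current:
            return "historical"
        if doc_type.lower() in operational:
            return "operational"
        if doc_type.lower() in organizational:
            return "organizational"
        return "unknown"

    print(categorize("SOP", is_current=True))                     # organizational
    print(categorize("project documentation", is_current=False))  # historical
    print(categorize("meeting notes", is_current=True))           # unknown

The 'unknown' pile is usually the most revealing: its size is a good early indicator of how much manual investigation the migration will really need.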

Understanding the current format of all of this content will be useful too – what proportion is in paper format with wet signatures, for instance; and, where there are scanned PDFs stored in file-shares, are these viable as primary records?

Be ruthless in deciding what to transfer

As teams classify their content and establish their current state, they will begin to build a picture of the relative importance of their documentation. This in turn will help inform the requirements of the new centralized system/unified platform, and – by extension – the preparation and migration work that will be involved in cleaning up and de-duplicating content, checking or adding metadata, and migrating everything to the new set-up.

By sorting documentation from across the organization into formal/less formal/informal content, and quantifying it, companies will start to gain clearer insight into the new system capacity they will need (both now and in the future), and how much time and budget to allow for the content verification, preparation and migration work.

Understanding the role and relative importance of each category of content will also help inform any automated treatment of information and documents in the new system – in keeping with data protection/records retention policy enforcement and tracking – across the lifecycle of each asset.

Setting expectations, identifying potential solutions

With a clear idea of the scope and scale of content migration requirement, as well as the long-term capacity and capabilities required of the new system, the process of going out to tender should be much more streamlined – because the business will have a good grasp of what a fit-for-purpose solution should look like.

But none of this will guarantee a perfect match. To achieve a streamlined single point of access and source of truth for company content, companies must also go in with realistic expectations and an understanding of what they may need to give up in return (such as obsolete legacy investments and bespoke, in-house systems).

In sacrificing and writing off older capabilities, companies will be in a better position to benefit from smarter integration; modern, agile project management options; and the opportunities to be inherently more ‘data driven’ and conformant with the latest industry regulations.

Meanwhile, awareness of, and provision for, emerging and longer-term requirements will be vital to securing a future-proof new set-up. This includes ‘cloud readiness’ if companies still aren’t quite prepared to make that leap today with their new platform (a scenario which is much rarer now).

Last but not least, successful project delivery will depend on all relevant business stakeholders and subject-matter experts being included on the transformation journey from day one. As well as maximizing buy-in and acceptance of the transition, this will ensure that processes can be optimized. If, today, there are multiple incompatible processes for managing regulatory registrations, for example, the teams involved can discuss and work towards commonality, so that all are able to benefit fully from the new centralized content resource.

As ever, preparation is everything in ensuring successful project delivery and our experts are on hand to advise on any aspect of this critical scoping work as companies look to a more dynamic content management future.

Get in touch with us