Successful global data transformation projects start with strong global teams

It could be the accelerating pace of technology change, which has created a risk of systems being unsupported and left behind. Or perhaps it’s the external market and its growing demand for agility and deftness – which is hard to achieve when critical content is strewn across the global organization – that is fueling the desire for transformation.

The chances are, it’s a combination of these factors that’s triggering the change now being considered.

1. Celebrate diversity

One of the big advantages of a diverse global team is the range of perspectives it brings to a new initiative. Successful project teams embrace that diversity and treat everyone as equally important to the delivery outcomes, irrespective of their relative seniority or location. There should be no ‘us’ versus ‘them’: to maximize results, it’s important to get everyone working together as one team in which everyone is – and feels – valued.

To achieve this, foster a genuine interest in and appreciation for the cultural differences across the global team. This means factoring in variances in communications preferences to encourage everyone to contribute fully, and respecting differences in preferred working hours – as well as personal boundaries (e.g., evenings/weekends/holidays/family time being sacred).

2. Keep everyone in the loop – and communicate purposefully

Effective communication takes commitment, time and hard work. It’s something that needs to be done continuously and proactively. Invest energy in meetings and, even if travel remains difficult, aim for face-to-face contact where possible and use cameras if sessions need to be conducted via Teams, Zoom, etc. In addition to client/user meetings, schedule weekly team calls which allow colleagues to freely exchange ideas and air issues without the client being present. For more on this topic, please see Dirk Bode’s blog Success Factors for Effective and Efficient Meetings.

It’s important to accept failure, too, promoting a willingness to openly discuss anything that has gone wrong, how to fix it, and how to avoid similar problems in future.

Share successes far and wide, too. It will help create a sense of achievement and pride, and the positivity will rub off – keeping people engaged, raising self-esteem, and ensuring that individuals feel valued. Don’t restrict this to the immediate team; let everyone see what’s going well so they can see progress and join in with celebrating even the smallest wins.

3. Make team-building an ongoing priority

Team building isn’t – nor should it be – a one-off activity. It should be continuous, deepening connections and commitment while allowing for flexibility. Regular in-person meetings, where possible, will help with this, but again make the most of video meetings where these are the next best option.

Embrace any newcomers directly into the team and be proactive in ensuring that others are aware of them, care about them, and will include them fully – drawing out their strengths and allowing them to shine.

When we recently conducted a whole-team meeting at fme, we began with short introductory presentations by each team member, for the benefit of the new hires. Everyone prepared the following three slides:

    1. Tell us a bit about your career path before you joined fme
    2. Describe your current role and remit at fme, what you like about it, and what could be improved
    3. Share your personal interests and hobbies, where you live, and anything else we should know about you

These presentations really broke the ice; everyone was able to find some form of connection to other team members and the collaboration started off from a much more personal level.

4. Look for evidence of strong global team capability in your strategic partners

As the world becomes smaller, globally-managed projects will become increasingly common, and also critical to ensuring greater international coordination, consistency, and efficiency in the delivery of outcomes.

But, as global endeavors become the norm, it can be tempting to assume that all international capabilities are the same when engaging external consultants, program management companies, and technology service providers. This is a risky assumption if it leads organizations to judge and select partners based primarily on cost or KPIs, which is likely to result in corners being cut and successful delivery being undermined.

The only safe way to gauge the global capability of a new provider is to look closely at their track record, either by taking up references from across their existing client base or by scrutinizing the case studies on their web site. It’s also a good idea to review the typical duration of those relationships.

fme is well known for its impressively high client retention rate, and for trusted partnerships which often go back 10-15 years, spanning multiple projects. Importantly, we take client feedback very seriously too – conducting surveys with clients as often as 2-4 times a year, to ensure project quality is consistently high, and to enable continuous improvement across our processes, our software, and our management consultancy practice. Please watch the video below if you are interested in some client voices on this topic.

Get in touch with us


Content consolidation: what do you really need from a unified DMS?

It could be the accelerating pace of technology change, which has created a risk of systems being unsupported and left behind. Or perhaps it’s the external market and its growing demand for agility and deftness – which is hard to achieve when critical content is strewn across the global organization – that is fueling the desire for transformation.

The chances are, it’s a combination of these factors that’s triggering the change now being considered.

Calculate what you have now – and where you want to get to

A successful DMS consolidation or content-based digital transformation project starts with understanding the primary business objectives and the strategic emphasis of the initiative.

To help scope the project, think carefully about – and/or seek professional help to determine – what you’re trying to achieve and why, and therefore who needs to be involved in any decision-making.

This process will also help with defining the project parameters, and in identifying and excluding data and content that doesn’t need to be moved across to the new, modern system or platform. In other words, it will help sift out the information and ‘paperwork’ that can be deleted, archived in a cheaper system, or left where it is. Trying to move everything across to the new set-up could waste valuable time and budget, without delivering any benefit – especially if that content has little in the way of metadata/smart tagging to aid future filing and rediscovery.

Categorize your content

Typically we group a company’s documentation into four main categories:

    1. Operational documents (e.g. system-supporting documentation and project documentation)
    2. Organizational documents (e.g. policies, procedures and SOPs)
    3. Historical documents (which could span categories 1 and 2: documents that are no longer current and are being retained primarily for compliance reasons)
    4. ‘Unknown’ documentation (falling under any of the above categories, potentially the result of documents having been incorrectly stored or labelled, or inherited as part of a company acquisition)
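
Where documents already carry basic metadata, a first pass at this categorization can even be scripted. The sketch below is purely illustrative and assumes hypothetical fields such as doc_type and is_current rather than any specific DMS schema.

```python
# Illustrative first-pass categorization sketch.
# Field names ("doc_type", "is_current") are hypothetical examples,
# not taken from any specific DMS schema.

OPERATIONAL = {"system-supporting documentation", "project documentation"}
ORGANIZATIONAL = {"policy", "procedure", "sop"}

def categorize(doc: dict) -> str:
    """Assign a document to one of the four working categories."""
    doc_type = (doc.get("doc_type") or "").lower()
    if doc_type in OPERATIONAL:
        category = "operational"
    elif doc_type in ORGANIZATIONAL:
        category = "organizational"
    else:
        return "unknown"          # mislabelled, unclassified or inherited content
    if not doc.get("is_current", True):
        return "historical"       # retained primarily for compliance
    return category

docs = [
    {"doc_type": "SOP", "is_current": True},
    {"doc_type": "Project Documentation", "is_current": False},
    {"doc_type": None},
]
print([categorize(d) for d in docs])  # ['organizational', 'historical', 'unknown']
```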

Understanding the current format of all of this content will be useful too – what proportion is in paper format with wet signatures, for instance; and, where there are scanned PDFs stored in file-shares, are these viable as primary records?

Be ruthless in deciding what to transfer

As teams classify their content and establish their current state, they will begin to build a picture of the documentation’s relative importance. This in turn will help inform the requirements of the new centralized system/unified platform, and – by extension – the preparation and migration work that will be involved in cleaning up and de-duplicating content; checking or adding metadata; and migrating everything to the new set-up.

By sorting documentation from across the organization into formal/less formal/informal content, and quantifying it, companies will start to gain clearer insight into the new system capacity they will need (both now and in the future), and how much time and budget to allow for the content verification, preparation and migration work.
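
For example, once content has been tagged by formality, a rough capacity and effort estimate can be produced by aggregating counts and volumes. The classes and figures below are hypothetical.

```python
# Hypothetical inventory figures; volumes in GB. Used only to illustrate
# how a first capacity and effort estimate might be rolled up.
inventory = [
    {"class": "formal",      "documents": 120_000, "volume_gb": 340},
    {"class": "less formal", "documents": 450_000, "volume_gb": 610},
    {"class": "informal",    "documents": 900_000, "volume_gb": 150},
]

migrate_classes = {"formal", "less formal"}   # informal content stays behind or is archived
to_migrate = [item for item in inventory if item["class"] in migrate_classes]

total_docs = sum(item["documents"] for item in to_migrate)
total_gb = sum(item["volume_gb"] for item in to_migrate)
print(f"Documents to migrate: {total_docs:,}; estimated volume: {total_gb} GB")
# Documents to migrate: 570,000; estimated volume: 950 GB
```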

Understanding the role and relative importance of each category of content will also help inform any automated treatment of information and documents in the new system – in keeping with data protection/records retention policy enforcement and tracking – across the lifecycle of each asset.
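
As a rough illustration of what such automated treatment might look like, the sketch below maps each category to a hypothetical retention rule. The periods and dispositions are invented placeholders; real values would come from the organization’s records retention policy.

```python
# Hypothetical retention-rule sketch: the periods and dispositions below are
# placeholders, not recommendations.
from datetime import date, timedelta

RETENTION_RULES = {
    "operational":    {"retain_years": 5,  "disposition": "archive"},
    "organizational": {"retain_years": 10, "disposition": "review"},
    "historical":     {"retain_years": 15, "disposition": "archive"},
    "unknown":        {"retain_years": 0,  "disposition": "quarantine"},
}

def disposition_due(category, last_effective, today=None):
    """Return True once the document has passed its retention period."""
    today = today or date.today()
    rule = RETENTION_RULES[category]
    return last_effective + timedelta(days=365 * rule["retain_years"]) <= today

print(disposition_due("operational", date(2015, 1, 1), today=date(2022, 1, 1)))  # True
```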

Setting expectations, identifying potential solutions

With a clear idea of the scope and scale of the content migration requirement, as well as the long-term capacity and capabilities required of the new system, the process of going out to tender should be much more streamlined – because the business will have a good grasp of what a fit-for-purpose solution should look like.

But none of this will guarantee a perfect match. To achieve a streamlined single port of call and source of truth for company content, companies must also go in with realistic expectations and an understanding of what they may need to give up in return (such as obsolete legacy investments and bespoke, in-house systems).

In sacrificing and writing off older capabilities, companies will be in a better position to benefit from smarter integration; modern, agile project management options; and the opportunities to be inherently more ‘data driven’ and conformant with the latest industry regulations.

Meanwhile, awareness of, and provision for, emerging and longer-term requirements will be vital to securing a futureproof new setup. This includes ‘cloud readiness’ if companies still aren’t quite prepared to make that leap today with their new platform (a scenario which is much rarer now).

Last but not least, successful project delivery will depend on all relevant business stakeholders and subject-matter experts being included on the transformation journey from day one. As well as maximizing buy-in and acceptance of the transition, this will ensure that processes can be optimized. If, today, there are multiple incompatible processes for managing regulatory registrations, for example, the teams involved can discuss and work towards commonality, so that all are able to benefit fully from the new centralized content resource.

As ever, preparation is everything in ensuring successful project delivery, and our experts are on hand to advise on any aspect of this critical scoping work as companies look to a more dynamic content management future.

Get in touch with us


The painful cost of overestimating data quality in migration

Given the critical role of data as part of high-priority digital transformation programs (something Ian Crone blogged about recently), it’s surprising – alarming, even – how much is assumed about it as companies go into new technology projects. Until they know differently, IT project teams and their business leads tend to operate on the premise that the chosen new Regulatory Information Management (RIM) system will transform the way they work and deliver the desired outcomes. The corporate data it draws on is more of an afterthought. No one questions its quality and complexity – until it’s too late.

It’s this risky assumption that must be challenged, something that should happen much earlier along the project timeline – long before any system/platform implementation has been set in motion, and before any deadlines have been agreed. The inherent risk, otherwise, is that the implementation project will have to stop part way in – when it becomes obvious that the data sets involved are not all they could or should be.

Identifying discrepancies

Often, it’s the smallest discrepancies that cause the biggest hiccups in data consistency. Simple inconsistencies, such as using different abbreviations for the same compound or misspelling a product name, can result in duplicates appearing in a system. More complex issues arise when the project is linked to IDMP preparations – whether as a primary focus or as a secondary benefit – because there may be fields yet to be completed or that require content from other sources. Multiply this by potentially millions of data points and the risk becomes clear.
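
To make the point concrete, here is a minimal sketch of how inconsistent abbreviations and spacing can be collapsed to reveal likely duplicates. The synonym map and sample records are invented for illustration only.

```python
# Minimal normalization sketch for spotting likely duplicates caused by
# inconsistent abbreviations or misspellings. Synonyms and records are invented.
from collections import defaultdict

SYNONYMS = {"asa": "acetylsalicylic acid", "acetyl salicylic acid": "acetylsalicylic acid"}

def normalize(name: str) -> str:
    key = " ".join(name.lower().split())   # lowercase, collapse whitespace
    return SYNONYMS.get(key, key)

records = ["ASA", "Acetylsalicylic Acid", "acetyl  salicylic acid", "Ibuprofen"]
groups = defaultdict(list)
for r in records:
    groups[normalize(r)].append(r)

duplicates = {k: v for k, v in groups.items() if len(v) > 1}
print(duplicates)
# {'acetylsalicylic acid': ['ASA', 'Acetylsalicylic Acid', 'acetyl  salicylic acid']}
```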

It could be that multiple different RIM systems are being consolidated into one new one. As each system is trawled for its constituent information, consideration needs to be given to differing formatting, data duplication and variability in data quality. There is a myriad of dependencies and checkpoints between systems to manage in order to ensure the success of the content migration project.

Inevitably there will be interdependencies between the data in different systems too, and links between content (between source RIM data and stored market authorisation documents, for instance). All of this needs to be assessed, so that project teams understand the work that will be involved in consolidating, cleaning and enriching all of the data before it can be transferred to and rolled up in any new system.
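
A simple cross-check of those links might look like the following sketch, which flags documents referenced from RIM records but missing from the document store, and vice versa. The record and document IDs are hypothetical.

```python
# Hedged sketch: cross-checking that every document referenced from RIM records
# actually exists in the document store, and vice versa. IDs are illustrative.

rim_records = {"REG-001": ["DOC-10", "DOC-11"], "REG-002": ["DOC-12"]}
document_store = {"DOC-10", "DOC-11", "DOC-99"}

referenced = {doc for docs in rim_records.values() for doc in docs}
broken_links = referenced - document_store      # referenced but missing
orphans = document_store - referenced           # stored but never referenced

print(broken_links)  # {'DOC-12'}
print(orphans)       # {'DOC-99'}
```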

The sobering costs of project recalibration

If a system implementation is already underway when data issues are identified, project teams must recalculate and recalibrate, which can incur significant cost and effort. Before they know it, a project that was scheduled to take a year needs an additional three months to clean up and enhance the data.

Processing change requests will require key resources that are now committed elsewhere – not to mention additional budget that hasn’t been provided for (a 25%+ hike in costs is not unusual once data quality issues are identified). Meanwhile, there are users waiting for capabilities that now aren’t going to materialize as expected: delays are inevitable. Data is critical to the business, and without the right quality of data, a new system cannot go live.

All of this could be avoided if the data migration implications of a project were analysed, assessed, understood and scoped fully ahead of time. The good news is that this oversight is relatively easy to rectify – for future initiatives at least. It’s just a case of calling in the right experts sufficiently early in the transformation definition and planning process to perform the appropriate analyses.

More on our Content Migration Assessment

Get in touch with us


Give data migration its own RFP

In this context, and to improve their own agility and responsiveness, Research and Development (R&D) organizations have now set out two areas of renewed interest for strategic focus and investment. One is biobanking, as the demands around clinical trial sample storage soar. The other is the need to strengthen data assets, as the ability to apply these confidently and swiftly across all kinds of regulatory processes becomes crucial to speed to market.

The data imperative – which surrounds the integrity, quality and traceability of regulated product data – concerns all pharma organizations. The driver for change could be a regulatory information management (RIM) consolidation project following a merger; it might be an initiative to standardize on IDMP fields and vocabularies; or an attempt to bring new traceability to medical device or cosmetics manufacture. But, too often, companies set the wheels in motion and start to implement a new business solution, before they consider the work that might be necessary to vet and prepare data so that it can be migrated reliably into the target IT system and/or new data model.

Choosing the new system first could be putting the cart before the horse

In some cases, project owners assume that any matters relating to data assessment, preparation and migration will be taken care of by the new system vendor, and that this will be addressed as part of their proposal. It’s only when the analysis phase of the project begins – and the realization dawns that the incoming data is messy, conflicting or incomplete – that they begin to understand they have underestimated, and skimped on, this critical cornerstone of a successful delivery.

Instead of expecting software vendors (which typically lack the depth of data experience) to provide for data assessment, cleaning and enhancement as vital preparation work ahead of any data migration, the only real way to make a proper job of it is to itemize it separately – in other words, break out this work with a separate request for information (RFI), or request for proposal (RFP).

Avoiding the tough decision of compromise vs bill shock

By separating out data-specific activity, companies will also save themselves from bill shock as vendors are forced to bring in specialist partners to rescue a project – at short notice, and with their own mark-up on the extra costs. If the parameters are known much earlier in the project cycle, the data preparation and migration work can be more accurately planned for and integrated more seamlessly into the overall deployment – with much less risk of the project overrunning or exceeding its budget.

It’s one thing to prioritize cost when sizing up vendors for a new system project, but if this introduces new risk, because the required specialist skills and resources have not been allowed for, it is a false economy. Certainly, the work could end up costing a lot more and taking a lot longer if critical data preparations turn into last-minute firefighting. Far better to have the right skills cued up from the outset, with a clear remit which includes responsibility for servers, security and more during the data preparation and migration phase.

In 2022, a whole range of digital transformation drivers, including IDMP compliance preparations and improved traceability, will see new system implementation projects and associated data migration initiatives increase. To maximize success, it’s definitely advisable to separate out your data requirement and prioritize it from the outset, so that any system project builds on solid foundations.

More on our Life Sciences Portfolio

Get in touch with us


Validation Insights for an Off-the-Shelf Quality Management System

Don’t underestimate the time to validate a new software system

Normally you need a validation expert in your team to guide you through the process, which is a project in its own right, running in parallel to the actual technical implementation. It is nearly impossible to revalidate an existing system, so it is essential that you do it correctly in the first place. The importance and criticality of the validation adds a lot of time and requires additional budget for the project. The time to validate a new software system is often wildly underestimated, and experience/training is necessary to know how to do it properly so that an auditor will not find gaps in your documentation.

According to GAMP 5, there are four different application categories for software products related to good manufacturing practice, and based on these categories there are different validation deliverables to be created, meaning the effort for the initial validation can vary a lot from one software component to another.

[Table: GAMP 5 Categories Overview]

The table above identifies Category 1 and Category 3 in a very clear manner. Category 4 and Category 5 can be a bit trickier to define, depending on the software to be implemented. To be in accordance with GAMP 5 it is important to understand the difference between Category 4 – Configuration and Category 5 – Customization.

  • Configuration is the modification of standard functionality of a software product to meet business process or user requirements – for example, defining text strings for drop-down menus, turning software functions on or off, graphical dragging and dropping of information elements, and creating specific reports.
  • Customization means writing code, scripts or procedures to meet business requirements using an external programming language (such as C++, .NET or SQL). A simple illustration of the contrast follows below.
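
To make the distinction tangible, the sketch below contrasts the two; it is illustrative only and does not reflect any specific vendor’s product or API.

```python
# Illustrative contrast only; neither part reflects any specific vendor's API.

# GAMP 5 Category 4 style "configuration": declaring values the standard
# product already knows how to use, e.g. entries for a drop-down menu.
deviation_classification_dropdown = ["Minor", "Major", "Critical"]
audit_trail_enabled = True

# GAMP 5 Category 5 style "customization": new code implementing behaviour
# the product does not provide out of the box.
def escalate(deviation: dict) -> str:
    """Custom rule: route critical deviations straight to the QA head."""
    if deviation["classification"] == "Critical":
        return "qa_head"
    return "qa_reviewer"

print(escalate({"classification": "Critical"}))  # qa_head
```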

Custom applications developed to meet the specific needs of regulated companies

A custom application (Category 5), while being the most flexible in fulfilling business user requirements, will require the most effort to validate, as everything must be documented. The best way to do this is to follow the V-Model and keep an end-to-end mapping throughout all documents authored in the different project phases. The most efficient approach is to create the technical documentation at the end of the development phase, because every change in the development scope will also lead to a change in the documentation. (Of course, this is no insider secret; we all know how often scope changes.) Every change to the system takes just a few seconds on the technical side, but it can take days to update the documentation and route it for approval.
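
One practical way to keep that end-to-end mapping honest is to script a simple traceability check, for example flagging requirements that no test case covers. The requirement and test case IDs below are hypothetical.

```python
# Hypothetical traceability data: which test cases claim to cover which requirements.
requirements = {"URS-01", "URS-02", "URS-03"}
test_coverage = {
    "TC-01": {"URS-01"},
    "TC-02": {"URS-01", "URS-03"},
}

covered = set().union(*test_coverage.values())
uncovered = requirements - covered
print(uncovered)  # {'URS-02'} – a gap an auditor would also find
```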

Anyone who has gone through the process of implementing a new QM system and had to determine the correct category at the beginning has likely realized that this is not as easy as it sounds. A misstep early in the classification of the system can lead to validation documents and efforts being greater than expected in the end. First, defining the deliverables to validate the system – including a Validation Master Plan, a User Requirements Specification, and System and User Test Scripts – is straightforward, and there are many resources online to guide you on the documents that are needed. Next, add the time required for all of these documents to be written, reviewed and approved, as well as for the formal execution of test scripts with post-execution approval. On top of these efforts there will likely be some issues identified, and every resulting deviation will need to be tracked, evaluated, and resolved, or accepted as a known error. What started out as a few documents has ballooned into a robust documentation package that will easily add several weeks to your planned timeline.

Configured products that meet user-specific needs

On the other hand, you may be implementing an “off the shelf” or pre-configured QM system (Category 4), for which the technical part of the validation has already been done by the vendor and only the changes and customizations need to be validated. Here the business users and Quality organization will provide guidelines to determine whether the software supplier’s documentation can be accepted as-is, or if additional vendor qualification or retesting is required to satisfy business expectations. The testing can turn into a bit of a grey area: what is really OOTB and already covered, what will require detailed testing, and what will be sufficient to fulfill both validation and business needs. Fairly often a risk-based testing approach is taken, classifying requirements as high, medium or low risk to the business process and then aligning the testing to this assessment. This can save a lot of additional time.
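
A minimal sketch of such a risk-based matrix might look like this; the risk levels, test depths and requirement IDs are illustrative placeholders to be agreed with the Quality organization.

```python
# Hedged sketch of a risk-based testing matrix: risk levels and test depths
# are illustrative placeholders, not a prescribed methodology.

TEST_DEPTH = {
    "high":   "full scripted testing with documented evidence",
    "medium": "scripted testing of critical paths",
    "low":    "rely on vendor documentation / OOTB verification",
}

requirements = [
    {"id": "URS-01", "description": "Electronic signature on approval", "risk": "high"},
    {"id": "URS-02", "description": "Custom report layout",             "risk": "low"},
]

for req in requirements:
    print(f'{req["id"]}: {TEST_DEPTH[req["risk"]]}')
```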

There is a second option to the Category 4 “off the shelf” solution that layers in the software as a service (SaaS) model. With this approach, imagine implementing a fully validated QM system as a cloud solution, fully pre-configured and with all validation documents already provided, and updated by the vendor with each subsequent upgrade that is pushed out to your hosted environment. You have just cut your timeline in half – from a validation perspective this sounds great, but from a user perspective it might not be.

Users will be wondering: Will this pre-configured QM system behave as we need it to for our existing processes? How much can we still adapt it to our needs and, if so, what does this mean for the validated state of the QM system? When implementing a SaaS QM system, the “revolutionary” idea is that you treat each update as a change. By definition, a change has limited documentation requirements, which means you don’t need a validation specialist – you will spend less time creating documents and executing test cases, and in the end you will save a lot of time and budget. Just one item to highlight for completeness: periodic vendor upgrades (per the vendor’s schedule) will need to be evaluated for any impact on the changes made previously, to determine whether regression testing and validation documentation updates are required. This common scenario is another reason to keep changes to the application as limited as is acceptable to the business.

Summary

So, in summary the fully pre-configured and validated system will save time and budget because you will receive all of the authored, approved and executed documents from the vendor. There is nothing left that you need to take care of for the base implementation. Any of the application adjustments which need to be done to meet business requirements will be documented as changes. All you need to do is to take the existing technical documentation, capture the changes and test only those, and potentially revisit them during vendor upgrades. It may sound too good to be true, but it is not — we have implemented QMS solutions with this approach and the results speak for themselves.

Overall, yes, validation is scary. But when approached in a smart way it does not have to turn into a huge undertaking. Pre-configured software solutions have advantages and should be favored over full-blown custom solutions. If they are offered in a SaaS model and the business is fine with that, the solution will look even better from a validation standpoint.

Please reach out to us with any questions around this topic. With many years of experience in Life Sciences projects our team lives and breathes the validation methodologies and can help you to make sure that your project will be properly validated, in a smart and effective way, and will withstand any audits — using your own validation methodologies or bringing in templates from our previous experiences.

Get in touch with us


fme Life Sciences is a leading provider of business and technology services supporting the deployment of Content Services and ECM solutions to its clients in the Life Sciences Industry. We act as a trusted advisor and systems integration specialist across the Clinical, Regulatory and Quality and Manufacturing domains in Europe and North America. We do not exclusively recommend or promote any platform or vendor, but rather we focus on providing our clients with an independent perspective of the solutions available in the market.

“Data Discernment – A simplified approach to strategic study and portfolio planning.”

Now, with the rapid innovation in eClinical system technology, a Sponsor can more readily find a CTMS solution that supports exactly their business profile – enhancing strategic insights without unnecessarily encumbering the clinical operations process.

Most small-to-midsize clients focus on two key areas when using CTMS to enhance data accessibility

1. Enrollment Metrics
Key Areas of Concern – patient recruitment and screen failure rate, time from open-to-enrollment to first patient enrolled, patients enrolled but not completing all visits (with breakdown analytics for interim visits, safety events, and/or protocol deviations), target enrollment, and accrual timeline.

2. Site Performance
Key Areas of Concern – study start-up timelines/spend, protocol compliance and deviations, safety reporting, data entry timeliness and accuracy, clinical document quality.

These two areas provide the greatest and earliest indication of protocol success or ongoing risk, as well as return on study investment. And so the question becomes: should the Sponsor bring the necessary data points in-house, using CTMS as a nexus for cross-functional collaboration? Great question.
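
As a simple illustration of the first area, the sketch below computes a couple of the enrollment metrics mentioned above from hypothetical site figures; the field names are invented and not drawn from any particular CTMS data model.

```python
# Illustrative KPI calculation sketch; field names and figures are invented.

site = {
    "patients_screened": 120,
    "screen_failures": 30,
    "patients_enrolled": 90,
    "target_enrollment": 150,
    "days_open_to_first_enrolled": 21,
}

screen_failure_rate = site["screen_failures"] / site["patients_screened"]
enrollment_progress = site["patients_enrolled"] / site["target_enrollment"]

print(f"Screen failure rate: {screen_failure_rate:.0%}")        # 25%
print(f"Target enrollment reached: {enrollment_progress:.0%}")  # 60%
```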

Use the skills, experience, and resources of your CRO to your greatest advantage!

Essentially, CROs have a deeper data lake and access to more robust, well-rounded data points. It’s a volume game, pure and simple. Unless your organization has very specific goals in mind, it likely isn’t worth the cost and resources to duplicate the data collection efforts, particularly if the CRO has been contracted to perform the majority of the study activities.
Another important consideration is application maintenance. When the CTMS application is cloud-based and subject to periodic release updates – like Veeva Vault Clinical – any integration must be tested and maintained to ensure the integrity of both the connection and the data communicated through it. This can be a big resourcing effort, considering the three-times-a-year Veeva Vault release schedule.

Get specific and targeted with meaningful KPIs

When Sponsor goals dictate that it is time to bring data in-house (i.e., it is worth the implementation and maintenance effort), be highly targeted. Choose specific, meaningful Sponsor-priority KPIs to capture in the CTMS environment, then leverage Vault application features to boost efficiency in ongoing management activities. Resist the urge to capture data simply because there is a visible field available or an out-of-the-box report associated with it; if you don’t need it, hide it.

Recap

In this blog series, we discussed the importance of a simplified eClinical system environment, then juxtaposed compliance “have-to-dos” with strategic “want-to-dos” using a simple framework, and voilà – a hybrid governance:maturity map. Using this map, you’re ready to drive innovation both internally and within the industry. And, if you need some extra help, just ask – fme is here to support you!