Digital transformation in life sciences: ensuring data is fit for future purpose

In life sciences, such considerations are often underestimated. Organizations are so eager to untether themselves from the complexity and constraints of legacy systems that they can become distracted from other factors that need to be considered to get the most out of their new investment.

Priority goals may include transforming regulatory information management, to drive greater efficiency, accuracy and visibility, as well as compliance with the evolving demands of regulators. But the scope of even the most dynamic new platform or system will depend on, and be limited by, the business data available. If that data has material gaps, contains significant duplication or errors, or is not aligned with the fields and formats required by the target use cases and the data governance strategy, even the best-planned project will not deliver effectively.

Start by considering what’s possible

As life sciences organizations form their digital transformation strategies and reset their goals, it’s important that they understand the potential opportunity to streamline and improve associated processes – and the way that these new or reframed processes will harness data to deliver step changes in execution and output.

One opportunity, for example, could be to transform the way companies manage their product portfolios – via a more dynamic, finer-grained definition of that portfolio and an end-to-end view of its change management, registration/licensing and commercial status in every market globally.

Another is to harness regulatory information (RIM data) to streamline the way a whole host of other functions plan and operate. There’s a lot of interest now in flowing this core data more fluidly into processes beyond Regulatory Affairs – such as Clinical, Manufacturing, Quality, Safety, and Pharmacovigilance. Rather than each function deploying and managing its own applications and data set to serve a single purpose, as has been largely the case up to now, the growing trend is to take a cross-functional platform approach to data, change, and knowledge management. This means that each team can draw on the same definitive and live information set to fulfil their business need.

All of this is much more efficient, as well as less error-prone – because similar or overlapping data is not being input many different times, in slightly different ways. This, in turn, will expose companies to much lower risk as regulators like EMA start to require simultaneous data-and-document-based submissions for marketing authorizations and variations/updates, which will inevitably see them implement formal cross-checks to ensure information is properly synchronized and consistent.
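
To make the idea concrete, here is a minimal sketch of such a cross-check in Python, assuming structured RIM data and document metadata are both available as simple dictionaries. The field names and values are illustrative assumptions, not an EMA-defined schema.

```python
# A minimal sketch of a data-vs-document cross-check: compare values held as
# structured RIM data with the values stated in the corresponding submission
# documents. All record structures and field names are illustrative.

rim_record = {
    "product_name": "Examplozole 10 mg tablets",   # hypothetical product
    "marketing_authorisation_holder": "Acme Pharma GmbH",
}

document_metadata = {
    "product_name": "Examplozole 10mg tablets",    # subtly different spacing
    "marketing_authorisation_holder": "Acme Pharma GmbH",
}

def find_mismatches(data: dict, document: dict) -> list[str]:
    """Return the fields whose values differ between data and document."""
    return [field for field in data if data[field] != document.get(field)]

for field in find_mismatches(rim_record, document_metadata):
    print(f"Mismatch in '{field}': data says {rim_record[field]!r}, "
          f"document says {document_metadata[field]!r}")
```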

There are no shortcuts to rich, reliable data

The process transformation opportunities linked to all the above are considerable, and they are exciting. However, they rely on the respective teams understanding and harnessing that potential through advanced, proactive planning. By agreeing, collectively, on the scope for greater efficiency, and on the strategic advantages that are made possible through access to more holistic intelligence and insights, teams can start to move together toward a plan that will benefit everyone.

Practically, this will require an investment of time and thought, considering the state and location of current data, and what will need to happen to it to ensure that it is of sufficient quality, completeness and multi-purpose reusability to support improved processes in the future state. Unquestionably, this will also require a considerable amount of targeted work to ensure existing data is aligned and of high quality; that it uses agreed vocabularies; and consistently adheres to standardized/regulated formatting, data governance, and naming conventions.
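
For illustration, here is a minimal sketch of the kind of targeted quality check this groundwork involves, in Python. The field names, the agreed vocabulary and the record structure are all illustrative assumptions, not a prescribed standard.

```python
# A minimal sketch of a pre-migration data quality audit: completeness,
# vocabulary adherence, and duplicate detection over illustrative records.

REQUIRED_FIELDS = {"product_name", "dose_form", "strength"}
AGREED_DOSE_FORMS = {"Tablet", "Capsule", "Oral solution"}  # agreed vocabulary

records = [
    {"id": "P001", "product_name": "Examplozole", "dose_form": "Tablet", "strength": "10 mg"},
    {"id": "P002", "product_name": "Examplozole", "dose_form": "tabl.", "strength": "10 mg"},
    {"id": "P003", "product_name": "Samplytriptan", "dose_form": "Capsule"},
]

issues = []
for rec in records:
    missing = REQUIRED_FIELDS - rec.keys()
    if missing:
        issues.append((rec["id"], f"missing fields: {sorted(missing)}"))
    dose_form = rec.get("dose_form")
    if dose_form is not None and dose_form not in AGREED_DOSE_FORMS:
        issues.append((rec["id"], f"dose form {dose_form!r} not in agreed vocabulary"))

# Duplicate check: the same name + strength entered more than once
seen = {}
for rec in records:
    key = (rec.get("product_name"), rec.get("strength"))
    if key in seen:
        issues.append((rec["id"], f"possible duplicate of {seen[key]}"))
    seen[key] = rec["id"]

for record_id, problem in issues:
    print(record_id, "->", problem)
```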

Source expert help as needed

All of this may sound like a lot of “heavy lifting”, but it is exactly the kind of activity our experts can advise on. We can start by helping life sciences companies put together a strategy based on how data will ideally be used in the future state, and what needs to happen to it to prepare it for migration.

Working alongside the various business subject-matter experts (e.g. the people closest to the product portfolios and the processes involved in managing these), we’ll help scope the work involved and the resources that will be required. We can also help to determine the historical, current, and future role of respective data, so that only active data is prioritized for preparation for migration to the new system or platform (in terms of refactoring/clean-up/enrichment).

Forewarned is forearmed, as they say. Although preparing data so that it’s migration-ready may sound like an onerous undertaking, it is far better to know this and be able to do something about it ahead of time than to be caught out once a critical technology project is already well advanced – by which time it may be too late to address fundamental data transformation considerations.

To sound out our experts about the data preparations needed for an upcoming new systems project, please get in touch – I’m happy to support you.

About the Author

After graduating from SUNY at Delhi in 1989 with an Associate’s Degree in Electrical Technologies, Frank D’Entrone worked for GEC Alstom, servicing clients in power generation. He went on to graduate from Northeastern University in Boston, Massachusetts in 1999 and accepted a position with a biotech company in Boston, entering the Enterprise Content Management (ECM) space. Mr. D’Entrone then worked for IT services organizations, providing ECM consulting services to clients in industries such as Life Sciences, Manufacturing, and Finance. In 2007 he founded DT Technologies Inc., continuing to provide ECM consulting services until entering into a joint venture with fme AG in March 2010. His role at fme US, LLC is President and Head of Business Development.

fme Life Sciences is a leading provider of business and technology services supporting the deployment of Content Services and ECM solutions to its clients in the Life Sciences Industry. We act as a trusted advisor and systems integration specialist across the Clinical, Regulatory and Quality and Manufacturing domains in Europe and North America. We do not exclusively recommend or promote any platform or vendor, but rather we focus on providing our clients with an independent perspective of the solutions available in the market.

Don’t be sidetracked by EMA’s DADI curveball: data is still the goal

Yes, the initial plans have altered to buy everyone a bit more time, but the broader plan is still on track – to make product data submissions at least as important as electronic documents (and ultimately the priority/default) in all communications with the regional Regulator.

In other words, whatever tweaks companies and software vendors make to their roadmap and immediate capabilities over the next few months, these should not detract from – nor dilute – the overarching plan to get regulated product data in complete, consistent and up-to-date order.

That’s so that when it is time to migrate to – and go live – with an optimized new regulatory information/submissions management platform or system, there is comprehensive, compliant, high-quality data ready to run across it.

And of course, the foundational work done now will stand companies in good stead for when other regions across the world implement their own take on IDMP – given that dynamic data exchange is where all of this will go, globally, in due course.

What’s changed, and how does it affect you?

To recap what’s changed in the interim, EMA’s Digital Application Dataset Integration (DADI) interface – a project that has been evolving alongside IDMP and SPOR (see https://esubmission.ema.europa.eu/ for more information) – will in the short term, from April 2023, serve as the means for entering electronic application forms through interactive web forms.

This will enable companies to pull data from the PMS (data that has been migrated from XEVMPD and other EMA databases) and to generate both the human-readable PDF format and the machine-readable XML format. Both formats will be submitted along with the eCTD; this part of the process will not change.

This latest development is an important step toward the IDMP goal of reusing data from PMS, and the first step toward the IDMP standardization of data. EMA will support this approach for variation forms only at this point, extending it to initial applications and renewals later.

Ultimately, EMA’s plan is for standardized medicinal product data currently held in the xEVMPD database to be enriched for the PMS, where fuller, more granular, IDMP-compliant medicinal product detail will be kept and updated over time.

There are some practical challenges still to be worked out, such as how IDMP detail that is currently missing will be added to PMS, and how internal company RIM systems and EMA’s PMS database will get to a point of being able to exchange data more seamlessly without requiring manual data re-entry. But for now, this is a chance for companies to update and correct their data against EMA’s dictionary through the familiar XEVMPD process. (Ultimately, FHIR – the global industry standard for passing healthcare data between systems – will support more dynamic data exchange/syncing between companies’ RIM systems and EMA’s PMS.)
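
For illustration, the standard FHIR REST search pattern for retrieving medicinal product data looks like the sketch below, in Python. The base URL is a placeholder assumption – EMA’s actual PMS interface details are not defined here – and MedicinalProductDefinition is the resource type recent FHIR versions use for this kind of data.

```python
# A minimal sketch of a FHIR REST search against a hypothetical server,
# following the standard pattern: search requests return a Bundle resource.

import requests

FHIR_BASE = "https://fhir.example.org/r5"  # placeholder FHIR server URL

response = requests.get(
    f"{FHIR_BASE}/MedicinalProductDefinition",   # FHIR resource type
    params={"name": "Examplozole"},              # illustrative search parameter
    headers={"Accept": "application/fhir+json"},
    timeout=30,
)
response.raise_for_status()

bundle = response.json()  # FHIR searches return a Bundle
for entry in bundle.get("entry", []):
    resource = entry["resource"]
    print(resource["resourceType"], resource.get("id"))
```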

Rather than play for time, here are 4 opportunities that the interim DADI move makes possible, as well as 5 next steps that companies should take to stay on track with their data preparations:

4 benefits to exploit

  1. The use of the DADI interface for getting data from the EMA PMS allows life sciences companies and software providers to take a breath as they prepare for full-scale IDMP implementation and compliance. FHIR-based submissions via API have been pushed back for now (this will still happen, just not within the next year).
  2. The industry is now less dependent on immediate technology changes. There is no need for companies’ RIM systems to support DADI yet, as data won’t flow directly between RIM records and EMA’s PMS at this stage.
  3. EMA’s roadmap allows for implementation to happen in manageable chunks. The ‘DADI first’ approach allows for Product (PMS) data re-use, starting with variation forms, which account for the largest proportion of regulatory submissions.
  4. This is a chance to reset or adapt IDMP/regulatory data strategies, catch up, and prepare to deliver maximum benefits and efficiencies from the preparations (e.g. by doing sufficient groundwork to enable a confident system migration, when the time comes).

5 things to do next, for pharma companies

  1. Set or re-set your strategy and position around regulatory, structured data.
  2. Collect and assess product data and prepare it for compliance (scoping and getting stuck into any data enrichment now) – so that it addresses the granularity of IDMP requirements and maps to EMA’s dictionaries/vocabularies (see the sketch further below).
  3. Prepare to support xEVMPD e-submissions based on the new data model and all of the levels of detail that are expected, to be ready for the future and to enable a rapid transition to IDMP.
  4. Improve your ability to respond and adapt quickly to further changes to regulatory requirements. EMA’s switch to using DADI to submit data to the PMS highlights just how swiftly the roadmap can change, and why an Agile approach to project management is so important.
  5. Start to migrate your content into the new target system as soon as possible. If you have started with the collection of data in Excel files locally, this data could become outdated if not maintained. Don’t leave thoughts of migration until the last minute. Plan for this now, as part of your overall scoping work.

To get the most from your IDMP system migration, or to discuss your best route to IDMP data preparation as you plan for this, please fill out the contact form below and we’ll put you in touch with our experts.
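
As a small illustration of the enrichment scoping in step 2, the sketch below checks how much local data already maps to controlled terms and flags what still needs work. Both the local terms and the mapping table are illustrative assumptions, not actual EMA vocabulary entries.

```python
# A minimal sketch of vocabulary-mapping coverage: which local dose-form
# values already map to a controlled term, and which still need enrichment.

LOCAL_TO_EMA_TERM = {          # illustrative mapping table
    "tabl.": "Tablet",
    "caps": "Capsule",
}

local_dose_forms = ["tabl.", "caps", "oral sol.", "tabl."]

mapped = [t for t in local_dose_forms if t in LOCAL_TO_EMA_TERM]
unmapped = sorted({t for t in local_dose_forms if t not in LOCAL_TO_EMA_TERM})

print(f"Coverage: {len(mapped)}/{len(local_dose_forms)} values already map")
print("Terms still to be mapped/enriched:", unmapped)
```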

About the Author

Karmen Umek Luzar followed her Bachelor of Organisational Sciences in IT with a Master of Science in Information Management at the School of Economics and Business at the University of Ljubljana in 2005. Karmen started her professional career in Life Sciences in 2004 as a Consultant for INFOTEHNA/Amplexor Adriatic. In the course of her career at Amplexor she managed interdisciplinary, agile teams of 15 people; she is highly skilled in writing state-of-the-art technical documentation and has a vast knowledge of software validation. Karmen has 20 years of experience in developing and implementing IT solutions in the Life Sciences industry, focused on RIM, Submission, XEVMPD and IDMP. She joined the Life Sciences Team at fme AG as a Senior Consultant in March 2022.

Beware the inflated promises of AI in accelerating data migration

In many cases, this means updating or establishing new systems and migrating huge volumes of content across to the new environment – and supplementing or enriching this data in the process so that it better meets ongoing needs and aligns with IDMP controlled value lists. Effective migration is likely to involve locating and transferring information from hundreds of thousands of content files currently residing in rudimentary file shares, where a lot of potentially valuable data exists in unstructured form within single-use documents.

Given the scale of the task before them, and the scarcity of spare capacity to oversee the work manually, it is easy to appreciate why Regulatory teams and supporting IT departments might look to artificial intelligence (AI) as a means of expediting the data extraction and enrichment process, as companies look to convert unstructured information into searchable and re-usable data assets in the new target system.

Managing expectations

Certainly, AI specialist tool and service providers have made some pretty lofty promises about the technology’s potential, accuracy, and scope. With training, they say, machine learning solutions can hit 95 per cent accuracy in finding, identifying, tagging and lifting the information that is needed from commonly used documents and other unstructured content sources. To an overstretched RA team drowning in an ocean of material, spanning metaphorical warehouses and continents in its product coverage, this promise of reliable task automation is undeniably appealing.

But – and there is a huge caveat here – 95 per cent accuracy, even if attainable, is still too risky for validated use cases, such as regulatory submissions preparation and management. The trouble with AI algorithm performance monitoring is that it is all based on statistics and trends: details of where it is doing well or less well are much vaguer. In other words, while 95 per cent overall accuracy might sound impressive, the margin of error remains all too great if no one can be quite sure where any gaps or errors are arising. And if humans have to go through everything to check, any time and labor saving to this point will have been for nothing.
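
To put numbers on that risk, here is a small worked example; the volumes are illustrative assumptions, not measured figures.

```python
# A worked example of why "95 per cent accuracy" is risky for validated use.

documents = 100_000          # files to process in a migration (assumed)
fields_per_document = 10     # metadata values extracted per file (assumed)
accuracy = 0.95

extractions = documents * fields_per_document
expected_errors = extractions * (1 - accuracy)
print(f"Expected wrong values: {expected_errors:,.0f} out of {extractions:,}")
# -> 50,000 errors, and the statistics do not say WHICH 50,000 they are.
# To find them, a human would still need to review all 1,000,000 extractions,
# erasing the time saved by automation for a validated process.
```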

Don’t despair: it’s not a case of all or nothing

This needn’t be cause for outright disillusionment, though. For one thing, there are rules-based processes that provide more predictability than AI, and these can be used instead to take on much of the legwork while retaining the assurance of human quality control.
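
By way of contrast, here is a minimal sketch of a rules-based extraction step: a deterministic pattern either matches or it does not, so every non-match can be routed to human QC. The file naming convention is an illustrative assumption.

```python
# A minimal rules-based sketch: deterministic pattern matching with explicit
# fallback to human review, instead of a model's silent best guess.

import re

# Assumed convention: <country code>-<procedure number>-<document type>.pdf
PATTERN = re.compile(r"^(?P<country>[A-Z]{2})-(?P<procedure>\d{4,6})-(?P<doctype>[A-Za-z]+)\.pdf$")

filenames = ["DE-12345-SmPC.pdf", "FR-98765-Label.pdf", "scan_final_v2.pdf"]

for name in filenames:
    match = PATTERN.match(name)
    if match:
        print(name, "->", match.groupdict())       # predictable, auditable result
    else:
        print(name, "-> route to human review")    # no silent guess
```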

Meanwhile, AI tools and techniques can play a useful part in non-validated content management – for example, for enriching/adding metadata to archived content which is no longer used in live submissions, but which has to be retained (e.g. for anything from 10 to 25 years) for compliance reasons. Here, smart automation offers a way to breathe new life and value into legacy records, rendering them more immediately searchable and useful. If, as part of an AI-driven data enrichment/meta-tagging exercise, 5% of the content is missed or indexed incorrectly, someone can perform a manual search or manual checks without any risk to submissions performance, marketing authorization status, or patient safety.

As ever, it’s a case of horses for courses, and for now AI promises more than it can deliver for validated regulatory content migration purposes. But that doesn’t mean there isn’t an alternative to sheer manual graft, and you can count on fme to harness the most effective tools and processes for each project.

About the Author

After graduating from TU Darmstadt (Germany) in 1999 with a Master of Science degree in Electrical Engineering, Markus started his career in the world of Enterprise Content Management as a Software Developer for Documentum applications. He has been working as a consultant for Documentum applications in the pharmaceutical industry since 2003. Markus joined fme AG in 2006, where he was initially active in many areas of the ECM world. Since 2013, he has specialized in the area of data and content migration. Markus speaks English fluently; German is his mother tongue. His role is Principal Consultant for Migration Services within the Life Sciences business unit at fme AG. Markus’ daily business is the implementation and execution of Migration Services in validated environments. He is also an experienced validation expert for Migration Services and possesses Life Sciences process expertise.

Content consolidation: what do you really need from a unified DMS?

It could be the accelerating pace of technology change, which has created a risk of systems being unsupported and left behind. Or perhaps it’s the external market and its growing demand for agility and deftness – which is hard to achieve when critical content is strewn across the global organization – that is fueling the desire for transformation.

The chances are, it’s a combination of these scenarios that’s triggering the change now being considered.

Calculate what you have now – and where you want to get to

A successful DMS consolidation or content-based digital transformation project starts with understanding the primary business objectives and the strategic emphasis of the initiative.

To help scope the project, think carefully about – and/or seek professional help to determine – what you’re trying to achieve and why, and therefore who needs to be involved in any decision-making.

This process will also help with defining the project parameters, and in identifying and excluding data and content that doesn’t need to be moved across to the new, modern system or platform. In other words, it will help sift out the information and ‘paperwork’ that can be deleted, archived in a cheaper system, or left where it is. Trying to move everything across to the new set-up could waste valuable time and budget, without delivering any benefit – especially if that content has little in the way of metadata/smart tagging to aid future filing and rediscovery.
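
For illustration, here is a minimal sketch of such triage logic in Python; the attributes and thresholds are assumptions to be agreed with the business, not fixed rules.

```python
# A minimal sketch of content triage: route each item to migrate, archive,
# or delete. Attributes and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Item:
    name: str
    age_years: int
    has_metadata: bool
    retention_required: bool

def triage(item: Item) -> str:
    if item.retention_required and not item.has_metadata:
        return "archive"        # keep cheaply; not worth enriching for the new system
    if item.retention_required or item.age_years < 5:
        return "migrate"
    return "delete"             # out of retention and no ongoing business value

for item in [
    Item("SOP-001.docx", 1, True, True),
    Item("old_scan.pdf", 12, False, True),
    Item("meeting_notes_2009.doc", 14, False, False),
]:
    print(item.name, "->", triage(item))
```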

Categorize your content

Typically we group a company’s documentation into four main categories:

  1. Operational documents (e.g. system-supporting documentation and project documentation);
  2. Organizational documents (e.g. policies, procedures and SOPs);
  3. Historical documents (which could span categories 1 and 2): documents that are no longer current and are being retained primarily for compliance reasons;
  4. ‘Unknown’ documentation (falling under any of the above categories): potentially the result of documents having been incorrectly stored or labelled, or inherited as part of a company acquisition.

Understanding the current format of all of this content will be useful too – what proportion is in paper format with wet signatures, for instance; and, where there are scanned PDFs stored in file-shares, are these viable as primary records?

Be ruthless in deciding what to transfer

As teams classify their content and establish its current state, they will begin to build a picture of the documentation’s relative importance. This in turn will help inform the requirements of the new centralized system/unified platform, and – by extension – the preparation and migration work that will be involved in cleaning up and de-duplicating content; checking or adding metadata; and migrating everything to the new set-up.

By sorting documentation from across the organization into formal/less formal/informal content, and quantifying it, companies will start to gain clearer insight into the new system capacity they will need (both now and in the future), and how much time and budget to allow for the content verification, preparation and migration work.

Understanding the role and relative importance of each category of content will also help inform any automated treatment of information and documents in the new system – in keeping with data protection/records retention policy enforcement and tracking – across the lifecycle of each asset.
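
As a small illustration, the sketch below derives a review/destruction date from a category-level retention period; the periods shown are illustrative assumptions, not regulatory values.

```python
# A minimal sketch of category-driven retention handling, assuming each
# category carries an agreed retention period.

from datetime import date

RETENTION_YEARS = {
    "operational": 5,
    "organizational": 10,
    "historical": 25,
    "unknown": 10,      # assumed default pending proper classification
}

def review_year(category: str, created: date) -> int:
    """Year in which the record becomes due for review/destruction."""
    return created.year + RETENTION_YEARS.get(category, RETENTION_YEARS["unknown"])

print(review_year("historical", date(2015, 6, 1)))    # -> 2040
print(review_year("operational", date(2021, 3, 15)))  # -> 2026
```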

Setting expectations, identifying potential solutions

With a clear idea of the scope and scale of content migration requirement, as well as the long-term capacity and capabilities required of the new system, the process of going out to tender should be much more streamlined – because the business will have a good grasp of what a fit-for-purpose solution should look like.

But none of this will guarantee a perfect match. To achieve a streamlined single port of call and source of truth for company content, companies must also go in with realistic expectations and an understanding of what they may need to give up in return (such as obsolete legacy investments and bespoke, in-house systems).

In sacrificing and writing off older capabilities, companies will be in a better position to benefit from smarter integration; modern, agile project management options; and the opportunities to be inherently more ‘data driven’ and conformant with the latest industry regulations.

Meanwhile, awareness of, and provision for, emerging and longer-term requirements will be vital to securing a futureproof new setup. This includes ‘cloud readiness’ if companies still aren’t quite prepared to make that leap today with their new platform (a scenario which is much rarer now).

Last but not least, successful project delivery will depend on all relevant business stakeholders and subject-matter experts being included on the transformation journey from day one. As well as maximizing buy-in and acceptance of the transition, this will ensure that processes can be optimized. If, today, there are multiple incompatible processes for managing regulatory registrations, for example, the teams involved can discuss and work towards commonality, so that all are able to benefit fully from the new centralized content resource.

As ever, preparation is everything in ensuring successful project delivery and our experts are on hand to advise on any aspect of this critical scoping work as companies look to a more dynamic content management future.

About the Author

Ian Portman joined fme as a Managing Consultant in January 2022 with over 20 years’ experience in Information and Records Management, Technical Authoring and Quality. Following a career in the Armed Forces (Army), Ian worked in the Media, Manufacturing, Technology, Utilities, Government and Commercial industries. Ian’s last role was with Seqirus, where he was the Business Migration Lead responsible for the migration of Regulatory data from SharePoint and Documentum to Veeva. While at GSK, Ian was a Charities Ambassador working with adults with learning difficulties and with Help for Heroes, providing individuals with work experience and helping them find work placements.

The painful cost of overestimating data quality in migration

Given the critical role of data as part of high-priority digital transformation programs (something Ian Crone blogged about recently), it’s surprising – alarming, even – how much is assumed about it as companies go into new technology projects. Until they know differently, IT project teams and their business leads tend to operate on the premise that the chosen new Regulatory Information Management (RIM) system will transform the way they work and deliver the desired outcomes. The corporate data it draws on is more of an afterthought. No one questions its quality or complexity – until it’s too late.

It’s this risky assumption that must be challenged, something that should happen much earlier along the project timeline – long before any system/platform implementation has been set in motion, and before any deadlines have been agreed. The inherent risk, otherwise, is that the implementation project will have to stop part way in – when it becomes obvious that the data sets involved are not all they could or should be.

Identifying discrepancies

Often, it’s the smallest discrepancies that cause the biggest hiccups in data consistency. Simple inconsistencies, such as using different abbreviations for the same compound, or misspelling a product name, can result in duplicates appearing in a system. More complex issues occur when the project is linked to IDMP preparations, whether as a primary focus or as a secondary benefit: there may be fields yet to be completed, or fields that require content from other sources. Multiply this up by potentially millions of data points and you see the risk.
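
To show how such near-duplicates can be surfaced, here is a minimal sketch using normalization plus fuzzy matching; difflib is standard library, and the similarity threshold is an illustrative assumption to be tuned per data set.

```python
# A minimal sketch of duplicate detection over inconsistently entered names.

from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase, strip dots, and collapse whitespace before comparing."""
    return " ".join(name.lower().replace(".", "").split())

names = ["Examplozole 10 mg Tab.", "examplozole 10mg tab", "Samplytriptan 50 mg"]

for i in range(len(names)):
    for j in range(i + 1, len(names)):
        a, b = normalize(names[i]), normalize(names[j])
        score = SequenceMatcher(None, a, b).ratio()
        if score > 0.85:   # assumed threshold for "probable duplicate"
            print(f"Possible duplicate ({score:.2f}): {names[i]!r} vs {names[j]!r}")
```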

It could be that multiple different RIM systems are being consolidated into one new one. As each system is trawled for its constituent information, consideration needs to be given to differing formatting, data duplication and variability in data quality. A myriad of dependencies and checkpoints between systems must be managed to ensure the success of the content migration project.

Inevitably there will be interdependencies between the data in different systems too, and links between content (between source RIM data and stored market authorisation documents, for instance). All of this needs to be assessed, so that project teams understand the work that will be involved in consolidating, cleaning and enriching all of the data before it can be transferred to and rolled up in any new system.
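
As a small illustration of assessing those links, the sketch below verifies that each document referenced from RIM records actually exists in the document store; the record shapes and identifiers are illustrative assumptions.

```python
# A minimal sketch of a cross-system link check ahead of migration.

rim_records = [
    {"id": "REG-001", "linked_documents": ["DOC-100", "DOC-101"]},
    {"id": "REG-002", "linked_documents": ["DOC-200"]},
]
document_store_ids = {"DOC-100", "DOC-101"}   # e.g. an ID export from the DMS

for record in rim_records:
    broken = [d for d in record["linked_documents"] if d not in document_store_ids]
    if broken:
        print(f"{record['id']}: broken links to {broken} - resolve before migration")
```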

The sobering costs of project recalibration

If a system implementation is already underway when data issues are identified, project teams must recalculate and recalibrate, which can incur significant cost and effort. Before they know it, a project that was scheduled to take a year needs an additional three months to clean up and enhance the data.

Processing change requests will require key resources that are now committed elsewhere – not to mention additional budget that hasn’t been provided for (a 25%+ hike in costs is not unusual as data quality issues are identified). Meanwhile, there are users waiting for capabilities that now aren’t going to materialize as expected: delays are inevitable. Data is critical to the business, and without the right quality of data, a new system cannot go live.

All of this could be avoided if the data migration implications of a project were analysed, assessed, understood and scoped fully ahead of time. The good news is that this oversight is relatively easy to rectify – for future initiatives at least. It’s just a case of calling in the right experts sufficiently early in the transformation definition and planning process, so that the appropriate analyses can be performed.

More on our Content Migration Assessment

About the Author

Peter Reynolds brings 20 years of global Life Science/Pharmaceutical vertical experience, understanding needs, requirements and processes to deliver business value through mission-critical, enterprise-level software and services projects, both on-premise and SaaS. After an early career in content/document management projects, often using Documentum, within the financial services sector, his focus shifted to Life Science-specific knowledge across the Regulatory, Clinical, Safety and Medical Information domains. He has worked with everyone from small ‘virtual’ biotechs right up to the Top 10 global pharmaceutical companies and looks to bring this knowledge to fme’s European clients.

Give data migration its own RFP

In this context, and to improve their own agility and responsiveness, Research and Development (R&D) organizations have now set out two areas of renewed interest for strategic focus and investment. One is biobanking, as the demands around clinical trial sample storage soar. The other is the need to strengthen data assets, as the ability to apply these confidently and swiftly across all kinds of regulatory processes becomes crucial to speed to market.

The data imperative – which surrounds the integrity, quality and traceability of regulated product data – concerns all pharma organizations. The driver for change could be a regulatory information management (RIM) consolidation project following a merger; it might be an initiative to standardize on IDMP fields and vocabularies; or an attempt to bring new traceability to medical device or cosmetics manufacture. But, too often, companies set the wheels in motion and start to implement a new business solution, before they consider the work that might be necessary to vet and prepare data so that it can be migrated reliably into the target IT system and/or new data model.

Choosing the new system first could be putting the cart before the horse

In some cases, project owners assume that any matters relating to data assessment, preparation and migration will be taken care of by the new system vendor, and that this will be addressed as part of their proposal. It’s only when the analysis phase of the project begins – and the realization dawns that the incoming data is messy, conflicting or incomplete – that they begin to understand that they have underestimated, and skimped on, this critical cornerstone of a successful delivery.

Instead of expecting software vendors (which typically lack the depth of data experience) to provide for data assessment, cleaning and enhancement as vital preparation work ahead of any data migration, the only real way to make a proper job of it is to itemize it separately – in other words, break out this work with a separate request for information (RFI), or request for proposal (RFP).

Avoiding the tough decision of compromise vs bill shock

By separating out data-specific activity, companies will also save themselves from any bill shock as vendors are forced to bring in specialist partners to rescue a project – at short notice, and with their own mark-up on the extra costs. If the parameters are known much earlier in the project cycle, the data preparation and migration work can be more accurately planned for and integrated more seamlessly into the overall deployment – with much less risk of the project overrunning or exceeding its budget.

It’s one thing to prioritize cost when sizing up vendors for a new system project, but if this introduces new risk, because the required specialist skills and resources have not been allowed for, it is a false economy. Certainly, the work could end up costing a lot more and taking a lot longer if critical data preparations turn into last-minute firefighting. Far better to have the right skills cued up from the outset, with a clear remit which includes responsibility for servers, security and more during the data preparation and migration phase.

In 2022, a whole range of digital transformation drivers – including IDMP compliance preparations and improved traceability – will see new system implementation projects and associated data migration initiatives increase. To maximize success, it is definitely advisable to separate out your data requirement and prioritize it from the outset, so that any system project builds on solid foundations.

More on our Life Sciences Portfolio

About the Author

Ian Crone became the Business Unit Director for the fme Life Sciences Team in Europe in October 2021. He has a background in Chemical Engineering, the subject of both his degree at Strathclyde University in Scotland and the first fourteen years of his career at Unilever. Over the two decades since, prior to joining fme, Ian was involved in the Chemical and Pharmaceutical industry from every angle of drug development, specialising in NMR, IDMP, RIM, medical devices and regulatory solutions for some of the largest Life Sciences and Pharmaceutical companies around the globe. He has worked on both the industry and technology supply sides, in roles held at Oxford Instruments, BioStorage Technologies, Samarind and Amplexor. Ian enjoyed living and working internationally during his consulting career and brings more than 20 years of experience in the Life Sciences industry to his new role at fme.