BoxWorks – From the perspective of a Life Sciences Consultant

Let me start with a quick summary of the highlights:

Box Feed (available as Beta)
Box Feed gives you a stream of the activities happening in the shared folders a user can access. This is a nice collaboration function, as you see the content of your coworkers directly in the stream. This has been missing for a while, and Box is now delivering a first iteration, including the possibility to comment on the documents.

Activity Streams and Recommended Apps (2019)
With this feature Box now tracks the way content flows through other cloud systems, so that co-workers can follow all the steps a document has gone through along the way. For example, if you add a Box document to an account in Salesforce and afterwards sign it with DocuSign, Box users will see these actions as activities, with the possibility of directly accessing the content or the associated activity within these applications. Integrations with custom applications are possible here as well.

Box Automation (2019)
Box Automation is a workflow/rules engine which will allow business users to automate content processes directly within the Box platform. This new functionality opens up a lot of possibilities, from simple workflow approvals to slightly more complex processes. Certain events can trigger the flow of your content, and you can select from a number of possible actions. There are currently some mockups available, but no beta version yet.

Box for G Suite (available as Beta)
Box is one of the first cloud providers to integrate with the three big content creation and editing platforms. It already has deep integration with Microsoft Office 365 and Apple iWork, and with this latest addition now with G Suite as well, allowing users to pick their favorite tool for content editing.

Box Skills / Box Custom Skills
Box Skills allows you to analyze and process content files. It consists of a UI part to display metadata, and it also allows service providers to create custom skills. Custom skills can extract information from audio, image and video files. An SDK will be made available by the end of the year which will allow you to extract and pull in any additional information as skill data, which can then be used for metadata and content retrieval.

Box Shield (later in 2018)
Box improves the security of the platform even further with the new Box Shield. For example, it allows you to prevent downloads of content with a certain classification, so that you can configure that only a defined set of users can access a document. It can also detect downloads of a large number of folders and documents at once and generate an alarm, or detect a login attempt from another country.

Other product enhancements
Box has added the ability to activate 2FA for external users. This is very important for larger companies to ensure enhanced security.

These are all great services and enhancements; together they create a global collaboration and work platform, inter-connecting the activities and systems of daily importance to the users.

But how does that fit into core Life Sciences processes?

At the conference there was a separate track with several Life Sciences sessions, which gave a good glimpse of the current situation. One of the big items was the GxP Validation package for Box that was announced at the beginning of the year (Box GxP Validation). This service package turns any Box instance into a validated platform which can host validated content management applications. Whereas non-regulated processes have been supported before, this now additionally allows for the coverage of regulated processes. More specifically, this package covers the following aspects:
  • Audit – Quality Management System: Box QMS Documentation and SOPs built on GAMP5 and ISO9001 standards
  • Validate – Validation Accelerator Pack (VAP): Validation lifecycle documentation and tools to make the Box instance GxP-compliant
  • Maintain – Always-on Testing: Daily automated testing reports on nearly 150 tests of Box functionality at the API layer; creates daily reports and artifacts for customers and audits
This definitely provides a good starting point for deeper use of the platform in the Life Sciences industry, but it will only be of value in combination with robust regulated document management functionalities. In this regard, the message from Box is clear: Box provides the hosted core platform; industry use cases and additional business value through extended functionality and applications are in the hands of the customer and of Box technology partners.

Compared with OpenText Documentum's content management capabilities there are still some gaps, such as the management of document renditions (specifically PDF formats) or the management of virtual documents and relationships. But with the API and web services, the spectrum of possible implementations (and, where needed, "workarounds") is large and highly extensible.

Additional benefits can be achieved with smaller effort by simply adding Box on to the core platforms: as a publishing platform for Effective documents; as a collaboration platform for In-Progress documents, internally but also with external partners (taking advantage of the integrations with core business applications); or to leverage the Skills framework to classify documents before processing them within your content management platform (thereby relieving your business users of some of these classification tasks). Several of the vendors at the conference who are involved with the Life Sciences industry discussed and demoed these kinds of integrations to increase the value of their existing applications.

Overall, Box offers a great set of functionality and is creating a widely integrated work environment for users: a foundation for the "Future of Work". It does not replace a feature-rich regulated content management platform such as OpenText Documentum at this point, but it does offer a great platform for integrating business applications.
We are excited to be able to support our clients who are leveraging this platform. Our first focus area is content migrations, and we are happy to announce that an extension of our Box Importer for migration-center will be available in Q4, with newly added key functionalities such as custom metadata, versioning and security support.
ECM in the Life Sciences Market: Bringing the Cloud Discussion Down To Earth

Clearly, the opportunity for Life Sciences companies to outsource their IT infrastructure and some related services is a derivative benefit of utilizing Cloud. In addition, subscription pricing, synonymous with Cloud solutions, moves CapEx to OpEx, and the financial benefits of doing so appeal to many companies. There is also the potential benefit (yes, we’ll get back to that word “potential” later) of native-cloud applications providing what I’ll call a comprehensive set of functional capabilities that serve the end user well and are supported by a non-intrusive (“you’ll not even notice the upgrades”) platform architecture. If that wasn’t enough, who wouldn’t be open to, and seduced by, the thought of fewer application-to-application integrations and instant-on new feature capabilities, all for just a few dollars per user per month? I’m in! Sign me up!

I’m in! Sign me up! – OK, back to reality everyone

Ok, back to reality everyone. Don’t get me wrong, the move to native cloud solutions and the benefits of doing so for ECM solutions are going to be significant. But it’s likely going to take much longer than promised. Why? Because this is not just a change to our ECM solutions; it’s much broader than that. For me, the exciting part of ECM moving to a Cloud-based model is that software vendors are taking a much broader view of the end-to-end business needs of their clients. The process of developing native cloud applications allows these vendors to reassess what business problem needs solving and to design accordingly.

A good example of this is the interdependency of document management and regulatory submissions/RIM. In the past, the submission and tracking business process was supported through, at minimum, three to four disparate standalone applications, all with significant integration requirements. On a global scale this was often repeated on a country-by-country basis. The native Cloud application approach allows much of this complexity to be significantly reduced. The outcome? Faster, better, cheaper!

My concern is that we’re still in the “hype-cycle“-phase

So why am I so cautious? My concern is that we are still in the “hype-cycle” phase and some of the key desirable outcomes presented in the rationale for adopting a Cloud/SaaS solution are not realizable for many companies. Sure, there is great potential, but the business processes supported in current so-called legacy solutions are complex and still require integration and a substantive, robust data model. That requires a degree of maturity in a solution that takes time to develop. I certainly recommend assessing the value for a company of adopting a Cloud/SaaS ECM solution, but make sure you undertake the necessary due diligence before you make the move. We’re talking about materially important business processes that are supported by ECM, and failure in these areas by switching too early can have far-reaching consequences.

At fme we engage regularly with clients, large and small, who are considering the move to a Cloud solution. We also work closely with the vendors of these solutions and with those third-party vendors whose business applications form part of the eco-system required to support the broader business processes. The good news for clients is that there are now credible options for native-Cloud SaaS solutions. These solutions take the maturity and reliability of the so-called legacy platforms and offer it in an integrated, Cloud-leveraged configuration, in a subscription pricing model. Arguably the best of both worlds.

The next 18 months will be fascinating as we move beyond the hype-cycle phase and into a period where practical experience of the solution options plays a greater role in determining the path forward. I’m looking forward to walking that path along with you, our clients! If you’re interested, please contact me to discuss this further.
Why OpenText Documentum products and Cloud Computing are complementary!

Compared with cloud computing technologies, which excel at providing elastic (scalable) services, OpenText Documentum products can appear inflexible, monolithic, layered applications. Although they seem to be the exact opposite of the flexible microservice architecture approach used for cloud-native application design, there are ways to combine OpenText Documentum products with cloud computing technologies. Here are some examples to give you an idea of how to achieve more flexibility by breaking either the OpenText Documentum base installation or the business logic into smaller services.

Infrastructure Services

A classic OpenText Documentum Content Server installation could be split into four services / Linux containers:
  • The Docbroker
  • The repository services
  • The Java Method Server (serverapps.ear)
  • The Accelerated Content Services (acs.ear)
It is obvious that the Docbroker does not have to deal with much traffic and does not need any load balancing. Thus one or two instances at most, complemented by appropriate health-check monitoring, are sufficient to provide robust failover. For the repository services, the Java Method Server and the Accelerated Content Services, two instances each are sufficient to provide a quite robust service.

However, at some point you might want to perform a data migration of many documents into your repository, for example. In this situation, you might think of hiring our highly skilled content migration team. During the migration period, especially on systems that utilize the Java Method Server heavily, exactly this service would become a bottleneck, while all other server components could easily handle the migration load.

And here things become interesting: if you had used the service architecture described earlier and had been utilizing an orchestration tool, you would be able to request two additional Java Method Server instances on demand within minutes. The orchestration tool automatically creates two more instances, which are automatically proxied via a service endpoint. All upcoming migration requests are then spread over all existing instances, providing a good migration experience. Once the migration is finished, you can scale the number of instances back down.
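The temporary scale-up and scale-down steps can be sketched as a thin helper around an orchestration CLI. This is only a sketch: the service name dctm-jms and the use of Docker Swarm's docker service scale command are assumptions; with Kubernetes or another orchestration tool the command would differ, but the idea is the same.

```python
import subprocess


def scale_service(service_name, replicas):
    """Build the Docker Swarm command to scale a service to the given replica count."""
    return ["docker", "service", "scale", f"{service_name}={replicas}"]


def apply(command):
    """Run the scaling command; requires a Docker Swarm manager node."""
    subprocess.run(command, check=True)


# Before the migration window, request two extra Java Method Server instances:
#   apply(scale_service("dctm-jms", 4))
# ...and scale back down once the migration has finished:
#   apply(scale_service("dctm-jms", 2))
print(scale_service("dctm-jms", 4))
```

The same two calls, wired into a migration runbook, give you the on-demand elasticity described above without touching the other Content Server components.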

Business Logic Services

If you are using OpenText Documentum D2, for example, and have potentially “heavyweight” logical services like watermarking, you can turn them into real services (microservices) and connect them to service discovery tools with load-balancing and failover-aware clients (e.g. the Netflix OSS stack: Eureka, Ribbon, Hystrix). With this option, the watermarking service becomes scalable and flexible for future needs and can be placed on, or moved to, dedicated computing resources as required. If this service is later identified as a bottleneck, or if a one-time event such as a submission to an authority creates temporary load, you can instruct your orchestration tool to create additional instances of the same service and scale it down again after the event has finished, in order to use your hardware resources efficiently.
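The client-side load-balancing part of such a setup, the role Ribbon plays in the Netflix OSS stack, can be sketched in a few lines. The instance URLs below are hypothetical; in a real deployment they would come from a service registry such as Eureka.

```python
from itertools import cycle


class RoundRobinClient:
    """Minimal client-side round-robin load balancing over service instances
    (a sketch of the role Ribbon plays in the Netflix OSS stack)."""

    def __init__(self, instances):
        if not instances:
            raise ValueError("at least one instance is required")
        self._instances = cycle(instances)

    def next_instance(self):
        """Return the base URL of the next instance to call."""
        return next(self._instances)


# Hypothetical watermarking service instances, as a registry might report them
client = RoundRobinClient(["http://watermark-1:8080", "http://watermark-2:8080"])
print(client.next_instance())  # http://watermark-1:8080
print(client.next_instance())  # http://watermark-2:8080
```

When the orchestration tool adds a third instance, refreshing the instance list from the registry is all the client needs to start spreading load over it.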

Conclusion

Do not be afraid to use the best of both worlds! We will support you in combining both technologies and delivering the best results to you:
  • Analyze your existing OpenText Documentum infrastructure architecture
  • Analyze your existing OpenText Documentum software architecture
  • Create a roadmap with you on how to make your OpenText Documentum stack cloud computing-ready
  • Create best practices on how to create future components inside your existing OpenText Documentum stack to make them elastic and to comply with the concepts of cloud computing
  • Move application logic (where applicable) into elastic Microservices
We are looking forward to sharing our expertise with you!
Traditional or cloud-native? Why not something in between?

Your on-premise cloud options at a glance

Starting from complete use in their own data center (on-premise), companies have the following options:
  1. Cloud storage: adding storage capacity from the cloud to your application.
  2. Re-platforming
    1. Lift’n’Shift: i.e. virtualizing or containerizing the application and hosting the virtual machines or containers with a cloud provider. Provided that the containers are orchestrated accordingly, they can of course also be hosted in one’s own data center.
    2. Lift’n’Extend: i.e. containerizing the application and enhancing it functionally with other cloud services, or creating new cloud-native functions. It is also possible to develop individual elements of the application, e.g. the clients, in a new and cloud-native way and to link them to the backend. Such solutions are commonly referred to as hybrid cloud applications; not to be confused with hybrid cloud concepts.
  3. Re-Factoring: i.e. redeveloping the application on a PaaS stack as a cloud-native application. If the solution is based on a commercial basic product, such as a DMS that is not cloud-compatible, this method is not possible.
  4. Use a new solution from a SaaS provider.

Containerization as an ideal middle way between traditional and cloud-native

Many organizations, for example, are facing the decision either to continue operating DMS-based solutions at great expense or to choose option 4, i.e. to switch to a SaaS provider. However, the options listed under 2 offer an ideal middle way with containerization. The advantages of containerization have already been described in detail in the fme blog post “How can Linux containers save your day?”. Our “Distributed Environments With Containers” data sheet provides a more technical insight into containers. In particular, the use of containers in validated environments – so critical for the life sciences industry – is a prime example of the additional advantages offered by container technologies.

Your container advantages at a glance

  • Proven software and applications remain in use
  • The user interface and user guidance remain the same
  • Faster, less error-prone, automated processes can result in faster application deployment and lower validation costs
  • Agility in projects increases
  • Upgrade paths are simplified: entire migration environments can be containerized, even with different host operating systems that previously required elaborately installed, dedicated virtual machines
  • The possible applications are manifold!

Strong expertise from fme’s independent cloud, container and migration experts

The containerization experts from fme disassemble the basic product, pack it into containers and reassemble it in your data center or at a cloud provider, such as AWS, to form the new basic application. For some products, fme has already built ready-to-use containers that can be rolled out quickly and easily. Subsequently, configurations and, if available, customizations are applied. The new system is then populated using the fme Migration Services, creating a 1:1 copy. What sounds so simple requires a high level of expertise and is associated with costs that are more than justified when one considers the advantages of containerization.

A useful example of Lift’n’Extend (2b): Connecting Alexa to a containerized DMS application

An example of extending a containerized DMS application with native cloud services can be seen impressively in the YouTube video “Showcase: Alexa, please open Documentum!”. For this purpose, fme containerization experts rolled out OpenText Documentum on AWS, and fme AWS specialists connected it to Alexa. We believe that Alexa skills can make your daily work easier. In environments where operating a system with mouse and keyboard is difficult, Alexa can find and read documents via voice commands. One use case could be a laboratory where safety gloves have to be worn, but the user working there needs SOPs from a DMS system.
The OpenText Documentum REST API? – A field report from the developer’s point of view

What is the “Documentum Rest API”?

In principle, the term Documentum REST API refers to a web interface, introduced with Documentum 7, that allows access to objects and functions of OpenText Documentum. It is based on Spring Boot, is delivered as a WAR file and must be installed on an application server, e.g. Apache Tomcat. This interface can be used to write customized clients, apps, or plug-ins for other systems.

Before the Documentum REST API, the DFS and DFC APIs had to be used as the interface for business applications to interact with Documentum. DFS is based on SOAP and is therefore also a web interface, while DFC is a Java API.

The REST API consists of several RESTful web services that interact with OpenText Documentum. It is “hypertext-driven, server-side stateless, and content negotiable, which provides you with high efficiency, simplicity, and makes all services easy to consume” [Source: https://community.emc.com/docs/DOC-32266, date of collection: 29.11.2016 – 11:51]
Introduction to REST Services

To understand how the Documentum REST API works, the first step is to explain the basic operation of a general REST service. REST stands for Representational State Transfer and is based on a stateless, client-server and cache-capable communication style. In almost all cases the HTTP protocol is used. REST is additionally hypertext-driven, i.e. a REST client must at any given moment have all the information it needs to decide where to navigate next. Hypermedia connects resources and describes their capabilities in a machine-readable way. For this reason, a REST client only needs to understand one thing to communicate with the REST server: hypermedia.
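As a minimal illustration of hypermedia-driven navigation, the client below extracts a link by its relation name from a JSON response. The payload shape is simplified and only illustrative of the idea; the exact field names in the Documentum REST API's responses may differ in detail.

```python
import json


def find_link(resource, rel):
    """Return the href of the link with the given relation, or None if absent."""
    for link in resource.get("links", []):
        if link.get("rel") == rel:
            return link.get("href")
    return None


# Illustrative response, loosely shaped like a REST home document
home = json.loads("""{
  "links": [
    {"rel": "self",
     "href": "http://vm-documentum-app-test:8080/dctm-rest/services"},
    {"rel": "repositories",
     "href": "http://vm-documentum-app-test:8080/dctm-rest/repositories"}
  ]
}""")
print(find_link(home, "repositories"))
```

The client hard-codes only the entry URL and the link relations it understands; every other URL is discovered from the responses, which is exactly what "hypertext-driven" means in practice.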

REST itself is an architectural style for network services (web services) in which complex mechanisms are avoided as far as possible in favor of simple HTTP calls.
This means that a REST web service and its functions can be called without much effort from a simple HTTP client. A call via the browser – which is also based on an HTTP client – is likewise possible if the web service has implemented a GET call for the specified service URL. It is because of the HTTP protocol that the functions and accessibility of REST services can be easily tested and used, for example, for in-house development of a client whose code base you fully control.

In addition, the REST API can be accessed from any client, provided that the client is on the same network and can authenticate and authorize itself. No local software installation is necessary; to use the functions, only an HTTP client is required (e.g. a browser or a self-developed client).

Compared to SOAP, REST also offers a small performance advantage.

Functions of the services

The Documentum REST API is delivered with some REST services that allow you to call basic functions of Documentum. Here is a brief list of what I think are the most interesting services:
Object Service
This service allows interaction with Documentum objects that inherit from dm_sysobject, such as cabinets, folders and documents. You can use the service to display objects, create and modify objects (for example check-out/check-in, adding a new rendition, …), or delete objects.
Objects that do not inherit from dm_sysobject must be handled through an extension of the REST API. Read more about this in the section Extensibility.
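A hedged sketch of calling the Object Service from plain Python, assuming HTTP Basic authentication and a folder/documents resource path. The host name is taken from the example later in this article; the exact resource paths and media types are assumptions that may differ between Documentum REST versions and should be verified against the service's own hypermedia links.

```python
import base64
import json
import urllib.request

BASE = "http://vm-documentum-app-test:8080/dctm-rest"  # host from the example below


def basic_auth_header(user, password):
    """HTTP Basic authentication header value for the given credentials."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"


def new_document_payload(name):
    """Minimal sysobject representation for a new dm_document."""
    return {"properties": {"r_object_type": "dm_document", "object_name": name}}


def create_document(repository, folder_id, name, user, password):
    """POST a new document into a folder (sketch; requires a reachable server)."""
    url = f"{BASE}/repositories/{repository}/folders/{folder_id}/documents"
    req = urllib.request.Request(
        url,
        data=json.dumps(new_document_payload(name)).encode(),
        headers={
            "Content-Type": "application/vnd.emc.documentum+json",
            "Authorization": basic_auth_header(user, password),
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


print(new_document_payload("example.txt"))
```

In a hypermedia-correct client you would discover the documents URL from the folder resource's links rather than assembling it by hand; the hand-built URL here just keeps the sketch short.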

User Service
This service can be used to manage users, groups and ACLs. This means that user administration can also be accessed from an external source and easily adapted to your own processes and requirements.

DQL Service
The DQL service allows you to execute DQL queries and return their results to the client. By default, the DQL service only supports select queries, for security reasons. Also for security reasons, changes to objects via DQL should instead be implemented using your own, secure services.
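A read-only DQL query is a simple GET request against the repository resource. The sketch below only builds the query URL; the dql and items-per-page parameter names follow my reading of the Documentum REST documentation, but treat them as assumptions to verify against your installed version.

```python
from urllib.parse import urlencode


def dql_url(base, repository, dql, items_per_page=20):
    """Build the GET URL for a read-only DQL query against the DQL service."""
    query = urlencode({"dql": dql, "items-per-page": items_per_page})
    return f"{base}/repositories/{repository}?{query}"


url = dql_url(
    "http://vm-documentum-app-test:8080/dctm-rest",
    "myrepo",
    "select r_object_id, object_name from dm_document",
)
print(url)
```

Fetching that URL with any authenticated HTTP client returns the query results as a paged feed, which is what makes the service so easy to test from a browser.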

Example Call of the Object Service
Once the REST API has been successfully installed on an application server, and provided the firewall rules have been set up correctly, it can be called by any client from the local network under the path “/dctm-rest” (example: http://vm-documentum-app-test:8080/dctm-rest for a default Tomcat configuration). Calling this URL, e.g. via the browser, displays the REST API’s welcome page.

If this link, adapted to local conditions, cannot be reached either via the network or locally from the application server, the log files must be inspected; most likely something failed during the installation of the REST API.

Clicking on the “Services” button calls “/dctm-rest/services” (example: http://vm-documentum-app-test:8080/dctm-rest/services), which lists the available services.

Calling the first URL in that list displays all Documentum repositories. The URLs contained in those entries can then be used to connect to a repository with valid logon data, for example to execute a DQL query, check out a document, or similar.

Getting Started
With appropriate knowledge of the architecture and operation of REST services, getting up to speed is relatively quick, since the Documentum REST API is a REST-compliant interface. In addition, OpenText provides many helpful tutorials and code examples that can be found on the Internet. OpenText’s code libraries, which are available on GitHub in almost all common programming languages (Java, C#, Python, Swift, Ruby, Objective-C, AngularJS, …), complete the simple introduction.
Nevertheless, these GitHub libraries should be chosen with care, since you depend on the code as written and the libraries can be changed by the manufacturer at any time. Features could behave differently after a new patch and no longer work as before, which can lead to errors.
The other option is to create a custom code library that accesses the API. This ensures independence from the code libraries provided by OpenText and gives you more control over what happens during development.

Extensibility
As already indicated in the description of the individual services, not every function of Documentum is exposed in the standard delivery of the web interface described here.
However, if functions that are urgently required for your business logic are missing from the standard delivery of the REST API, they can be implemented as extensions of the REST services. Note that getting started with extensions is not quite as simple as, for example, writing a first document management client against the API. However, thanks to the documentation of the REST API and good examples on the OpenText community pages, most initial problems can be handled easily. Extensions are implemented in Java, using the build management tool Apache Maven.

Docker Support
Since version 7.3, the Documentum REST web services can officially be run as a prebuilt Docker image in a Docker container. The support of Docker is, in my opinion, the right step to position the REST API as the primary web interface. Docker makes it easy to scale and to migrate to other servers without much effort. For this reason, the finished REST API can be quickly integrated into the internal network and used, provided, of course, that the necessary Documentum infrastructure already exists.

Summary
Documentum REST web services offer a good and beginner-friendly way to interact with OpenText Documentum from self-implemented systems or clients. In my opinion, the quick learning curve for the functionality and use of the API is especially positive. You should, however, be careful with special cases, because implementing an extension of the standard API is time-consuming and requires some training. The possibility of using the services from virtually any programming language, and thus from any operating system, is also very nice.