May 3, 2018 | by Jörg Friedrich | 0 Comments

Sometimes it is not the leading-edge technologies that cause headaches, but solid requirements like synchronizing your Document objects' attributes with SAP.
In this blog post I will explain the differences and purposes of the OpenText Documentum Archive Services for SAP and the OpenText Documentum Content Services for SAP, as well as the challenge of synchronizing only modified SAP data into OpenText Documentum.

OpenText Documentum Archive Services for SAP
The main purpose of the OpenText Documentum Archive Services for SAP (ASSAP) is to accept content (e.g. a printable bill) delivered by SAP. For this, the ASSAP exposes itself as an ArchiveLink server. With the ArchiveLink protocol, SAP is not only able to archive content but also to retrieve that content for display purposes; such content can be, for example, billing documents. So SAP is the active part and OpenText Documentum is the passive part. The ASSAP creates the link information with SAP archive maintenance data.

OpenText Documentum Content Services for SAP
In short, if you want to interact with SAP in any way other than the ArchiveLink protocol, you need the OpenText Documentum Content Services for SAP (CSSAP).
The mapping is defined by Document configuration objects, which are text based.
A configuration can be, for example, a mapping of an SAP attribute to an OpenText Documentum attribute, or a query executed either against OpenText Documentum or against SAP.

Scenario 1:
If you have the content in OpenText Documentum first and want to link it with SAP, a barcode link is a possible way to achieve this. The barcodes available inside OpenText Documentum have to be published to SAP. With this, the user can link e.g. an SAP billing document with an available (external) barcode. After this, the Document object (and its content) is linked to the appropriate SAP object, and the content can be retrieved via the ArchiveLink interface (exposed by the ASSAP).

Scenario 2:
You want to pull metadata, e.g. of a billing document, from SAP into OpenText Documentum. For this, suitable search criteria are needed (the receipt number, the ArchiveLink ID, etc.) to build a query against SAP. All matching SAP objects are then mapped to OpenText Documentum objects, and all configured attributes are copied from the SAP object into the corresponding OpenText Documentum object. Mapped attributes do not necessarily have to be written to the same OpenText Documentum object that holds the content; it is up to you how you define the mapping rules.

Scenario 3:
You can also modify SAP data based on the values of OpenText Documentum objects. While pulling data from SAP can be performed with standard RFC calls, the best way to push data to SAP is a custom BAPI created especially for that purpose.

Delta Synchronizations
The CSSAP works best for one-time synchronizations, triggered e.g. by missing data or by specific lifecycle status names.

However, with some tweaks you can get the best value out of CSSAP and enable delta synchronization:

  • Perform a query against SAP to determine all modified SAP data. For this, weave conditions for e.g. a modification date into the SAP query parameters.

For example, a regular query parameter inside CSSAP looks like this:

Modification Date=20170401

Change this line, e.g., to:

Modification Date='00000000' OR AEDAT GE '$modified' OR ERDAT GE '$created'
$DQL=select max(any_tracking_creation_date) as created, max(any_tracking_modification_date) as modified from your_object_type

With this change, all creations (field ERDAT) and modifications (field AEDAT) can be identified.

  • Based on the result of the above query, mark all OpenText Documentum objects that are subject to synchronization (this makes the process more fault tolerant).
  • Now perform a query against SAP to match all objects in scope of the synchronization, e.g. by injecting the receipt numbers of all pending OpenText Documentum objects into the SAP query. Consider limiting the number of injected parameters (e.g. the number of receipt numbers and thereby the number of retrieved receipts) so that a single synchronization run does not span too long a time period; this can be done e.g. with the DQL hint "return_top N". Because of the split into a mark step and a synchronization step, postponing objects to subsequent synchronization runs is no problem, and neither is an aborted synchronization step: the next run will synchronize the missed objects.
  • Consider increasing the number of objects within a single transaction (a CSSAP configuration setting) from 1 (the default) to a large number. This makes the mark step fault tolerant, while the number of objects processed per run remains limited by the "return_top N" hint mentioned above.
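The mark-and-synchronize flow above can be sketched in a few lines of Python. This is a pure illustration; the object structures, field names and the batch limit are assumptions, not CSSAP APIs.

```python
# Illustrative sketch of the two-step delta synchronization described
# above; object structures and field names are assumptions, not CSSAP APIs.

BATCH_LIMIT = 2  # plays the role of the DQL hint "return_top N"

def mark_step(docbase, modified_receipts):
    """Mark step: flag every Documentum object whose SAP counterpart
    was created or modified since the last run."""
    for obj in docbase:
        if obj["receipt_no"] in modified_receipts:
            obj["sync_pending"] = True

def sync_step(docbase):
    """Synchronization step: process at most BATCH_LIMIT pending
    objects; anything left over is picked up by the next run."""
    batch = [o for o in docbase if o["sync_pending"]][:BATCH_LIMIT]
    for obj in batch:
        obj["synced"] = True          # attribute copy would happen here
        obj["sync_pending"] = False
    return len(batch)

docbase = [{"receipt_no": n, "sync_pending": False, "synced": False}
           for n in ("R1", "R2", "R3")]
mark_step(docbase, {"R1", "R2", "R3"})
print(sync_step(docbase))  # → 2 (batch is limited)
print(sync_step(docbase))  # → 1 (next run synchronizes the rest)
```

Because marking and synchronizing are separate, an aborted run leaves the pending flags in place and costs nothing but time.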

Taken together, these steps achieve a delta metadata synchronization from SAP into OpenText Documentum.

Don’t hesitate to get in touch with me or to share your thoughts and ideas with me!

May 2, 2018 | by Maximilian Krone | 0 Comments


This blog article looks at OpenText's Documentum REST API and provides insight into its technology, basic functionality and extensibility. I report from my own experience and am happy if I can help other "techies" like me a little with it 😉

What is the “Documentum Rest API”?
In principle, the term Documentum REST API refers to a web interface introduced with Documentum 7 that allows access to the objects and functions of OpenText Documentum. It is based on Spring Boot, is delivered as a WAR file and must be installed on an application server, e.g. Apache Tomcat. This interface can be used to write customized clients, apps, or plug-ins for other systems.
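A first call against such a deployment can look like the following sketch. The base URL, repository name and credentials are placeholders for a typical local Tomcat installation; adjust them to your environment.

```python
# Minimal sketch of a first call against the Documentum REST API.
# BASE_URL, user and password are placeholder assumptions.
import base64
import json
import urllib.request

BASE_URL = "http://localhost:8080/dctm-rest"

def build_request(path, user=None, password=None):
    """Assemble a request with a JSON Accept header and optional
    HTTP Basic authentication."""
    req = urllib.request.Request(BASE_URL + path,
                                 headers={"Accept": "application/json"})
    if user is not None:
        token = base64.b64encode(f"{user}:{password}".encode()).decode()
        req.add_header("Authorization", "Basic " + token)
    return req

def get_json(path, **auth):
    """Execute the request and parse the JSON response."""
    with urllib.request.urlopen(build_request(path, **auth)) as resp:
        return json.load(resp)

# e.g. list the repositories exposed by the interface:
# print(get_json("/repositories", user="dmadmin", password="secret"))
```

The same pattern (resource path plus standard HTTP verbs) carries over to objects, folders and queries.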


Mar 29, 2018 | by Dirk Bode | 0 Comments

It is an astonishing phenomenon: Germany only occupies a regrettable midfield position when it comes to digitalization. However, in discussions with company leaders or at association meetings it becomes clear that many leaders and managers are almost annoyed by the topic. One association spokesperson even apologized for addressing the subject of digitalization at all. Of course, there is often not much fundamentally new to learn about digitalization in general from lectures and seminars when you have already attended numerous talks, visited Silicon Valley, admired Israeli startups and experienced Asian digital power. I even believe that in many cases top-level management has largely understood the topic – it is most certainly high time for that.

However, digitalization is not a phenomenon that is solved in boardrooms with an army of assistants and "SWAT teams". The general consensus is that the challenge can only be met successfully if all of a company's talents are gradually taken on a journey into the digital future. The corporate culture – here too there is agreement – is the key to success in the digital world. Companies need more speed, agility, creativity, innovation, experimentation and networking in order to successfully face the digital "VUCA world".

It is in fact unfavorable if the entrepreneurial leadership is already annoyed whenever the subject of digitalization comes up.


The digital experience of revival or “when the penny drops”.

Even the most experienced digitizers had their personal awakening at some point – the moment when the penny dropped and everything came together; the moment when it became clear: something really big is happening, and I know where it is coming from, why it is happening, what effects it has and how we can react to it. This is exactly the experience every employee in the company needs – and of course it is also possible without a trip to Silicon Valley, Israel or China. We are happy to help you 😉

Only if all forces in the company understand the risks and the incredible opportunities of digitalization can they pull together in the long term. Even today, the stories of Uber, Netflix, Airbnb and Amazon, or the pictures of the papal election then and now, still help.

So the next time the topic of digitalization comes up, or one of the well-known examples is trotted out, remember that the person next to you may be having his or her personal digital awakening, and refrain from the usual remarks like "I already know that, I can already do that, I've been there". Even for many entrepreneurs the penny has not yet dropped – do not spoil the moment for them, but rather help them to understand the topic through new examples and pictures. Or use the time when things repeat themselves to develop concrete ideas and measures for taking your entire company on the digital journey.


Become a patient digitalization ambassador

Don't let digitalization get on your nerves; become a patient ambassador who explains again and again why digitalization is at once the greatest challenge and the greatest opportunity for every company since the industrial revolution. Of course, concrete implementation is ultimately a prerequisite for success, but without employees who have understood what it is all about and what opportunities lie dormant in the topic, implementation will not succeed.


Mar 23, 2018 | by Rolf Krämer | 0 Comments

As a member of the Virtual Reality team here at fme, I want to give you an insight today into the last three years of our project work. One of the most important goals is to let the client not only see but experience the product. Virtual Reality (VR) and Augmented Reality (AR) are two key technologies to achieve this. Here is a short abstract of our project history.

March – August 2015: The first Project with Oculus Rift DK2
In March 2015 we started a VR development project for one of our clients. Their wish was to show their products in the VR world: the end customer should be able to move around the product and even switch the product configuration while in Virtual Reality. We used the Oculus Rift DK2 and Unity to realize it. The main challenge was stable rendering for both eyes. The frame rate (images per second) repeatedly dropped below 60 frames per second, the threshold under which users become VR sick. We solved this by removing unnecessary 3D model details and integrating a better-balanced lighting system.

Lighting is one of the most expensive calculations in 3D, because lighting controls the look of the materials. The result was a close-to-realistic looking product in VR with an acceptable frame rate.

Challenge accepted and won: Realizing a great frame rate without loss of quality

December 2015 – March 2016: Let’s go to Cardboard
To demonstrate our abilities and knowledge in VR and 3D technology to our clients, we designed a Cardboard app. Inside the app, users can see three motorbikes in a garage; they can change the color of the bikes and take three different positions in the scene. The app runs on Android devices with at least the computing power of a Samsung S6. The main challenge here was accurate head tracking. Android devices expose the three-axis (X, Y, Z) gyroscope, accelerometer and magnetometer as separate sensors; the Android API itself does not fuse these sensor signals into a single result, which is exactly what head tracking needs. Therefore, we had to fuse the signals with our own algorithm. In version two of Google's Cardboard API, such an algorithm is already implemented, but unfortunately it is not stable enough to avoid drifting. Drifting is the effect that, after you rotate your head a full 360 degrees around the Y axis, the end point differs from the start point.
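The basic idea behind such a fusion algorithm can be illustrated with a complementary filter: the integrated gyroscope yaw is smooth but drifts, while the magnetometer heading is noisy but drift-free, so the filter blends the two. This is a simplified sketch of the concept, not our production algorithm; the ALPHA weight and the simulated gyro bias are made-up values.

```python
# Complementary-filter sketch: combine integrated gyro yaw (smooth,
# drifting) with the magnetometer heading (noisy, drift-free).
ALPHA = 0.98  # trust the gyro short-term, the magnetometer long-term

def fuse_yaw(yaw, gyro_rate, mag_heading, dt):
    """One filter step: integrate the gyro rate, then pull the
    estimate slightly towards the absolute magnetometer heading."""
    gyro_yaw = yaw + gyro_rate * dt
    return ALPHA * gyro_yaw + (1.0 - ALPHA) * mag_heading

# A stationary head, but the gyro wrongly reports a 0.5 deg/s bias.
# Pure integration would drift to 5 degrees after 10 seconds; the
# fused estimate stays bounded near 0.25 degrees.
yaw = 0.0
for _ in range(1000):
    yaw = fuse_yaw(yaw, gyro_rate=0.5, mag_heading=0.0, dt=0.01)
```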

Challenge accepted and won: Avoid drifting

May 2016: Vive and more
In May 2016 we started the next project, this time on the HTC Vive. The user can switch on the car lights and rotate the car; in addition, he can move freely in the scene. The challenge here was to find a suitable way to move. We solved it with a teleporter: the user selects a point in the scene and teleports to it. This is one of the most common movement concepts in VR, especially on the Vive. The controllers themselves offer many possibilities for user interaction. A special one is the touchpad on top of the controller: it senses the thumb's touch point and is very sensitive to movement. The touchpad is responsible for the car rotation.

Challenge accepted and won: User Acceptable motion concept

May 2016: HoloLens
Another approach is Augmented Reality (AR). In AR, the virtual product is blended into the real world: the user captures the room with his smartphone, and on the display the camera stream and the virtual product are merged. The HoloLens combines VR and AR. The user wears it on his head and looks through glasses onto which the virtual world is projected. Cameras in the HoloLens measure the room and create a live, simple 3D mesh representation of it. This lets the software know where the user is located and where the virtual products are positioned.
We created an app for the HoloLens which allows the user to rotate, scale and place a car in the room. In addition, it is possible to turn the car's lights on and off. One special feature is the speech control.
The challenge here was accurate rendering: the model was rich in geometry and had to be reduced, because the HoloLens is a standalone device with limited computing power.

Challenge accepted and won: Presenting complex products with high depth of details in a good performance

August 2016 – January 2017: 360° View
A client wanted a 360° view in their app. Therefore, we implemented a 360° viewer that works on iOS, Android and Windows Phone. The challenge was to build a simple, reusable control that works on every platform. The solution was a Xamarin control. For Android and iOS, the renderer for the view is written in OpenGL and GLSL; the Windows Phone/Windows solution is written in DirectX and HLSL. (GLSL is the shading language for OpenGL, HLSL the shading language for DirectX; shading languages run calculations directly on the graphics device, the GPU.) Another challenge was tracking the device's orientation. For iOS and Windows I wrote dedicated solutions, since these platforms have special chips that measure the device's orientation accurately. For Android, I used a solution similar to the Cardboard app described earlier.
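The core of such a viewer is a tiny mapping from the view direction to texture coordinates on the equirectangular panorama. Here it is ported to Python for readability; in the app this mapping runs per pixel in GLSL/HLSL on the GPU, and the axis convention (looking down -Z) is an assumption.

```python
# Map a unit view direction onto equirectangular texture coordinates.
import math

def direction_to_uv(x, y, z):
    """Map a unit view direction to (u, v) in [0, 1] on an
    equirectangular 360° panorama (camera looks down -Z)."""
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)   # longitude
    v = 0.5 - math.asin(y) / math.pi                # latitude
    return u, v

print(direction_to_uv(0.0, 0.0, -1.0))  # straight ahead → (0.5, 0.5)
```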

Challenge accepted and won: Generic interface development while creating individual platform solution for specific chip sets

That is just a small overview of what we can do in VR, AR, Mixed Reality and 3D solutions in general, on every platform. Let the future bring more 🙂

The next topics are already in the pipeline and we will keep you updated on:

  • Technical side of Oculus
  • Technical side of Vive
  • Technical side of Cardboard & Co.
  • Technical side of HoloLens
  • The next generation VR/AR Glasses
  • Mixed Reality
  • Realtime VR streaming out of the cloud
Mar 5, 2018 | by Jürgen Wagner | 0 Comments

The term "digital transformation" is currently presented in all media as the great revolution and must not be missing from any vision statement. But I often see confusion surrounding this buzzword and would like to share my thoughts on it.

Digitization – Digital Transformation
First of all, I notice that "digitization" is often referred to as "digital transformation". In my opinion, the two should be clearly separated.

I see digitization as the transition from analog to digital. This is, for example, the replacement of paper documents (holiday requests, material requisitions, travel expense reports, scanned invoices) by forms and dialogues on the intranet. The advantages compared to conventional in-house mail with a single analog copy are clear: faster processing on the computer without waiting times. And other new technologies and applications (e.g. mobile devices, sensors, networking, apps, frameworks, cloud storage...) ensure that more and more areas are opened up for digitization by computer science/IT.

Although this is a change (a transformation), it does not amount to a great revolution, but rather to expected steady growth, comparable to the progress of motorization and automation. With the usual procedures – look for qualified employees and train them regularly, adapt the product range to the latest technology, remain innovative in your core business – you will continue to do well... or should we use the subjunctive "would"?

An important point is that progressing digitization is a basic building block for the real "digital transformation": the digital twins! Digital images of the real world's analog business objects are now available and cross-linked.

A striking technology in this context is the emerging Internet of Things. In my opinion, it is still very experimental at the moment, with funny gadgets and questionable pseudo-products... But the first steam engines, airplanes, computers and mobile phones were also dismissed with a smile.

Disruptive developments
What undoubtedly deserves very close attention is the immensely increasing volume of generated, connected data. Today the technology is available to capture these data volumes as Big Data, and the methods of artificial intelligence (AI) and machine learning (ML) have been added for processing and using this data. These data volumes can thus be mastered.

With these new and simultaneously occurring aspects, all the necessary ingredients for a disruptive development full of force and speed come together, as if the three aspects were single waves uniting into a tsunami-like "monster wave".
And it is precisely this wave, rolling towards established companies, that must be taken into account!

(Digital) transformation
This is because the processes, from customer requirements to product selection and purchase to service and support, are subject to constant change. With changing business models or completely new distribution channels, the importance of a company's product know-how and its employees' experience as a unique selling point can decrease. From my point of view, only these completely changed processes constitute the "digital transformation".

Let's get back to the paper documents mentioned above and their digitization on the intranet/Internet. Is it possible to approve these documents automatically? Can an AI learn to handle the simple cases autonomously, send only complex cases to manual approvers, and even discover hidden errors and irregularities using statistical means? I think so, yes. And only then do I see the processing as digitally transformed.

And so even established companies have to expect that their rigid sales and service routines will be replaced by completely different processes at newly established competitors, and that they will be overtaken by the competition in a surprising way – one previously assumed to be unthinkable or against the rules. To put it vividly, "at excessive speed" or "on a forbidden track"... and that, in my opinion, is the real meaning of digital transformation: there are new players and competitors, and they "cheat" because they do not adhere to the usual mechanisms and surprise everyone:

  • ... someone runs the 100 meters at the World Championships taking a short cut and wins – unthinkable, never happened, but is it explicitly forbidden in the rules?
  • ... or someone jumps over the bar backwards and wins – wasn't there something like that once? Right: Dick Fosbury, 1968, at the Olympic Games in Mexico.

For Fosbury's straddle-style competitors, the tried and tested methods – sifting out the best talents, hiring the best trainers and organizing the best training camps – unfortunately no longer sufficed.

The solution-invariant customer problem
The Fosbury Flop has an efficiency advantage, because the body's center of gravity stays below the bar throughout the jump curve while the body crosses it. This corresponds to the "solution-invariant (customer) problem".

This is exactly the view you as a company must take of customer problems. In other words, check your products, services and solutions in the digital environment to see whether similar (previously unthinkable) advantages in business processes can be made visible and usable in the future through networking, IoT, Big Data and AI/ML.

For example, are you a manufacturer of measuring instruments? Your customers don't really want to buy or own measuring devices; they only need them to check their production facilities and the quality of the manufactured products. An alternative process (for an exemplary simple case) could use the motion/vibration sensor of a mobile phone to record the resonances and oscillations of the production plant. The digitization would be accomplished, and you would have the "digital twin" of the required data. An AI could then compare this data with large quantities of known characteristic curves of intact and faulty plants (e.g. in the cloud at a service provider). This would amount to a shift from the acquisition and ownership of products to a results service.

There are similar ideas among the major automobile manufacturers, who are pursuing a move away from the "owning a vehicle" model towards "mobility solutions". It is only such changes of processes and business models that I would call "digital transformation".

Your Digital Transformation
Finally, I would like to point out the need for action in view of possible disruptive developments. Examples such as Uber and Airbnb have shown that it is not enough to stay ahead of well-known competitors: you have to face completely new competitors and offers that can appear very fast, surprisingly and dominantly.

Test your digital maturity level, and hold a workshop with your IT on new ideas and procedures and their implementation. Be ready for your own digital transformation and be prepared for the new approaches of your competitors. We will be happy to support you!

Feb 20, 2018 | by Steffen Fruehbis | 0 Comments

In the regulated life sciences environment, the management of controlled documents such as SOPs (Standard Operating Procedures), procedural instructions or work instructions is of great importance. Change management processes ensure that these documents are properly revised, approved, trained, distributed and, where necessary, suspended. In addition to well-known use cases within change management, there are special cases that are handled differently from company to company.

One of these special cases is the rare, so-called Effectivity Hold.


Case Study
A manufacturing life sciences company is installing a new centrifuge for a production plant producing drug substances. In order to avoid operating errors, every employee who will operate the device in the future must undergo compulsory training tailored to the device. Within the training course, among other things, the work instructions created for the device are explained.

The work instruction passes through the obligatory document management processes prior to the training. Within the company's own electronic document management system (DMS), those responsible for production operations approve the work instruction digitally. The approval takes place on January 15. A document coordinator specifies February 1 as the effective date of the document, since the device is to be used from that date. The DMS ensures that the work instruction automatically receives a corresponding electronic watermark on February 1. Furthermore, the DMS automatically copies the watermarked document to the company's intranet (publishing).

Due to regulatory requirements, the work instruction may only become effective after all relevant employees have completed the training managed by the DMS.

Two employees cannot attend the training due to illness. Since the training took place only a few days before the effective date and not all employees were able to complete it, a responsible document coordinator must ensure that the document does not become effective on February 1. Only when the remaining employees have completed the training may the work instruction become effective. To achieve this, the coordinator sets a lock on the document: an Effectivity Hold. After the employees have been trained, the document coordinator removes the lock. As soon as the lock is removed, the DMS transfers the work instruction to the effective status, including the publishing mentioned above.
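The effectivity rule of this case study can be summarized in a small state model. The following Python sketch is an illustration only, not the actual D2/Life Sciences implementation; class and attribute names are our own.

```python
# A document becomes effective only once its effective date is reached
# AND no Effectivity Hold is set; removing the hold afterwards
# triggers effectivity immediately.
import datetime

class ControlledDocument:
    def __init__(self, effective_date):
        self.effective_date = effective_date
        self.hold_reason = None   # set while an Effectivity Hold exists
        self.status = "Approved"

    def set_hold(self, reason):
        self.hold_reason = reason        # recorded in the audit trail

    def release_hold(self, today):
        self.hold_reason = None
        self.check_effectivity(today)

    def check_effectivity(self, today):
        if today >= self.effective_date and self.hold_reason is None:
            self.status = "Effective"    # watermarking/publishing here

doc = ControlledDocument(datetime.date(2018, 2, 1))
doc.set_hold("Two employees still need training")
doc.check_effectivity(datetime.date(2018, 2, 1))
print(doc.status)  # → Approved (the hold blocks effectivity)
doc.release_hold(datetime.date(2018, 2, 5))
print(doc.status)  # → Effective
```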


Solution from the user’s point of view (excerpt)
The document coordinator sets an Effectivity Hold on an approved work instruction.

A reason must be given, which is recorded in the audit trail of the document.

The attribute page of the document displays the Effectivity Hold information read-only.

Technical implementation (excerpt)

The outlined business process is implemented on the basis of the OpenText Documentum Life Sciences Solution Suite. Customer-specific adaptations at the D2 level ensure that the Effectivity Hold functionality is seamlessly integrated into the Life Sciences Suite.

A D2 property page provides the input fields for the document coordinator.

A D2 uniqueness condition within the D2 lifecycle blocks setting the document to effective.

Do you have any questions or wish further information?
Simply fill in our > contact form or visit our > OpenText Documentum Life Sciences Solution Suite landing page.

Feb 5, 2018 | by Antje Duffert | 0 Comments

Working Out… what? It does not mean shouting at your colleagues. It is a program developed by John Stepper to make your work visible. It means sharing – especially knowledge and appreciation. It means engaging with myself and a topic that is important to me. I think that Working Out Loud, WOL for short, strives for a change of behavior: away from knowledge silos, towards willingly sharing without ulterior motives.

A few weeks ago I, Antje Duffert, Consultant for > Communication Services, started a call within fme AG aimed at colleagues who would like to try something new with me and who want to advance our company culture further towards collaboration and openness through independent, goal-oriented work.

I am part of a WOL circle with three other colleagues, and we have reached the halfway point: we have already held 6 of the 12 weekly one-hour meetings (circles) in which we work towards the personal goals we defined beforehand and support each other.


How have I experienced the last 6 weeks?

John Stepper developed a circle guide for every meeting. All guides are well structured and well described. The tasks they contain are designed for one hour and can be handled well within that time, so I do not need to deal with the contents and tasks before the meeting and can participate without any time-consuming preparation. During the meetings, I got to know a lot about my colleagues – business-related facts as well as private aspects.


My biggest surprise (so far)

The topic of the fifth circle, "50 facts about me", was my biggest surprise so far. In this exercise, each participant writes down 50 facts about himself or herself, and the list does not focus exclusively on professional achievements – but on seemingly simple things about one's own person. This exercise broadened my perspective on what I am able to do, what I know and what I have experienced so far. It helped me understand that I can share many topics with people that I would not have regarded as interesting before. At the same time, I learned facts about my circle colleagues that I would probably not have known even after 3 years of close cooperation.


My biggest challenge (so far):

My biggest challenge so far is taking the time to work further on the topic of WOL. Quite often I sat in our meetings with ideas or plans that exceeded the regular one-hour slot, but I did not find the time to follow up on them in my everyday work. That is one more reason why I really like the structure of the guides: I can jump right into the WOL topic at any time, even if, for example, I had another meeting right before.


Do I already feel a change in my behavior?

After these 6 weeks, I feel at least a change in my perception – the perception of my own experiences and of the knowledge I can share with others; for this purpose, I can recommend the exercise "50 facts about me" to anybody. Additionally, I think I overcame an obstacle I did not even know existed. Sharing my own knowledge with others is one aspect, but getting in contact with people who are working on exciting topics is something completely different: letting go of the idea that you need a special reason or something to offer before contacting another person. My appreciation of my own skills has definitely increased.

Now I am looking forward to the following six meetings with an intensive exchange, and I am curious about my conclusion as well as that of my circle members. To be continued…
Also interested in Working Out Loud? All information and circle guides are available at > www.workingoutloud.com.


Jan 18, 2018 | by Antje Dwehus | 0 Comments


Who hasn't experienced first-hand how bad communication drives up costs and reduces the quality of a project at the same time? We don't want that to happen at all, and therefore most project leaders today are very aware of the importance of communication within their projects.

To help keep your communication straightforward in a migration project, let's take a closer look at the mapping specification as a crucial document in migration projects.

Mapping Specification

The mapping specification defines which element (in the majority of cases this will correspond to an attribute) from the source system is mapped to which element in the target system and what kind of transformation rule is applied during this process.

Our best practice is to list every element, even if it only gets a simple 1:1 mapping or is not migrated to the new system at all. This makes the list complete, comprehensible and ready for adaptations in case of changes – which we would love to prevent but which, as we know, are likely to occur as the project moves on.

Transformation Rules

When defining the transformation rules, it is very important to also define exceptions. For example, what if:

  • an author from the source system no longer exists in the target system because he or she has left the company?
  • an attribute contains an unexpected value, like a status 5 when mapping values have only been defined for 1 to 4 (see Table 2 below)?

Please note that the argument "It's not possible." should only be accepted as a reason for not defining an exception rule after thoughtful analysis.

Documentum: object_name → SharePoint: Name

1. The special characters (~ # % & * { } \ : < > / + | ") are replaced by "_"
2. The name is shortened to 127 characters
3. If there are two elements with the same name within the same folder, a unique number is added to the document name

(See the SharePoint limitations in the concept.)

Documentum: r_creator_name → SharePoint: Author

1. Map the value against the mapping list "user_mapping_list"
2. If there is no matching value, use domain/migrationUser

Table 1 – Extract from the transformation rules, including the definition of an exception
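The object_name rules from Table 1 translate into code almost verbatim. The following Python sketch is only an illustration; the function names and the "(n)" suffix format for duplicates are our own choices, not part of any migration tooling.

```python
# Sketch of the object_name → Name transformation from Table 1. The
# special-character set and the 127-character limit follow the table.
import re

SPECIAL_CHARS = re.compile(r'[~#%&*{}\\:<>/+|"]')
MAX_LENGTH = 127

def transform_name(object_name, names_in_folder):
    name = SPECIAL_CHARS.sub("_", object_name)[:MAX_LENGTH]   # rules 1 + 2
    candidate, n = name, 1
    while candidate in names_in_folder:                       # rule 3
        n += 1
        candidate = f"{name} ({n})"
    names_in_folder.add(candidate)
    return candidate

folder = set()
print(transform_name("report<2017>.pdf", folder))  # → report_2017_.pdf
print(transform_name("report<2017>.pdf", folder))  # → report_2017_.pdf (2)
```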


Mapping Tables

In most of the projects Excel files are sufficient for the migration specification. They should contain the mapping tables on dedicated sheets. If the file is stored at a central location, preferably a system with version control, and accessible by the project team, everybody is up to date at any time.


status_source → status_target
1 → new
2 → in work
3 → approved
4 → obsolete

Table 2 – Status mapping


Conclusion

A detailed mapping specification is a must-have in all migration projects. All team members have to be able to access it. It should be a binding document that is updated whenever changes occur during the running project. In most projects, MS Excel is the tool of choice for creating and maintaining the mapping specification.


Screenshot: Mapping rule exception for the document owner, as described under Transformation Rules, listing point 1



Dec 4, 2017 | by Markus Oponczewski | 0 Comments

Last week I attended the tech conference of Amazon Web Services – AWS re:Invent 2017 in Las Vegas. It lasted five days, a period of time that is not always easy to take off from your daily work. Below are the most important takeaways from my perspective, readable in 7–10 minutes.

* 10 Seconds Management Spoiler *
Serverless, machine learning, the machine-learning camera DeepLens, Alexa for Business and Kubernetes as a managed service are the main highlights of this year's re:Invent. By extending established services such as EC2, S3, Glacier and DynamoDB and making them more flexible, AWS helps customers map many requirements directly in the managed service and reduces the need for workaround implementations. It will be fascinating, and at times frightening, to see what will be possible in the future through the combination of these powerful services.

At its own superlative tech conference in Las Vegas – AWS re:Invent – Amazon Web Services demonstrated more than impressively who the global leader in the cloud business is. AWS has also shown that it is not resting on its laurels, but continues to set and drive the pace of global change in IT through continuous innovation and technological competence. In my opinion, another fact ensures that this will remain so for some time to come: the image of AWS has been carefully built up and maintained. AWS does everything to support customers and users. Customers get a huge set of components and services that fit together seamlessly, allowing them to concentrate much better on their core business. AWS is also extremely attractive to the worldwide developer community: developers are courted, supported and valued, and this pays off in continued market leadership.

Back to the conference. More than 43,000 participants registered and took part in more than 1,300 breakout sessions, workshops and demos over five days at six hotels along the Las Vegas Boulevard. The keynotes of Andy Jassy (CEO, AWS) and Werner Vogels (CTO, Amazon, and AWS mastermind) together lasted over six hours and included more than 25 major announcements, covering enhancements to existing services as well as completely new products. Important innovations can be found in several areas; below is my list of this year's re:Invent announcements. However, there were so many announcements, and in detail so many product improvements, that only the most important ones from my personal perspective are described here.

1) Infrastructure + Storage
The highlight at the infrastructure level is certainly the integration of Kubernetes as a managed service (EKS – Elastic Container Service for Kubernetes), which many people had been waiting for. AWS fully relies on compatibility with the official Kubernetes releases but integrates Kubernetes with AWS' own network capabilities (VPC) and access controls (IAM). AWS Fargate for ECS, and in the future also for EKS, makes it possible to roll out and operate containerized applications without any administration effort.

S3 Select and Glacier Select allow querying subsets of existing S3 objects directly, which improves performance and reduces the need for custom workaround solutions. With Glacier Select, archived, and thus effectively dead, data can be queried within minutes and then used for analysis purposes, gaining a completely new significance.

EC2 Elastic GPUs: Demand for GPU capacity varies widely, and using EC2 GPU instances often means provisioning appropriately sized instances even when GPU performance is not needed permanently. With Amazon EC2 Elastic GPUs, GPU performance can be configured independently of the EC2 instance in use and flexibly matched to actual needs: when the demand for GPU performance is high, it is scaled up; when the demand decreases, it is reduced again.

2) Lambda and Serverless
.NET Core and Go will be supported as programming languages for Lambda functions in the future, and the memory limit for Lambda functions will be increased to 3 GB. With the AWS Serverless Application Repository, third-party providers will be able to offer their serverless applications to other customers, in principle a marketplace for serverless applications. Overall, AWS is also aligning other services with serverless applications and actively supporting their development.

3) Cloud9 IDE
AWS now has its own IDE, which runs completely in the browser. This significantly improves the development, deployment and testing of serverless functions, including testing functions locally, and so the Cloud9 IDE closes a gap in serverless application development. Cloud9 also has strong support for pair programming: developers can work together on the same code, remotely or locally. Other collaboration methods are supported as well, for example release workflows in the team or chat functions.

4) Databases
More extensive enhancements to existing database services were announced for the Aurora database (the fastest-growing service at AWS) and DynamoDB. Aurora Serverless helps when workloads fluctuate or are difficult to predict, scaling flexibly with actual demand. With Aurora Multi-Master, worldwide zero-downtime scenarios can be supported.

With DynamoDB Global Tables and DynamoDB On-Demand Backup, data is automatically replicated between AWS regions and can be backed up in a standardized way. With these additions, AWS closes the gaps you would expect a fully managed service to cover, without customers having to implement their own workaround solutions.

Amazon Neptune, a completely new database service, completes the NoSQL portfolio of AWS with a graph database. Its field of application is the handling of highly connected information; Neptune supports the most common frameworks for describing and querying graphs, TinkerPop and SPARQL.

AWS now offers managed solutions across the entire NoSQL database context: DynamoDB as a database for key-value and document data, ElastiCache for in-memory data, and finally Neptune as a graph database.

5) Machine Learning
The actual star among all of this year's announcements is machine learning. AWS is investing heavily here, and you can tell that they do not want to leave this market to Google and Microsoft. AWS SageMaker enables developers to configure machine-learning algorithms and use them in their own applications without needing deep background knowledge of the algorithms; Amazon is thus accommodating developers. SageMaker can also be used to train models based on Apache MXNet and Google's TensorFlow.

AWS DeepLens is a camera optimized for machine-learning scenarios and supports business models where image analysis and real-time response are important: Big Brother for everyone and every business model. In my opinion, it is Amazon's next hardware hit and final proof that AWS and Amazon are leading in innovation. It will not be long before Google and Microsoft follow suit with their own hardware.

The services AWS Transcribe, AWS Translate and AWS Comprehend are provided for voice analysis and processing. Kinesis Video Streams and Amazon Rekognition provide the basis for custom solutions for real-time video processing.

6) Alexa and Voice User Interfaces
With the Echo devices and the opening of Alexa skill development to the developer community, Amazon has set the standard for voice interfaces, especially in the consumer market. Alexa for Business now expands this portfolio in the direction of business. Not only can it be used to develop new applications (Amazon demonstrated how meetings can be made more efficient through voice control), it also simplifies the administration of a large number of voice-controlled devices in a company. It will be exciting to see how the Alexa community reacts to this; there will probably be many new and innovative solutions.

7) AWS IoT
Six new services were announced at re:Invent for AWS IoT alone. With IoT Device Management and IoT Device Defender, AWS simplifies the management of large numbers of IoT devices and the creation and maintenance of corresponding security policies. IoT Analytics supports the special analytics requirements of IoT devices, and Greengrass ML Inference brings machine-learning capabilities directly to the IoT device, reducing the transport and preprocessing of device data in the AWS cloud. With Amazon FreeRTOS, AWS finally brings its own device operating system to the market to meet the special requirements of preprocessing data directly on the devices.

Participating in a workshop on the preview of Amazon Sumerian was a lot of fun. Sumerian is a 3D development environment for VR and AR that runs directly in the browser and is relatively easy to use compared to existing 3D engines. It offers a high level of integration with Amazon products such as Echo devices and with AWS services, and it can play a major role in improving the user experience for your own business models. Currently, Sumerian can only be tested in a pre-release version, but it should be kept in mind in the medium term, especially for sales and marketing activities.

In addition to the service announcements, a standardization is taking place in software development and cloud infrastructure that does not only affect AWS: like other vendors, AWS is guided and supported by tried-and-tested procedures and best practices. The support for pair programming in Cloud9 is one indication of this, and the use of standard frameworks such as Kubernetes, Google's TensorFlow, or Go for serverless functions clearly underscores it.

However, the most important feature running through all AWS services is the importance of IT security for data and service endpoints: non-negotiable, even if not immediately visible. Luckily, IT security is positioned and implemented very prominently at AWS.

Nov 22, 2017 | by Jennifer Utech | 0 Comments

The first two field reports from my fellow apprentices have already given you some insights into the apprenticeship here @fme. They described our first coding tasks, databases and object-oriented programming. However, we learn a lot more than that. I want to talk about our own apprentice project, vistaya: what vistaya is, how we were introduced to it, and what our current responsibilities are.

What is vistaya?
When we first heard about vistaya, it did not mean anything to us. Then we were allowed to try the app ourselves and were really impressed.
Marc Zobec, a former apprentice at fme, had the idea of developing a registration app for a visitor management system. As part of his final project, he developed the first version of vistaya as an iOS app for iPads. You can still find this prototype at our headquarters here in Brunswick.

It soon became clear: this app really does help our reception team! Our reception colleagues were relieved of work and now have more time for other things. Many visitors were also interested in the app and asked whether they could buy the software. As a result, vistaya became a full apprentice project. Currently, apprentices from the first and second year of the apprenticeship are working on it.

Induction into vistaya
Our induction began with a short introduction from the second-year apprentices. We were surprised to hear that they had even been allowed to lead client meetings in their first year of apprenticeship. The following week, they had a presentation scheduled for a potential client. Another apprentice and I were offered the opportunity to sit in and listen to the meeting. In further meetings, we learned more about vistaya, practiced presenting the app, and were shown how to edit the corresponding vistaya website. We took over more and more responsibilities and were eventually allowed to hold our first client presentation and present the app to an interested company.

Our current responsibilities
Our current responsibilities are as diverse as the apprenticeship @fme. On the one hand, we hold presentations to show interested companies our app, write emails and answer our clients' questions. On the other hand, we record suggestions, improve the app and add new features. Furthermore, we have designed the new vistaya website.

Conclusion
It is often thought that programmers only code. That is a very outdated picture of the daily life of a programmer. We still code a lot, but we also give presentations in front of clients, communicate with them and analyze what their wishes are. I am glad that the apprenticeship @fme is not as "dry" as you might typically think, and that I have been able to gain client experience from early on. By now, it has become a lot easier for me to hold presentations.