
A summary of where I paused

The main objective for this week was to get my hands dirty with implementation, picking up the inspiration from the end of last year. Managing engineering artifacts often demands that the creators and users agree on how to handle them in a sustainable manner (e.g., directory structure, file naming, versioning protocol, sharing, backup, review process, etc.). My approach to accelerating the artifact development workflow is to provide a platform for managing those documents. The platform takes care of storing artifact documents in a defined manner (e.g., naming convention, structure, and versioning) from the time of creation (ideally) and allows the creators and the users to develop the management protocol (e.g., changes to document A require a review by person B, document C uses versioning scheme D, etc.) through a tangible interface (ideally no-code, to extend the concept to everyone). The idea is that, throughout the development of the artifacts, the management protocol also matures, fitting the different cultural and domain backgrounds of each stakeholder involved.

One approach to describing the management protocol is to represent the relationships between heterogeneous entities in a knowledge graph. The graph-based approach allows more flexible changes while the management protocol is still being developed, as opposed to existing workflow description languages (e.g., BPML), and the entities can be linked to concepts beyond a single domain. Furthermore, the resulting graph can be used to build applications for operating the system that is built through the engineering process (e.g., a QA system in manufacturing, a BA system in buildings, network monitoring in IT, etc.). Now that I have multiple concrete examples of engineering artifacts and operational processes, provided by Siemens and by the lab, I am starting to implement the platform that generates the knowledge graph for management protocol development.
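
To make the rule-as-graph idea more concrete, here is a minimal sketch of how a single protocol rule ("changes to document A require a review by person B") could be written into the graph. It assumes the Neo4j Go driver (v5 API) and a local test instance; the labels, the relationship type, and the connection details are illustrative placeholders, not a fixed schema.

```go
package main

import (
	"context"
	"log"

	"github.com/neo4j/neo4j-go-driver/v5/neo4j"
)

func main() {
	ctx := context.Background()

	// Connection details are placeholders for a local test instance.
	driver, err := neo4j.NewDriverWithContext(
		"neo4j://localhost:7687", neo4j.BasicAuth("neo4j", "password", ""))
	if err != nil {
		log.Fatal(err)
	}
	defer driver.Close(ctx)

	session := driver.NewSession(ctx, neo4j.SessionConfig{})
	defer session.Close(ctx)

	// One protocol rule as a tiny labeled property graph:
	// (Document A) -[:REQUIRES_REVIEW_BY]-> (Person B).
	// MERGE keeps the statement idempotent, so re-running it does not
	// duplicate the document, the reviewer, or the rule.
	_, err = session.ExecuteWrite(ctx, func(tx neo4j.ManagedTransaction) (any, error) {
		_, runErr := tx.Run(ctx, `
			MERGE (d:Document {name: $doc})
			MERGE (p:Person   {name: $reviewer})
			MERGE (d)-[:REQUIRES_REVIEW_BY]->(p)`,
			map[string]any{"doc": "document-A", "reviewer": "person-B"})
		return nil, runErr
	})
	if err != nil {
		log.Fatal(err)
	}
}
```

The appealing part of the property-graph form is that adding or relaxing a rule is just another write on the same graph, which is exactly the kind of flexibility I want while the protocol evolves.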

Resuming the development mode

As I claimed before, I chose labeled property graphs instead of RDF triplestores to represent the KGs, primarily because of the mature implementation of Neo4j and the clarity of Cypher compared to SPARQL. In addition, with the neosemantics plugin, a Neo4j instance can generate and consume RDF (e.g., Turtle) using a relatively straightforward mapping mechanism (https://jbarrasa.com/2016/06/07/importing-rdf-data-into-neo4j/), so it maintains compatibility with applications using Semantic Web or WoT concepts (like what Ganesh is doing).

The first building block of the platform is a cross-platform desktop application that functions as a wrapper for browsing files on the OS. The application starts monitoring a file when it is “imported” via drag-and-drop or simply by selecting it; depending on the file content, it can run a corresponding plugin that analyzes the data to help users annotate the file, and it also provides a (graphical) suggestion of what kind of management protocol should/can be used to track the file (a rough sketch of the plugin contract follows below). To build a prototype, I started using Wails (https://github.com/wailsapp/wails) to develop the frontend as a Web application with a Go backend, so the app can run on Windows/macOS/Linux.
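
Here is a rough Go sketch of what the backend-side plugin contract could look like; the interface, the type names, and the idea of returning Cypher fragments as suggestions are my own placeholders rather than a settled design.

```go
package artifact

// Suggestion is what a plugin proposes back to the UI: annotations for the
// imported file plus candidate management-protocol fragments (as Cypher).
type Suggestion struct {
	Annotations    map[string]string // e.g. free-form key/value annotations
	ProtocolCypher []string          // graph fragments to offer in the UI
}

// Plugin analyzes one imported file. The wrapper picks a plugin by file
// extension or content and runs it when the file is dropped into the app.
type Plugin interface {
	Supports(path string) bool
	Analyze(path string) (Suggestion, error)
}

// registry maps a plugin name to its implementation; the Wails frontend
// would list these and show which one handled an import.
var registry = map[string]Plugin{}

// Register is called from each plugin's init().
func Register(name string, p Plugin) { registry[name] = p }

// AnalyzeFile tries every registered plugin and returns the first result.
func AnalyzeFile(path string) (Suggestion, bool) {
	for _, p := range registry {
		if p.Supports(path) {
			if s, err := p.Analyze(path); err == nil {
				return s, true
			}
		}
	}
	return Suggestion{}, false
}
```

Keeping the contract this thin would let format-specific analyzers be added later without touching the wrapper itself.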

Summary of findings this week

  • Cypher: I gained a better understanding of its syntax and of the driver APIs in both Go and Python. I struggled with queries failing when a unique constraint was defined – the order in which nodes and relationships are created matters, and the ON CREATE / ON MATCH clauses of MERGE are what make the writes safe to repeat (a small sketch follows this list).
  • Plugins for Neo4j: I tested the neosemantics plugin to consume/import RDF on a local Docker instance. The plugin seems to track the Neo4j release channel closely and worked off the shelf.
  • “Online” backup of data from the Neo4j server: As I experiment a lot with the graph, I frequently need to roll the graph back to a previous state. With the neo4j-admin image, the database can be dumped into a single binary file; however, the server needs to be taken offline first. I spent quite a bit of time finding out how this could be done more effectively, but ended up chaining docker-plugin hooks to destroy-backup-start the Neo4j instance. In addition, I am thinking about how I should track the executed Cypher queries to provide pseudo-versioning of the database (a sketch of that idea also follows below).
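
On the Cypher point above, a minimal sketch of the MERGE pattern I mean (the constraint syntax is the Neo4j 3.5/4.x form, and the property names are made up): merging on the constrained property keeps repeated imports from violating the uniqueness constraint.

```go
package artifact

import (
	"context"

	"github.com/neo4j/neo4j-go-driver/v5/neo4j"
)

// Uniqueness constraint (Neo4j 3.5/4.x syntax). With this in place, a plain
// CREATE on an already-imported file name fails, which is the error I kept hitting.
const uniqueDocName = `CREATE CONSTRAINT ON (d:Document) ASSERT d.name IS UNIQUE`

// MERGE on the constrained property is the safe pattern: the node is created
// on the first import and matched afterwards, and the ON CREATE / ON MATCH
// clauses decide which properties to set in each case.
const upsertDoc = `
MERGE (d:Document {name: $name})
ON CREATE SET d.createdAt = timestamp()
ON MATCH  SET d.lastSeenAt = timestamp()
RETURN d`

// EnsureConstraint runs the constraint statement once at setup time.
func EnsureConstraint(ctx context.Context, sess neo4j.SessionWithContext) error {
	_, err := sess.Run(ctx, uniqueDocName, nil)
	return err
}

// UpsertDocument runs the idempotent write inside a managed transaction.
func UpsertDocument(ctx context.Context, sess neo4j.SessionWithContext, name string) error {
	_, err := sess.ExecuteWrite(ctx, func(tx neo4j.ManagedTransaction) (any, error) {
		_, runErr := tx.Run(ctx, upsertDoc, map[string]any{"name": name})
		return nil, runErr
	})
	return err
}
```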
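And on the pseudo-versioning idea from the last bullet, a naive sketch of what I have in mind: every statement is appended to a local log before it is executed, so replaying the log approximates earlier graph states. The file name and the JSON format are arbitrary choices for illustration.

```go
package artifact

import (
	"context"
	"encoding/json"
	"os"
	"time"

	"github.com/neo4j/neo4j-go-driver/v5/neo4j"
)

// logEntry is one executed statement; replaying the log in order rebuilds
// (approximately) the graph state, which is what I mean by pseudo-versioning.
type logEntry struct {
	At     time.Time      `json:"at"`
	Cypher string         `json:"cypher"`
	Params map[string]any `json:"params"`
}

// RunLogged appends the statement to queries.log before executing it.
func RunLogged(ctx context.Context, sess neo4j.SessionWithContext,
	cypher string, params map[string]any) error {

	f, err := os.OpenFile("queries.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()
	if err := json.NewEncoder(f).Encode(logEntry{time.Now(), cypher, params}); err != nil {
		return err
	}

	_, err = sess.ExecuteWrite(ctx, func(tx neo4j.ManagedTransaction) (any, error) {
		_, runErr := tx.Run(ctx, cypher, params)
		return nil, runErr
	})
	return err
}
```

This is of course not a real versioning scheme (concurrent writers, non-deterministic functions like timestamp(), and out-of-band changes are not covered), but it might make the experiments reproducible enough that I do not need a full dump for every rollback.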

Other stuff

  • I brought the Myo EMG band with me so I can refactor the EMG collector code and rejuvenate Felix's demo, including solving the Myo Bluetooth driver issue. This should also be a good exercise to boost my general motivation.
  • Next Wednesday, I will visit Ganesh in Zug to discuss a potential IoT paper.
  • I heard from my doctor today that she mailed me the Krankmeldung (sick note) for 17.4.–12.5. at 50%. This 50% was decided together with the doctor and through the counseling session this week, based on how I have been doing in the past two weeks, while still ensuring a safe landing back to a fairly normal routine. I will scan the document and forward it to Florence.
  • I received another import tax invoice from DHL. I want to ask DHL to remove my name from international deliveries to ICS at Rosenburgstrasse 30, but whose name should I tell them to use instead? Also, what should I do with the attached invoice?
