The Three Data Centre Migration and Product Rewrite Journey
Blacklist filter - a hybrid algorithm combining binary-tree prefix, suffix, and wildcard pattern matching with an LRU lookup cache, for blacklist filtering on the following dimensions: RoboName, Module, ErrorNo, and TextFilter. The optimised wildcard prefix and suffix binary-tree search keeps lookup time at O(log n) as more blacklist items are added. The interface is about a year old.
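The hybrid lookup above can be sketched roughly as follows. This is a minimal Python stand-in, not the product's implementation: exact entries go in a set, prefix patterns ("KSS*") and reversed suffix patterns ("*TIMEOUT") go in sorted lists searched by binary search, and an LRU cache sits on top. All class, pattern, and size names are illustrative assumptions.

```python
# Hypothetical sketch of the hybrid blacklist lookup (names are made up).
from bisect import bisect_right
from functools import lru_cache
from os.path import commonprefix

class BlacklistFilter:
    def __init__(self, patterns):
        self.exact, prefixes, suffixes = set(), [], []
        for p in patterns:
            if p.endswith("*"):
                prefixes.append(p[:-1])
            elif p.startswith("*"):
                suffixes.append(p[1:][::-1])  # reversed: suffix becomes a prefix
            else:
                self.exact.add(p)
        self.prefixes, self.suffixes = sorted(prefixes), sorted(suffixes)
        self.is_blacklisted = lru_cache(maxsize=4096)(self._lookup)  # LRU layer

    @staticmethod
    def _prefix_match(sorted_prefixes, text):
        # Binary search for the predecessor; if it is not a prefix of text,
        # any shorter match must be a prefix of their common part, so re-bisect.
        key, i = text, bisect_right(sorted_prefixes, text)
        while i > 0:
            cand = sorted_prefixes[i - 1]
            if text.startswith(cand):
                return True
            key = commonprefix([key, cand])
            i = bisect_right(sorted_prefixes, key, 0, i - 1)
        return False

    def _lookup(self, text):
        return (text in self.exact
                or self._prefix_match(self.prefixes, text)
                or self._prefix_match(self.suffixes, text[::-1]))

bl = BlacklistFilter(["A4711", "KSS*", "*TIMEOUT"])
```

Storing suffix patterns reversed means one prefix-search routine serves both cases, which is what keeps the whole lookup at a handful of O(log n) bisects.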
Dynamically detects all PLCs and robot configurations.
Enhanced detailed analysis of robot configurations, with SAP integration and tool mapping.
Multiple plants share the resources of a single system, with a year or two's worth of data queried in milliseconds, plus additional message-processing capacity.
After the rewrite and re-architecture, it initially fell over at 3,000 messages per day, and all reports timed out after 30 seconds.
It then fell over again due to latency problems when moving to the new, modern data centre: a 1 ms RTT became 10-30 ms. Optimising the MQ processing code hit the point of diminishing returns, as it was already near its sweet spot of operation, so I rewrote it with threading, an impedance matcher, and predictive prefetching to deal with the latency.
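The shape of that latency fix can be sketched like this. It is a simplified Python illustration, not the real C# MQ code: a reader thread prefetches batches over the high-RTT link into a bounded local queue (the impedance matcher), so the consumer never waits a full round trip per message. `fetch_batch` is a made-up stand-in for the real MQ client call.

```python
# Sketch only: prefetching reader thread + bounded buffer hiding RTT.
import queue
import threading
import time

def fetch_batch(cursor, size=100):
    # Placeholder for an MQ fetch costing one 10-30 ms round trip.
    time.sleep(0.01)
    return [f"msg-{cursor + i}" for i in range(size)], cursor + size

def prefetch_reader(buf, batches=5):
    cursor = 0
    for _ in range(batches):
        batch, cursor = fetch_batch(cursor)
        for msg in batch:
            buf.put(msg)          # blocks if the consumer falls behind
    buf.put(None)                 # sentinel: end of stream

def process_all():
    buf = queue.Queue(maxsize=1000)   # bounded buffer matches the two rates
    threading.Thread(target=prefetch_reader, args=(buf,), daemon=True).start()
    handled = 0
    while (msg := buf.get()) is not None:
        handled += 1                  # real code would parse/aggregate here
    return handled

print(process_all())   # 500
```

The bounded queue is the key design choice: it decouples the per-message processing rate from the per-batch network rate, so one round trip amortises over a whole batch.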
A custom index into the data, not native to the database or any plugin, to enhance and speed up data retrieval.
Change Logs
Message clean-up tools, because no one abided by the naming conventions across all the modules. These allow remapping messages to a new message name, under which they are then aggregated.
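In essence the remapping is a lookup table folding legacy names onto the canonical name they should aggregate under; a tiny sketch, with entirely made-up message names:

```python
# Illustrative remapping table; the entries are invented examples.
remap = {
    "GripperErr_old": "GripperError",
    "GRIP_ERR":       "GripperError",
}

def canonical(name):
    # Unmapped names pass through unchanged.
    return remap.get(name, name)

print([canonical(n) for n in ["GRIP_ERR", "GripperError", "AxisFault"]])
# ['GripperError', 'GripperError', 'AxisFault']
```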
New feature 'Detail Logs', with continuous scrolling up or down through the logs. Logs can be filtered by many different parameters, using a custom algorithm that outperforms both traditional database paging with OFFSET/LIMIT and KUKA IoT.
Generically supports FeNr, FeText, Module, and Robot filters.
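One standard way to beat OFFSET/LIMIT paging for continuous scrolling is keyset (seek) pagination, where each page continues from the last seen (timestamp, id) key, so the database never re-scans the skipped rows. A minimal sketch with an assumed schema (the real product's tables and algorithm may differ):

```python
# Keyset pagination sketch; table and column names are assumptions.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE logs (id INTEGER PRIMARY KEY, ts INTEGER, text TEXT)")
con.executemany("INSERT INTO logs (ts, text) VALUES (?, ?)",
                [(i // 3, f"FeNr {i}") for i in range(10)])  # duplicate timestamps

def page_after(last_ts, last_id, limit=4):
    # Row-value comparison on (ts, id) keeps ordering stable even when
    # several rows share the same second.
    return con.execute(
        "SELECT id, ts, text FROM logs "
        "WHERE (ts, id) > (?, ?) ORDER BY ts, id LIMIT ?",
        (last_ts, last_id, limit)).fetchall()

first = page_after(-1, -1)                        # ids 1..4
second = page_after(first[-1][1], first[-1][0])   # ids 5..8, no OFFSET scan
```

The (ts, id) tuple key is also what makes to-the-second filtering unambiguous when many messages land in the same second.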
Features a chronological browser for robot-cell debugging, showing the messages leading up to a crash across all robots and PLCs that may have been involved.
On the LAN status and all other reporting pages, there is an inline popup that lets you select a subset of the robots in the report and view their messages chronologically, for debugging the issue at hand. As technicians started looking for the messages leading up to a line or cell crash, this gives them a narrow view of all the relevant information quickly and easily.
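The core of such a chronological view is merging several per-device, time-ordered streams into one timeline. A minimal sketch (device names and messages are invented):

```python
# Merge per-robot/PLC message streams into one chronological timeline.
import heapq

robot_a = [(100, "RobotA", "servo warning"), (130, "RobotA", "E-stop")]
robot_b = [(105, "RobotB", "gripper fault")]
plc_1   = [(110, "PLC1", "interlock open"), (129, "PLC1", "line halt")]

# heapq.merge interleaves already-sorted streams lazily, without
# loading or re-sorting everything in memory.
timeline = list(heapq.merge(robot_a, robot_b, plc_1))
```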
Many new features:
Filtering by Module
Filtering down to the second, even within the same second
Filtering by FeNr, FilterText, or Priority
Duration reports for the total time an alarm was raised during the reporting period
I rewrote Priority, because the existing design no longer aligned with reality or worked; the message format had changed years before I started.
There are pages that allow every error message to be classified into a priority at different levels.
A complex notification system: each user has their own customisable filters, similar to the blacklist filters, and can create notification groups. Each group holds multiple filters, with different aggregation modes applied before a notification is triggered.
Aggregation is based on Robot and ErrorMessage, giving four leading permutations, since a lot of the time there are sets of notifications where you only want to be notified on the first occurrence of any of them.
A 'My Notifications' menu item: a daily summary of triggered notifications, sent out by email to the specific user/operator.
Further, notifications by design don't have to use aggregated or remapped messages; they can filter the raw message text directly, allowing one to look for specific errors and log matching conditions at record level, for every record ingested. So if 50 different forms of a message are being aggregated into one message for reporting, you can set up notification filters that match any of those 50 messages specifically, before aggregation, on a per-user daily notification basis.
This is done by design, to allow enhanced debugging and warning notification before a crash, or to spot a known chain of events leading up to a crash in the details.
This would form the precursor pattern for building the next chain-of-events/AI failure-detection algorithms.
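The per-user groups with raw-text filters and a first-occurrence mode described above can be sketched like this. The class, mode names, and patterns are illustrative assumptions, not the product's actual design:

```python
# Hedged sketch: a notification group holding raw-text filters with a
# "first occurrence" aggregation mode (all names invented).
import re

class NotificationGroup:
    def __init__(self, patterns, mode="first"):
        self.filters = [re.compile(p) for p in patterns]
        self.mode = mode
        self.triggered = False    # reset at the start of each daily period

    def feed(self, raw_text):
        """Return True if this raw record should trigger a notification."""
        if not any(f.search(raw_text) for f in self.filters):
            return False
        if self.mode == "first" and self.triggered:
            return False          # already notified this period: swallow it
        self.triggered = True
        return True

grp = NotificationGroup([r"servo .* overload", r"E-?stop"])
hits = [grp.feed(m) for m in
        ["axis ok", "servo A3 overload", "Estop pressed", "servo A4 overload"]]
# hits: only the first matching record triggers
```

Because the filters run against the raw text, each of the "50 forms" of an aggregated message remains individually matchable here.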
The logic implementing all notifications was optimised to the extreme:
1. All user notifications are aggregated down and classified into multiple different types of conditional comparisons, taking their group rules and wildcards into account.
2. The smallest possible number of unique comparisons is evaluated against each incoming record.
3. All matches are then mapped back to their original messages before aggregation, to reduce the computational load.
4. All messages are aggregated down according to each user's group rules, to generate the notifications and the daily summary page.
5. Email notifications are sent.
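Steps 1-3 above can be sketched as follows: many users' filters often reduce to the same predicate, so each unique predicate is evaluated once per incoming record and the result fanned back out to its owners. User names, predicate kinds, and patterns here are invented examples.

```python
# Sketch of dedup + fan-out: unique comparisons evaluated once per record.
from collections import defaultdict

user_filters = {
    "alice": [("prefix", "KSS")],
    "bob":   [("prefix", "KSS"), ("exact", "A4711")],
}

# Step 1: classify and dedupe - each unique predicate keeps its owner list.
owners = defaultdict(list)
for user, preds in user_filters.items():
    for pred in preds:
        owners[pred].append(user)

def match(pred, text):
    kind, arg = pred
    return text.startswith(arg) if kind == "prefix" else text == arg

def users_notified(record):
    # Step 2: two unique comparisons here, not three per-user ones.
    hit = set()
    for pred, who in owners.items():
        if match(pred, record):
            hit.update(who)       # Step 3: fan the result back to the owners
    return sorted(hit)
```

With thousands of users sharing a handful of common filters, this collapses the per-record work to the number of distinct predicates rather than the number of subscriptions.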
Many reworks and bug fixes throughout the product.
Akkumeldung support, for further debugging and clarification of what those Akkumeldungen messages are.
Duration reports were implemented last, and proved quite a challenge to implement, as I needed to change the structure of the core reporting to accommodate time and to compute the lead-in and lead-out remaining-time sections of a report period. An additional tooltip for each report item gives a clear breakdown of the certain and uncertain parts of the time, similar to farm track, showing which parts are rounded up to the end of the day. This became tricky too, as you need to know the Add/Remove event ordering and take it into account when resuming aggregation; in the reporting it becomes an interesting relay event, plus there is the question of how to deal with incorrectly paired messages and events for the best possible reporting outcome. But it is all communicated in the tooltip.
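A simplified sketch of that duration computation: raise/clear events are paired into intervals and clipped to the report window; an interval still open at the window end contributes lead-out time, flagged as the uncertain part. Event names, the pairing policy for mismatched events, and what counts as "uncertain" are assumptions for illustration.

```python
# Sketch: clip alarm intervals to a report window; flag the still-open
# tail as uncertain. Pairing policy for mis-paired events is assumed.
def alarm_durations(events, win_start, win_end):
    """Return (total, uncertain) seconds of alarm-raised time in the window."""
    total, uncertain, open_at = 0, 0, None
    for ts, kind in sorted(events):
        if kind == "ADD" and open_at is None:
            open_at = ts                       # ignore duplicate ADDs
        elif kind == "REMOVE" and open_at is not None:
            lo, hi = max(open_at, win_start), min(ts, win_end)
            total += max(0, hi - lo)           # lead-in clipped to win_start
            open_at = None                     # ignore REMOVE without ADD
    if open_at is not None:                    # lead-out: no clear seen yet
        span = max(0, win_end - max(open_at, win_start))
        total += span
        uncertain += span                      # surfaced in the tooltip
    return total, uncertain

total, uncertain = alarm_durations(
    [(90, "ADD"), (120, "REMOVE"), (150, "ADD")], win_start=100, win_end=200)
# 20s of clipped lead-in time plus 50s still open at the window end
```

Sorting the events first is what makes the Add/Remove ordering deterministic; the remaining ambiguity (unpaired events) is exactly what the tooltip's certain/uncertain split reports.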
Deployment pipelines into the three different data centres the code was migrated through for integration and production. Issues with a faulty network switch in the middle data centre were used as the opportunity to rewrite and test all of the C# MQ message processing, while everything was set up and rebuilt in the next data centre.
This was after separating out and finishing the migration into the first of the three data centres, the one due to be sundowned, which had yet to be completed.
An additional one was set up for the FMO, so there were four rows of deployments for a couple of months, after which all the old servers were sundowned.
Project Plan:
Reworking this in another editor, as Grammarly messes up and hangs applications.