A: This happens because Infosistema DMM needs write access to a folder in the file system of the machine where it is installed. To solve this error, check that the path in the site property "CustomSettingsPath" exists in the file system and that the IIS user has write privileges on it. The value can be changed in the settings page of the component. If this error appears on an OutSystems Cloud environment, the value of the setting must be cleared and the "Is Cloud" checkbox activated.
A: Infosistema DMM connects directly to the database using connection strings and standard .NET IDbConnection implementations: System.Data.SqlClient for SQL Server or Oracle.ManagedDataAccess.Client for Oracle. This error is fired when the produced connection string has an invalid property or when connectivity issues prevent Infosistema DMM from connecting to the database. To troubleshoot this issue, try to connect directly to the database from other applications, like SQL Server Management Studio or SQL Developer, using the exact same settings. Also, try to reach the database server from the machine where Infosistema DMM is running. In Service Center, go to Administration > Database Connections and try to create the connection there.
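For reference, a typical SQL Server connection string of the kind DMM builds looks like the following (illustrative values only; replace the server, database, and credentials with your own):
Server=myserver;Database=MyOutSystemsDB;User Id=dmm_user;Password=<password>;
If this exact string fails in SQL Server Management Studio as well, the problem is connectivity or credentials rather than DMM itself.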
When using the OutSystems Cloud PaaS, although you may be able to reach the databases from your on-premises servers, there is no network path from one cloud server to another, so you will not be able to set up connections this way and will have to use the Runtime Connection.
A: This means that the connection you're using does not have the correct permissions to execute migrations - namely, the INSERT permission on the OSSYS_USER table. Database permissions must follow the documentation in the user manual.
A: This means that the connection you're using does not have the correct permissions to execute migrations - namely, the ALTER permission on some or all of the OSUSR_* tables. Database permissions must follow the documentation in the user manual.
If it is not possible to have elevated privileges in the database connections, you must use the Data Append mode.
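As an illustration only (SQL Server syntax; "dmm_migration" is a hypothetical login name, and the user manual remains the authoritative reference for the full set of required permissions), the grants for the two errors above look like this:
/*Hypothetical example: grant the privileges mentioned above*/
GRANT INSERT ON dbo.OSSYS_USER TO dmm_migration;
GRANT ALTER ON dbo.OSUSR_ABC_CUSTOMER TO dmm_migration; /*repeat for each OSUSR_* table in the migration*/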
A: Infosistema DMM does not map all the OSSYS tables, just the most common ones. So, you'll see this error if you have an entity that references an unmapped OSSYS table for some reason. Contact Infosistema Support so that we can add the missing mapper to the OSSYS table.
A: This means that during the migration a unique-constraint violation was raised on the destination: DMM tried to write a record that would create a duplicate. This can happen if the destination table already has data and the union of the existing data with the data being imported contains duplicates with respect to the unique index. It can also appear if the source data is already inconsistent, i.e. the source itself has duplicated records. Finally, it can be caused when one of the fields of the unique index is a Foreign Key to an OSSYS_ table that has unmapped records.
You can try the User Mapped Table feature to overcome this issue.
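To check the source or destination for duplicates before retrying, a query along these lines can help (OSUSR_XXX_MYENTITY and UNIQUE_COL are placeholders for your physical table and the unique-index column):
/*Find values that would violate a unique index*/
select UNIQUE_COL, count(*) from OSUSR_XXX_MYENTITY group by UNIQUE_COL having count(*) > 1;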
A: Static entities can be looked at as an enumerator on steroids. OutSystems handles them in three distinct tables:
● OSSYS_Entity, where the entity meta information is saved
● OSSYS_Entity_Record, where the real IDs of the static entities are saved
● OSUSR_XXX_, the physical table where the values are written.
The connection is then made between the Data_ID field of OSSYS_Entity_Record and the ID field of the OSUSR_XXX_ table.
As this connection is managed in the eSpace itself, it can produce some dramatic problems if you try to manually change some of the values directly in the database.
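If you just want to inspect (not change) that link, a read-only query like this sketch can help - assuming ossys_Entity_Record references its entity through an Entity_Id column (MyStaticEntity and OSUSR_XXX_MYSTATIC are placeholders):
/*Inspect how static-entity records map to the physical table*/
select er.DATA_ID, t.*
from ossys_Entity e
inner join ossys_Entity_Record er on er.ENTITY_ID = e.ID
inner join OSUSR_XXX_MYSTATIC t on t.ID = er.DATA_ID
where e.NAME = 'MyStaticEntity';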
A: OutSystems uses an under-the-hood surrogate model to abstract the entity's underlying database structure.
This model is responsible for connecting the physical data tables to the application environment.
Pro Tip: If you wish to see how they are connected, go to your database and poke around: look into the OSSYS_Entity and OSSYS_Entity_Attr tables, and there you'll find references to some familiar names from your entities.
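For example, a query like the following (assuming ossys_Entity_Attr links to ossys_Entity through an Entity_Id column) lists an entity, its physical table, and its attributes:
/*List an entity, its physical table and its attributes*/
select e.NAME as ENTITY_NAME, e.PHYSICAL_TABLE_NAME, a.NAME as ATTR_NAME, a.TYPE
from ossys_Entity e
inner join ossys_Entity_Attr a on a.ENTITY_ID = e.ID
where e.NAME = 'MyEntity' and e.IS_ACTIVE = 1;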
A: When one entity has a relationship with another, their supporting physical tables will have Foreign Key constraints applied.
However, OutSystems also keeps track of this relation in its metadata. If you look into the contents of ossys_entity_attr, the column 'TYPE' holds the internal datatype of each attribute.
When the attribute is an FK to another table, instead of a more readable value it will contain something that follows the pattern "bt*".
Extracting the values will give you the SS_KEY value of the corresponding Espace and the SS_KEY value of the corresponding Entity, thus allowing one to seamlessly navigate to the target entity of the relation.
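A quick way to see these encoded FK types (same Entity_Id link assumption as above):
/*List attributes whose TYPE encodes a reference to another entity*/
select e.NAME as ENTITY_NAME, a.NAME as ATTR_NAME, a.TYPE
from ossys_Entity_Attr a
inner join ossys_Entity e on e.ID = a.ENTITY_ID
where a.TYPE like 'bt%';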
A: Infosistema DMM Scrambling and Anonymization is the common name of an engine that can be applied either to data at rest or during a migration scenario.
This engine is capable of changing the information in a non-reversible way and is used mostly to protect sensitive data. The engine has three features that can be used:
● Anonymization - Simply generates random garbled text without any meaning.
● Scrambling - Produces random data based on a pattern detection of the original data, while keeping the semantic value of that information. Emails will look like emails, dates will look like dates and so on.
● Ignoring - Simply removes the information so it won't be accessible anymore.
A: In some OutSystems installation scenarios, the database catalog where entity information is placed is not the same one where OutSystems meta information is located.
In these situations, Infosistema DMM needs to discover where the entity’s physical table is in order to produce its execution plan and commands.
The proper way to do it is to look into the OutSystems metamodel, and that's exactly what Infosistema DMM does.
After getting a pointer to the entity and respective eSpace in the OSSYS_ENTITY and OSSYS_ESPACE tables, the DBCatalog_Id column in the OSSYS_ESPACE table will have an FK to the OSSYS_DBCATALOG table.
In environments that use more than one catalog, this table will tell us the name of the other catalog that should be used as a prefix in all subsequent queries or commands against the intended entity.
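Sketching that lookup as a query (assuming the OSSYS_DBCATALOG table exposes the catalog name in a NAME column):
/*Find the catalog that holds an entity's physical table*/
select e.NAME as ENTITY_NAME, e.PHYSICAL_TABLE_NAME, c.NAME as CATALOG_NAME
from ossys_Entity e
inner join ossys_Espace s on s.ID = e.ESPACE_ID
inner join ossys_DBCatalog c on c.ID = s.DBCATALOG_ID
where e.NAME = 'MyEntity';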
A: The OSUSR_* tables are the real entity data tables. Their names are prefixed with a 3-letter code that is unique to each eSpace, so all entities from the same eSpace are stored in tables with the same prefix.
These are the least disruptive tables in terms of OutSystems operations, and manually editing them causes the least impact, as they contain only customer data.
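For example, to list the physical tables (and see the shared prefix) of a given eSpace:
/*List the physical tables of one eSpace*/
select NAME, PHYSICAL_TABLE_NAME from ossys_Entity
where ESPACE_ID = (select ID from ossys_Espace where NAME = 'MyEspace') and IS_ACTIVE = 1;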
A: OSLOG* tables are where all the application logs are maintained.
The reason why there are so many OSLOG_* tables is that logs are kept partitioned with a round-robin algorithm.
A: To detect a scramble pattern, DMM must have enough records to identify the pattern it will use. This error message is shown when there is not enough data (records) with information in the selected field to scramble. Usually for most patterns a minimum of 10 records is needed.
A: Guarantee that in the Settings option the folder paths are set as required (see manual "Installation and Setup" page).
If you are using multiple front-ends, this error message, as well as the EULA pop-up message, may appear when the load balancer directs the call to a server that doesn't have the setup done yet. In this case, try using a direct URL to each server, bypassing the load balancer, and do the same setup on all of them. After the correct setup, DMM will work without any problem even when using the URL through the load balancer.
A: The file size error message comes from IIS and reflects a setting configured on the IIS server. See, for example: https://www.inflectra.com/support/knowledgebase/kb306.aspx
A: When possible, DMM updates information on the destination rather than duplicating data.
However, if 2 migrations are made for the same entity, both using Data Append, the data on that table will be duplicated. There are 3 exceptions:
1) If the table has a unique column that can be used as a key for the User Mapped Table functionality (and it is configured that way);
2) If the table is a Static Entity;
3) If the table is a system table (like OSSYS_USER, for example).
In these 3 cases the 2nd migration will not insert any value in the destination.
You can try the Delta Migration feature to avoid duplicating data even when using the Runtime Connection.
A: It is possible to identify whether any DMM executions are pending in OutSystems using the following queries:
/*Get pending processes in DMM*/
select * from ossys_BPM_Event_Queue where Espace_Id= (select id from ossys_Espace where NAME='DMM');
/*Get physical table name of the entities*/
/*This will retrieve 3 records*/
select NAME, PHYSICAL_TABLE_NAME from ossys_Entity where NAME in ('ExecutionQueue','ExecutionNormalQueue','ExecutionParallelQueue') and ESPACE_ID=(select id from ossys_Espace where NAME='DMM_Wrapper') and IS_ACTIVE=1;
/*Get not finished executions*/
/*Use physical name obtained above for 'ExecutionQueue' and replace the table name in the from statement*/
select * from OSUSR_XXX_ExecutionQueue where CURRENTEXECUTIONSTATUSID in (1,2) or CURRENTEXECUTIONSTATUSID is null;
/*Use physical name obtained above for 'ExecutionNormalQueue' and replace the table name in the from statement*/
select * from OSUSR_XXX_ExecutionNormalQueue where CURRENTEXECUTIONSTATUSID in (1,2) or CURRENTEXECUTIONSTATUSID is null;
/*Use physical name obtained above for 'ExecutionParallelQueue' and replace the table name in the from statement*/
select * from OSUSR_XXX_ExecutionParallelQueue where CURRENTEXECUTIONSTATUSID in (1,2) or CURRENTEXECUTIONSTATUSID is null;
A CURRENTEXECUTIONSTATUSID value of 999 means "Abnormal Error"; that execution is no longer running.
A: This error may appear when connectivity is lost between the server and the database. Usually redoing the migration fixes the issue, or you can create a new migration for the entities that were not moved/affected.
A: The Delete functionality in DMM v5.0.5 does not have an interface for selecting tenants. What you can do, knowing the tenant ID, is use that column and value in the filter option of the Delete functionality (for example, TENANT_ID = 42) so that you delete only the records of the specific tenant you wish.
A: That user is used just for the UI/OutSystems front-end of DMM. The DMM engine uses the connection to the database, either a direct connection or the platform's own Runtime Connection, so the authentication user in the DMM front-end has no impact on the copy or delete functionalities.
A: Yes, the Delete functionality is smart enough to execute the deletions in inverse order so as to respect the referential integrity of the constraints. FYI, the Delete functionality will set FKs to null before deleting the PK.
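Conceptually, the order of operations is equivalent to the following sketch (OSUSR_XXX_CHILD, OSUSR_XXX_PARENT, and the column names are placeholders, not what DMM literally executes):
/*Illustrative only: first break the references, then delete the target row*/
update OSUSR_XXX_CHILD set PARENT_ID = null where PARENT_ID = 42;
delete from OSUSR_XXX_PARENT where ID = 42;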
A: When starting a BPT migration, DMM v5.3.3 resets the "events tables" for the entities, so there is no duplication if you do multiple migrations - the entities triggered by BPT have a directly related trigger and a corresponding OSEVT entity where all the entity's events are subscribed.
In the following BPM tables DMM will append information:
● OSSYS_BPM_PROCESS
● OSSYS_BPM_PROCESS_INPUT
● OSSYS_BPM_PROCESS_OUTPUT
● OSSYS_BPM_ACTIVITY
● OSSYS_BPM_ACTIVITY_OUTPUT
For the email tables, DMM merges with the existing information.
A: Only SQL Server authentication is supported for the connections (or Oracle or MySQL authentication, depending on the underlying database).
A: To troubleshoot an issue, search the DMM logs for the keyword "Exception". If an error occurred, there will be an Exception in the log together with the ID of the instruction where it failed, so you know what operation it was executing, on which entity, etc.
The instruction being executed appears in the log as a line like the following:
( --------------- Current instruction id: 0 --------------- )
[...]
A: You can change the default chunk size (20,000 records) so DMM starts with a smaller chunk. You can change this in the DMM_Wrapper module, site property ChunkSize.
A: Access the DMM_Wrapper module and check whether the timer Timer_GetDashboardValues is running.
A: Decrease the timeout of Timer_RunParallelWorker so you don't have to wait very long after a timeout (be aware this means you may no longer be able to migrate data that requires more time to fetch, for example). Set the site property IsToStopExecutionsTimer to True and the migration worker timer will stop at the next "chunk" cycle it tries to process - remember to set it back to False before launching another migration!
A: Here is a list of the timers in the component, for each module:
DMM
● GetAppVersion - On install, updates the DMM version.
● Timer_LogBuilding - Creates the execution logs. It doesn't run regularly; its objective is to be called asynchronously at the end of each execution.
● Timer_RunExecutionParallel - Execution control timer, called to execute the processes.
● Timer_RunParallelWorker - Worker timer, calls the DLL.
DMM_Wrapper
● AddRuntimeConnection - On the component's installation, checks if a Runtime Connection already exists; if not, it is created.
● BackupFiles - On the DMM installation, backs up any configuration files that might exist.
● ClearAllDataExecutionParallel - For debugging purposes. It is inactive.
● ClearDataExecutionParallel - Internal maintenance; deletes the execution internal data. It is configured to run at 1am.
● ClearMapperWorker - Internal maintenance; clears old execution general config data when a pre-set number of days has passed.
● SetDashboardValues - Calls the timer "Timer_GetDashboardValues". Runs at 23:30. [removed in v6.1.0]
● SetSettingsFile - On DMM's installation, validates whether the install settings files exist; if not, it creates them.
● Timer_GetDashboardValues - Gets the dashboard values of the OutSystems database that are visible when you enter DMM. Set to run at 23:30.
● Timer_MoveToLHBinaryTable
IPA (all inactive by default)
● CheckSLA - Checks if IPA Transaction Flows are following the defined SLAs; if not, sends the notifications.
● GetEnvironmentReportTimer - Inactive for now. Gets anonymized data from OutSystems entities (number of columns, number of lines, size, etc.). Will be used in the predictive features of IPA.
● SendToAnalyticsTimer - Sends the data points collected by the IPA probes to the analytics server. Only non-business, non-confidential information is collected, secured and transmitted.
A: When choosing the Application level, all BPT will be migrated following the rule:
- [ossys_BPM_Process_Definition].[Espace_Id] matches an espace ID from an application in the migration configuration
AND [ossys_BPM_Process_Definition].[Process_Entity_Id] is NULL;
OR
- [ossys_BPM_Process_Definition].[Process_Entity_Id] matches the entity ID of an entity in the migration configuration.
When choosing the Module level, all BPT will be migrated following the rule:
- [ossys_BPM_Process_Definition].[Process_Entity_Id] matches the entity ID of an entity in the migration configuration.
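As a sketch, the Application-level rule can be expressed as a query like this (the ID lists are placeholders for your migration configuration; at the Module level only the second predicate applies):
/*Application level: processes selected for migration*/
select * from ossys_BPM_Process_Definition
where (ESPACE_ID in (/*espace IDs in the configuration*/ 1, 2) and PROCESS_ENTITY_ID is null)
   or PROCESS_ENTITY_ID in (/*entity IDs in the configuration*/ 10, 11);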
A: Check the Service Center status and logs of these 3 timers:
- Timer_RunParallelWorker
- Timer_RunExecutionParallel
- Timer_LogBuilding
Confirm the OutSystems Scheduler Service in the Windows Services app is running.
A: For direct database connections (SQL Server), there is a flag in the connection configuration in the Settings option, so if the database supports it, DMM can use it. For REST-type connections, TLS is used.
With TLS we are using the JDBC connection flag, meaning that the answer depends on the certificate and the database.
Sharing below a couple of links on the subject:
A: DMM allows you to configure, in a Migration or Export, "special actions" to be applied to fields of the data records, so that the data in the destination is transformed and no longer equal to the origin data. Currently (DMM v6) these actions include:
● Scrambling the field - creating new data, not equal to the source data;
● Anonymizing - replacing the field data with a hash string;
● Replacing the data in the field with a specific value - applied to all records;
● Ignoring the field - the rest of the record is copied, but NULL is inserted in the field.
Besides these explicit configurations, in the BPT migration functionality DMM also checks whether a dateTime is the default "1900-01-01 00:00:00". If it is, DMM writes it as-is to the destination; if it isn't, DMM checks the source and destination timezones and converts the data accordingly.
A: In the following situation:
● a table with a PK that is an FK to an OSSYS table;
● the OSSYS table has both inactive and active records;
by default DMM doesn't migrate the table records whose PK points to an OSSYS table record that is inactive: it shows an error and migrates no records at all (it would otherwise try to insert null into the PK), not even the ones that have active records in the OSSYS table.
The flag SynchronizeOSSYSInactiveEntries allows the records that are "right" (those whose FK exists in the OSSYS table) to be migrated.
The flag defaults to FALSE.
A: A problem occurred in the OutSystems environment when launching the timer: the timer did not actually launch, but it was marked as running in the OutSystems platform system database.
If you are in an OutSystems PaaS environment, you'll have to open a ticket with OutSystems support to check and solve the issue.
If you have direct access to the OutSystems database system tables, you can reset the timer's state to "not running" with the following instruction:
update ossys_Cyclic_Job_Shared set IS_RUNNING_SINCE = '' where META_CYCLIC_JOB_ID = (select ID from ossys_Meta_Cyclic_Job where name = 'Timer_RunParallelWorker')
1) Lock the BPT process before migrating -> tasks will be created in the event queue, but the process will not be launched due to the lock.
2) Deactivate the timer that sends reminder communications, as it would also launch BPT processes with the newly migrated data (which we also want to avoid, to be absolutely safe).
3) Migrate the data.
4) Clean the event queue so no processes are launched (visible in the activity instance queue) when unlocking the BPT process.
5) Unlock the BPT process so communications can be sent again.
DMM's "Export to Database" feature currently works with a SQL Server driver, meaning you can use it to export data into an Azure SQL database, which uses the same driver. DMM will replicate the structured relational tables (each table representing an OutSystems business entity you are exporting), automatically creating and adapting the tables to store the OutSystems entity fields.
DMM requires a direct database connection with enough privileges both to create/change the tables and to copy data by ID (Alter Table privileges). With these privileges, on each Export execution DMM matches the record IDs of the OutSystems source with those in the destination and updates the records as needed - so by default you already get a delta, in that all changes to the source records (except deletion of the record itself) are replicated to the destination on the next export execution. Since this is already a current capability of DMM, a separate "delta feature" for "Export to Database" is not in the product backlog.
Yes. It's not a direct process like the Migration, but you can see details of the Bulk Import process in this article, which describes how DMM can help with that challenge!
You can also see an example of how that data import from external sources works in the 3rd module of our free DMM Foundation online course.
Finally, in the Import feature section of the DMM manual you can also find some details to take into account when using the feature.
This feature (or process, since it's not a single feature), as explained in the article above, is currently free to use in DMM - your client will not need to acquire a paid subscription and can use DMM from the OutSystems Forge as-is to execute the process.
We recommend acquiring a paid subscription to have access to our product support, of course, but do what works best for you and your client.
DMM has some processes to remove old data. You can search for the timers ClearDataExecutionParallel and ClearMapperWorker and check if they are scheduled to run. You can also run them manually and check if there are any errors in the Service Center Error Logs.
Also, you can check the values of the site properties DaysToClearData and DaysToClearMapper in the DMM_Wrapper module, since these are used to define which execution data should be cleaned up.
Log data can be deleted in the Execution History list (there's a specific button on each execution to delete its log data), and for Delta/Incremental executions the memory of the mappings can be deleted in the Settings menu, under the Delta History option.
There was a problem with the installation of the application. Execute the Publish operation directly on the DMM module through Service Center.
DMM uses a cache to keep the entity metadata. You need to change the site property CacheMinutesToKeepAlive in the DMM_Wrapper module. The default value is 60 minutes. You can change it to 1 so that DMM keeps the information for only 1 minute and reloads the OutSystems metadata.
At the end of the execution, DMM picks up the log records stored in internal tables and creates the log file that is available for download. It may happen that an issue aborts the execution process in this step (for example, a memory issue in the infrastructure) and the log file is not created.
The log records are still in DMM's internal log entity, so it is possible to access them - the log file simply wasn't generated because the process died before it could create it. Currently DMM does not have the log-file creation step decoupled from the execution process in a way that would let clients run just that last part, so the only way to access the log records is to get them directly from the database.
To get the records from the database you will need to:
1) Find the export history ID for that specific execution: in the DMM Execution Status page, hover over the export execution and find the History ID in the URL;
2) Replace the ID found in the previous step in the following query (also replacing the table names in brackets {} with the corresponding physical table names of your specific DMM installation) and execute it:
SELECT eqpl.TextLog
FROM {ExecutionQueueParallelLogs} eqpl
INNER JOIN {ExecutionQueueParallel} eqp ON eqpl.ExecutionQueueParallelId = eqp.Id
WHERE eqp.MainExecutionTrackerId =
    (SELECT MainExecutionTrackerId
     FROM {LogHistory}
     WHERE Id = <ID_IN_URL>)
ORDER BY eqpl.Id
The query gives all the DMM export log lines for this execution.
When you see the above error message, it means the validation process was unable to confirm that the timer is working. Your next action should be to follow the instructions and check whether the timer is running in Service Center. If it is, you can ignore the DMM message. If it is not, you can start it manually in Service Center, and DMM should automatically recover the execution from where it previously stopped and continue.
Download the configuration and verify if the fields related to connections (which may be one of the following attributes: SourceDBConnection, DestinationDBConnection, or DBConnection) are correctly set.
The names of these fields must match the connections configured in the environment, under Management > Settings.
If the query used by DMM to retrieve data is timing out, even with a small chunk, you can increase the DBCommandTimeoutSeconds to allow the query more time to execute.
Access the DMM_Wrapper Module in the environment > Locate the Site Property DBCommandTimeoutSeconds > Increase the value (for example, set it to 120 seconds) > Save the changes.
By increasing this value, the query should have enough time to complete successfully, and the export will run without issues.
If the timeout occurs, DMM will split the chunk and try the operation again later.
If the export still doesn't complete, check if the process was manually stopped on the Execution Status page (Stop Export).
If you are activating one of the new licenses from the current pricing structure on an older version of DMM, this version will not recognize the license correctly. To resolve this issue, download the latest version of DMM from the OutSystems Forge and try activating the license again.
The "Entity table size" test does not validate the number of records but rather the storage size occupied in the database.
The test that counts the number of records is "Entity table records number." You can find more details about this test in our online documentation.
In the executed comparison, the record count test completed successfully:
09:38:02.048 : DataComparison, "1) Entity table records number" with Result: Success, took 0 Seconds
The "Entity table size" test may return different values even when the records are identical due to factors such as indexes, fragmentation, page allocation, and other internal database elements. These can lead to discrepancies between the source and target.