FAQs

Q: After installation of the component from the Forge, a message appears stating, "There are files missing! Please make sure that IIS user has “read” and “write” permissions on the folder," and nothing can be done with the tool.

A: This happens because Infosistema DMM needs write access to a folder in the file system of the machine where it is installed. To solve this error, check that the path in the site property "CustomSettingsPath" exists in the file system and that the IIS user has write privileges on it. The value can be changed in the settings page of the component. If this error appears on an OutSystems Cloud environment, the value of the setting must be cleared and the checkbox "Is Cloud" activated.


Q: I set up a new connection and received the message, "The Connection was tested and failed the test but saved with success."

A: Infosistema DMM connects directly to the database using connection strings and standard .NET IDbConnection implementations: System.Data.SqlClient for SQL Server or Oracle.ManagedDataAccess.Client for Oracle. This error is fired when the produced connection string has some invalid property or when connectivity issues prevent Infosistema DMM from connecting to the database. To troubleshoot this issue, try to connect directly to the database from other applications, like SQL Server Management Studio or SQL Developer, using the exact same settings. Also, try to reach the database server from the machine where Infosistema DMM is running.

When using the OutSystems Cloud PaaS, although you may reach the databases from your on-premises servers, there is no network route from one cloud server to another, so you will not be able to set up connections this way and will have to use the Runtime Connection.


Q: When I execute a migration, I receive the following error: "The INSERT permission was denied on the object 'ossys_User', database 'xxx', schema 'xxx'."

A: This means that the connection you're using does not have the correct permissions to execute migrations, namely the INSERT permission on the OSSYS_USER table. Database permissions must follow the documentation found in the user manual.

 

Q: When I execute a migration I receive the following error: "Could not set IDENTITY_INSERT in table '[XXX].dbo.[OSUSR_xxx_xxxxxxx]'. Check if connection string has ALTER table permissions."

A: This means that the connection you're using does not have the correct permissions to execute migrations, namely the ALTER permission on some or all of the OSUSR_* tables. Database permissions must follow the documentation found in the user manual.

If it is not possible to have elevated privileges in the database connections, you must use the Data Append mode.
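If you administer the database yourself, the missing grants can be added directly. A minimal sketch for SQL Server; the login name and the OSUSR table name are illustrative placeholders, and the complete permission list is in the user manual:

```sql
-- Illustrative SQL Server grants; replace dmm_user and the OSUSR table
-- name with your own (see the user manual for the full permission list).
GRANT INSERT ON [dbo].[ossys_User] TO [dmm_user];
GRANT ALTER ON [dbo].[OSUSR_abc_Customer] TO [dmm_user];
```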

 

Q: When I execute a migration I receive the following error: "Not implemented mapper for: ossys_XXXXXXXX"

A: Infosistema DMM does not map all the OSSYS tables, just the most common ones. So, you'll see this error if you have an entity that references an unmapped OSSYS table for some reason. Contact Infosistema Support so that we can add the missing mapper to the OSSYS table.

 

Q: When I execute a migration, I receive the following error: "Cannot insert duplicate key row in object 'dbo.OSUSR_XXX_XXXXXXXXX' with unique index 'OSIDX_OSUSR_XXX_XXXXXXXXX'. The duplicate key value is (XXX, XXX, XXX)."

A: This means that during the migration a unique constraint violation was raised on the destination, because DMM tried to write a record that would create a duplicate. This can happen if the destination table already has information and the union of the existing information with the information being imported contains duplicate records where the unique index is concerned. Another way for this error to appear is if the source data is already inconsistent, i.e., the source has duplicate records. Finally, it can also be caused if one of the fields of the unique index is a Foreign Key to an OSSYS_ table that has unmapped records.

 You can try the User Mapped Table feature to overcome this issue.
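To check the source for duplicates before migrating, you can group over the columns of the unique index reported in the error message. A hedged sketch, assuming the index covers two columns; the table and column names are placeholders for your own:

```sql
/*Placeholder names: substitute the real OSUSR table and the columns of the unique index*/
select COLUMN_A, COLUMN_B, count(*) as Occurrences
from OSUSR_XXX_MYENTITY
group by COLUMN_A, COLUMN_B
having count(*) > 1;
```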

  

Q: How does OutSystems handle static entities? 

A: Static entities can be looked at as an enumerator on steroids. OutSystems handles them in three distinct tables:

●     OSSYS_Entity, where the entity meta information is saved

●     OSSYS_Entity_Record where the real IDs of the static entities are saved

●     OSUSR_XXX_, the physical table where the values are written to.

The connection is then made between the Data_ID field from OSSYS_Entity_Record and the ID Field of the OSUSR_XXX_.

 As this connection is managed in the eSpace itself, it can produce some dramatic problems if you try to manually change some of the values directly in the database. 
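As a read-only illustration of that connection, the following query lists the records of a static entity together with their real IDs. The entity name 'MyStaticEntity' is a placeholder, and the Entity_Id column is assumed to link OSSYS_Entity_Record to OSSYS_Entity:

```sql
/*Read-only lookup; 'MyStaticEntity' is an illustrative entity name*/
select er.*
from ossys_Entity_Record er
where er.Entity_Id = (select Id from ossys_Entity where Name = 'MyStaticEntity');
```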

 

Q: What is the OutSystems Metamodel?

A: OutSystems uses an under-the-hood surrogate model to abstract the entity's underlying database structure.

This model is responsible for connecting the physical data tables to the application environment.

Pro Tip: If you wish to see how they are connected, you can go to your database and poke around: look into the OSSYS_Entity and OSSYS_Entity_Attr tables, where you'll find references to some familiar names from your entities.
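For example, a read-only query along these lines shows how an entity and its attributes appear in the metamodel. The entity name 'MyEntity' is illustrative, and the join column Entity_Id is assumed from the metamodel structure described above:

```sql
/*'MyEntity' is an illustrative name; both tables are read-only here*/
select e.Name, e.Physical_Table_Name, a.Name as Attr_Name, a.Type
from ossys_Entity e
inner join ossys_Entity_Attr a on a.Entity_Id = e.Id
where e.Name = 'MyEntity';
```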

 

Q: How are Foreign Keys Handled in OutSystems Metadata?

A: When one entity has a relationship with another, their supporting physical tables will have Foreign Key constraints applied.

However, OutSystems also keeps track of this relation in its metadata. If you look into the contents of ossys_entity_attr, the column 'TYPE' holds the internal datatype of that attribute.

When the attribute is an FK to another table, instead of a more readable value it will have something that follows the pattern: "bt*."

Extracting the values will give you the SS_KEY value of the corresponding eSpace and the SS_KEY value of the corresponding Entity, thus allowing one to seamlessly navigate to the target entity of the relation.
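For instance, listing the FK attributes of a given entity can be sketched as follows; the entity name is a placeholder, and the Entity_Id column is an assumption based on the metamodel description above:

```sql
/*Attributes whose internal TYPE starts with "bt" are FKs to other tables*/
select a.Name, a.Type
from ossys_Entity_Attr a
where a.Entity_Id = (select Id from ossys_Entity where Name = 'MyEntity')
  and a.Type like 'bt%';
```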

 

Q: What is Infosistema DMM scrambling and anonymization?

A: Infosistema DMM Scrambling and Anonymization is the common name of an engine that can be applied either to data at rest or during a migration scenario.

This engine is capable of changing the information in a non-reversible way and is used mostly to protect sensitive data. The engine has three features that can be used:

●     Anonymization - Simply generates random garbled text without any meaning.

●     Scrambling - Produces random data based on a pattern detection of the original data, while keeping the semantic value of that information. Emails will look like emails, dates will look like dates and so on.

●     Ignoring - Simply removes the information so it won't be accessible anymore.

 

Q: How can Infosistema DMM move data stored in separate DB catalogs?

A: In some OutSystems installation scenarios, the database catalog where entity information is placed is not the same one where OutSystems meta information is located.

In these situations, Infosistema DMM needs to discover where the entity’s physical table is in order to produce its execution plan and commands.

The proper way to do it is looking into the OutSystems metamodel and that's exactly what Infosistema DMM does.

After getting a pointer to the entity and respective eSpace in the OSSYS_ENTITY and OSSYS_ESPACE tables, the DBCatalog_Id column in the OSSYS_ESPACE table will have an FK to the OSSYS_DBCATALOG table.

In environments that use more than one catalog, this table will tell us the name of the other catalog that should be used as a prefix in all subsequent queries or commands against the intended entity. 
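That lookup can be sketched with a query like the following; the eSpace name is a placeholder, and the Name column of OSSYS_DBCATALOG is an assumption based on the description above:

```sql
/*Resolve the catalog that hosts a given eSpace's entity tables*/
select c.Name as Catalog_Name
from ossys_Espace s
inner join ossys_DBCatalog c on c.Id = s.DBCatalog_Id
where s.Name = 'MyEspace';
```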


Q: What are OSUSR_* tables?

A: The OSUSR_* tables are the real entity data tables. Their names are prefixed with a three-letter code that is unique to each eSpace, so all entities from the same eSpace are stored in tables with the same prefix.

These are the least disruptive tables in terms of OutSystems operations, and manually editing them causes the least impact, as they contain only customer data.
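To see which physical OSUSR_* table backs each entity of an eSpace, a read-only metamodel query like this can help; the eSpace name is a placeholder:

```sql
/*Map entity names to their physical tables for one eSpace*/
select e.Name, e.Physical_Table_Name
from ossys_Entity e
where e.Espace_Id = (select Id from ossys_Espace where Name = 'MyEspace')
  and e.Is_Active = 1;
```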


Q: What are OSLOG_* tables?

A: OSLOG_* tables are where all the application logs are maintained.

The reason there are so many OSLOG_* tables is that the logs are kept partitioned with a round-robin algorithm.


Q: I get the error message "Attribute does not have enough data to be scrambled"

A: To detect a scramble pattern, DMM must have enough records to identify the pattern it will use. This error message is shown when there is not enough data (records) with information in the selected field to scramble. For most patterns, a minimum of 10 records is needed.


Q: I get the error message "Unable to create folder for export files in DMMExport_YYYMMDD_HHMMSS. Assure permissions are granted for the executing user."

A: Guarantee that in the Settings option the folder paths are set as required (see manual "Installation and Setup" page).  

If you are using multiple front-ends, this error message, as well as the EULA pop-up message, may appear when the load balancer directs the call to a server that doesn't have the setup done yet. In this case, try using a direct URL to each server, bypassing the load balancer, and do the same setup on all of them. After the correct setup, DMM will work without any problem even when using the URL through the load balancer.


Q: When doing an Import I get the error message "The page was not displayed because the request entity is too large"

A: The file size error message comes from IIS and reflects a setting configured on the IIS server. See, for example: https://www.inflectra.com/support/knowledgebase/kb306.aspx


Q: Getting duplicated data on the destination

A: When possible, DMM will update information on the destination rather than duplicate data.

However, if two migrations are made for the same entity, both using Data Append, then the data on that table will be duplicated. There are three exceptions:

  1) If the table has a unique column that can be used as the key for the User Mapped Table functionality (and it is configured as such);

  2) If the table is a Static Entity;

  3) If the table is a system table (like OSSYS_USER, for example).

  In these three cases the second migration will not insert any value in the destination.

You can try the Delta Migration feature to avoid duplicating data even when using the Runtime Connection.


Q: Detecting DMM pending executions in OutSystems database

A: It is possible to identify if any DMM executions are pending in OutSystems, using the following queries:


/*Get pending processes in DMM*/

select * from ossys_BPM_Event_Queue where Espace_Id= (select id from ossys_Espace where NAME='DMM');


/*Get physical table name of the entities*/

/*This will retrieve 3 records*/

select NAME, PHYSICAL_TABLE_NAME from ossys_Entity where NAME in ('ExecutionQueue','ExecutionNormalQueue','ExecutionParallelQueue') and ESPACE_ID=(select id from ossys_Espace where NAME='DMM_Wrapper') and IS_ACTIVE=1;


/*Get not finished executions*/

/*Use physical name obtained above for 'ExecutionQueue' and replace the table name in the from statement*/

select * from OSUSR_XXX_ExecutionQueue where CURRENTEXECUTIONSTATUSID in (1,2) or CURRENTEXECUTIONSTATUSID is null;

/*Use physical name obtained above for 'ExecutionNormalQueue' and replace the table name in the from statement*/

select * from OSUSR_XXX_ExecutionNormalQueue where CURRENTEXECUTIONSTATUSID in (1,2) or CURRENTEXECUTIONSTATUSID is null;

/*Use physical name obtained above for 'ExecutionParallelQueue' and replace the table name in the from statement*/

select * from OSUSR_XXX_ExecutionParallelQueue where CURRENTEXECUTIONSTATUSID in (1,2) or CURRENTEXECUTIONSTATUSID is null;


A value of CURRENTEXECUTIONSTATUSID = 999 means "Abnormal Error"; that execution is no longer running.


Q: Error message in log: "Error on Generic DB Access", followed in the error detail by "A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.)"

A: This error may appear when connectivity is lost between the server and the database. Usually redoing the migration will fix the issue; alternatively, create a new migration for the entities that were not moved or were affected.


Q: Error message in log: "Error on Generic DB Access", followed in the error detail by "[...] requires an open and available Connection. The connection's current state is closed."

A: This error may appear when connectivity is lost between the server and the database. Usually redoing the migration will fix the issue; alternatively, create a new migration for the entities that were not moved or were affected.


Q: When I select an eSpace and entity in a multi-tenant application, I don't get any options to select which tenant to delete from. Does the Delete function support multi-tenancy?

A: The delete functionality in DMM v5.0.5 does not have an interface for selecting tenants. What you can do, knowing the tenant ID, is use that column and value in the filter option of the Delete functionality, so as to delete only the records of the specific tenant you wish.


Q: We use a custom user provider; will this impact the delete or copy functions? I notice that the DMM tool uses the "Users" user provider for authentication rather than Service Center.

A: The user is used just for the UI/OutSystems frontend of DMM. The DMM engine uses the connection to the database, either a direct connection or the Runtime Connection of the platform itself, so the authentication user in DMM frontend has no impact for the copy or delete functionalities.


Q: How does the delete function handle foreign keys and referential integrity? Is it smart enough to delete tables in the correct order to avoid violating key constraints?

A: Yes, the Delete functionality is smart enough to execute the deletions in the inverse order so as to respect the referential integrity of the constraints. FYI, the Delete functionality will set FKs to null before deleting the PKs.


Q: When a migration fails and we delete the destination data and restart, what happens to the BPT we were migrating?

A: DMM v5.3.3, when starting a BPT migration, will reset the "events tables" for the entities, so there is no duplication if you do multiple migrations - the entities triggered by BPT have a directly related trigger and a corresponding OSEVT entity where all the entity's events are subscribed.


  In the following BPM tables DMM will append information:

OSSYS_BPM_PROCESS

OSSYS_BPM_PROCESS_INPUT

OSSYS_BPM_PROCESS_OUTPUT

OSSYS_BPM_ACTIVITY

OSSYS_BPM_ACTIVITY_OUTPUT


  For the email tables, DMM makes a merge with the existing information.


Q: Does DMM support SQL Server Authentication only or does it support Windows Authentication as well? 

A: SQL Server authentication only for the connections (or Oracle, or MySQL, depending on the underlying database).


Q: How should I debug a migration where an error happened? 

A: To troubleshoot an issue, when you look into the DMM logs, search for the keyword "Exception". If an error occurred, there will be an Exception in the log along with the instruction ID where it failed, so you know what operation it was executing, in what entity, etc.

       The instruction it was executing appears in the log as a line like the following:

      ( --------------- Current instruction id: 0 --------------- )

      [...]


Q: My OutSystems platform is having issues executing some DMM migrations with very large binaries in each record, due to low hardware resources available, is there anything I can do? 

A: You can change the default chunk size (20,000 records) so DMM starts by using a smaller chunk. You can change this in the DMM_Wrapper module, in the site property ChunkSize.


Q: My entry Dashboard is not presenting any values, is there anything I can do? 

A: Access the DMM_Wrapper module and check if the timer Timer_GetDashboardValues is running.


Q: My OutSystems timer got in an invalid state after a timeout during a Migration, is there anything I can do? 

A: Decrease the timeout of Timer_RunParallelWorker so you don't have to wait very long after a timeout (note that this means you may no longer be able to migrate some data that requires more time to fetch, for example). Set the value of the site property IsToStopExecutionsTimer to True and the migration worker timer stops in the next "chunk" cycle it tries to process - remember to set it back to False before launching another migration!


Q: What are the timers the component uses, and what is their purpose? 

A:  Here is a list of the timers in the Component, for each module:  

DMM

      GetAppVersion

            On install, it updates the DMM version.

      Timer_LogBuilding

            Creates the execution logs. It doesn't run regularly; its objective is to be called asynchronously at the end of each execution.

      Timer_RunExecutionParallel

            Execution control timer, called to execute the processes.

      Timer_RunParallelWorker

            Worker timer, calls the DLL.

DMM Wrapper

      AddRuntimeConnection

            On the component installation, checks if a Runtime Connection already exists; if not, it is created.

      BackupFiles

            On the DMM installation, backs up any configuration files that might exist.

      ClearAllDataExecutionParallel

            For debugging purposes. It is inactive.

      ClearDataExecutionParallel

            Internal maintenance, deletes the execution internal data. It is configured to run at 1am.

      ClearMapperWorker

            Internal maintenance, clears old execution general config data, when a pre-set number of days have passed.

      SetDashboardValues

            Calls the timer "Timer_GetDashboardValues". Runs at 23h30. [removed in v6.1.0]

      SetSettingsFile

            On DMM's installation, validates that the install settings files exist; if not, it creates them.

      Timer_GetDashboardValues

            Gets the dashboard values of the OutSystems database that are visible when you enter DMM. Set to run at 23:30.

      Timer_MoveToLHBinaryTable

IPA (all inactive by default)

      CheckSLA

            Checks if IPA Transaction Flows are following the SLAs defined, if not sends the notifications.

      GetEnvironmentReportTimer

            Inactive for now. Gets anonymized data from OutSystems entities (number of columns, number of lines, size, etc.). Will be used in the predictive features of IPA. 

      SendToAnalyticsTimer

            Sends the data points collected by the IPA probes to the analytics server. Only non-business, non-confidential information is collected, secured and transmitted.


Q: What is the difference in BPT migration if I choose Application Level or Module Level? 

A: When choosing the Application level, all BPT processes will be migrated following the rule:

  - [ossys_BPM_Process_Definition].[Espace_Id] matches an espace ID from an application in the migration configuration 

        AND [ossys_BPM_Process_Definition].[Process_Entity_Id] is NULL;

  OR

  - [ossys_BPM_Process_Definition].[Process_Entity_Id] matches the entity ID of an entity in the migration configuration.


  When choosing the Module level, all BPT processes will be migrated following the rule:

  - [ossys_BPM_Process_Definition].[Process_Entity_Id] matches the entity ID of an entity in the migration configuration.


Q: I'm launching an execution (migration, deletion, export, etc.) but the log just says the "Pre Preparation" part is finished (instruction id: 0) and then nothing else happens?

A: Check the Service Center status and logs of these 3 timers:

  - Timer_RunParallelWorker

  - Timer_RunExecutionParallel

  - Timer_LogBuilding

  Confirm the OutSystems Scheduler Service in the Windows Services app is running.


Q: How does DMM guarantee the encryption of the data in transit during a migration?

A: For direct database connections (SQL Server), there is a flag in the connection configuration in the Settings option, so if the database supports it, DMM can use it. For REST-type connections, TLS is used.

  With TLS we are using the JDBC connection flag, meaning that the answer depends on the certificate and the database.

  Below are a couple of links on the subject:

https://docs.microsoft.com/en-us/sql/connect/jdbc/connecting-with-ssl-encryption?view=sql-server-ver15

https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine?view=sql-server-ver15


Q: Does DMM change/transform any data in a migration/export?

A: DMM allows you to configure, in the Migration or Export, "special actions" to be applied to the data record's fields, so that the data in the destination is transformed and is no longer equal to the origin data. Currently (DMM v6) these actions include:

●     Scrambling the field (creating new data, not equal to the source data);

●     Anonymizing (replacing the field data with a hash string);

●     Replacing the data in that field with a specific value (applied to all records);

●     Ignoring the field, copying the rest of the record but inserting NULL in the field.

      Besides these explicit configurations, in the BPT migration functionality DMM will additionally check whether the dateTime is the default "1900-01-01 00:00:00". If it is, it inserts it as-is in the destination; if it isn't, it will check the source and destination timezones and convert the data accordingly.


Q: What is the behavior of the flag SynchronizeOSSYSInactiveEntries, used to ignore inactive OSSYS entries?

A: In the following situation:


Q: The OutSystems timer "Timer_RunParallelWorker" shows up in Service Center as executing, but it doesn't show in the logs, neither Service Center nor DMM logs, and no new executions in DMM run past the pre-preparation stage?

A: A problem occurred in the OutSystems environment when launching the timer: the timer did not actually launch, but it was marked as having launched in the OutSystems platform system database.

If you are in an OutSystems PaaS environment, you'll have to open a ticket with OutSystems for their support to check and solve the issue.

If you have direct access to the OutSystems database system tables, you can set the timer's state as not running with the following instruction: update ossys_Cyclic_Job_Shared set IS_RUNNING_SINCE = '' where META_CYCLIC_JOB_ID = (select ID FROM ossys_Meta_Cyclic_Job where name = 'Timer_RunParallelWorker')
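Before running that update, you can confirm the stuck state with a read-only check using the same columns:

```sql
/*Inspect the timer's recorded state; a non-empty IS_RUNNING_SINCE with no
  actual execution indicates the stuck state described above*/
select IS_RUNNING_SINCE
from ossys_Cyclic_Job_Shared
where META_CYCLIC_JOB_ID = (select ID from ossys_Meta_Cyclic_Job
                            where name = 'Timer_RunParallelWorker');
```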


Q: During a DMM migration, lots of processes are launched due to "on creation" triggers of BPT processes.

A: You can follow (or adapt as needed) the procedure below to work around this and avoid the communication BPTs being executed:


Q: When I install the DMM I get the error message: “Unknown Feature: The feature Custom Tool Name on Login Screen doesn’t exist. Upgrade Platform Server and retry.” 

A: This component is used to allow integration with external authentication providers but is only available from OutSystems version 11.14.0 onwards.

If you have an earlier version and you don't need external authentication, you can disable this component. To do so, go to the DMM application, in UI Flows -> Common -> No Permission -> Preparation, and disable the components: User_GetUnifiedLoginUrl, the If component, and Common\ExternalURL.


Q: Is there a Delta/Incremental feature for the Export to Database?

A: DMM's "Export to Database" feature currently works with a driver for SQL Server, meaning you can use this DMM feature to export data into an Azure SQL database, which uses the same driver. DMM will replicate the structured relational tables (each table representing an OutSystems business entity you are exporting), automatically creating and adapting the tables to store the OutSystems entity fields.

  DMM requires a direct database connection with enough privileges to both create/change the tables and copy data by ID (Alter Table privileges). With these privileges, in each Export execution DMM will match the record IDs of the OutSystems source with those in the destination and update the records as needed - hence you already get a delta by default, in that all changes in the source records (except deletion of the record itself) are replicated to the destination in the next export execution. Since this is already a current capability of DMM, a separate "delta feature" for "Export to Database" is not in the product backlog.


Q: Can DMM be used for Bulk Import of data into OutSystems?

A: Yes. It's not a direct process like the Migration; you can see details of the Bulk Import process in this article, which describes how DMM can help with that challenge!

  You can also see in the 3rd module of our DMM Foundation free online course an example of how that data import from external sources works. 

  Finally, in the DMM manual's Import feature section you can also find some details to take into account when using the feature.


  This feature (or process, since it's not a single feature), as explained in the article above, is currently free to use in DMM - your client will not need to acquire a paid subscription and can use DMM from the OutSystems Forge as-is to execute the process. 

  We recommend acquiring a paid subscription to have access to our product support, of course, but do what works best for you and your client.


Q: Is there a way to cleanup / delete DMM's execution history and log data?

A: DMM has some processes to remove old data. You can search for the timers ClearDataExecutionParallel and ClearMapperWorker and check if they are scheduled to run. You can also manually run them and check if there are any errors in the Service Center Error Logs.

  Also, check the values of the site properties DaysToClearData and DaysToClearMapper in the module DMM_Wrapper, since these define which execution data should be cleaned up.

  Log data can be deleted in the Execution History list (there's a specific button on each execution to delete log data), and for the Delta/Incremental executions, the memory of the mappings can be deleted in the Settings menu, in the Delta History option.


Q: I cannot access some DMM components, and I get an error in the Service Center logs saying "Common Language Runtime detected an invalid program"

A: There was a problem with the installation of the application. Execute the Publish operation directly on the DMM module through Service Center.


Q: I updated an entity and the changes are not shown in DMM?

A: DMM uses a cache to keep the entity metadata. You need to change the site property CacheMinutesToKeepAlive in the module DMM_Wrapper. The default value is 60 minutes. You can change it to 1, so DMM will keep the information for only 1 minute and then reload the OutSystems metadata.