What Technical Steps are Used to Move Customer Data into the Trimble Viewpoint ERP Cloud and How is That Process Controlled?
Author: Eric Vasbinder
ERPs Applicable:
Vista
Oftentimes, as customers evaluate or begin the process of moving into the Trimble Viewpoint ERP Cloud, they want a high-level understanding of the technical processes used to move their data into our cloud. They also want to know about the technical and procedural controls that ensure proper, limited access to their data as it moves into the cloud, and how those controls ensure that no modifications are made to financial data during the cloud move.
When moving data from on-premise environments into the cloud, several technologies are used to enable the proper movement of that data:
Microsoft SQL Server Backups and Restores
Microsoft SQL Server Management Studio (SSMS)
Microsoft NTFS File System Copy
Microsoft, 7Zip, or WinZip Compression
Microsoft Azure Storage Explorer
Microsoft Azure TLS Data Transport
Each time customer data is moved from their on-premise environment into their cloud instance, the same high-level series of steps is followed. At a high level, the steps used to move data into the cloud are:
The customer creates full, compressed backups of the following databases in .BAK file format: Viewpoint, KDS-HRIM, VPAttachments (if used), ReportServer, ReportServerTempDB
Technologies used:
Microsoft SQL Server Full Database backups
Microsoft SQL Server Backup Compression
Microsoft SQL Server Management Studio (SSMS)
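The backup step above can be sketched as follows. This is a minimal Python illustration only: the database names come from the list above, the backup directory is a placeholder, and the generated T-SQL (using SQL Server's standard compression and checksum backup options) would be run by the customer in SSMS against their own instance.

```python
# Sketch: generate full, compressed, checksummed backup commands for the
# databases listed above. The backup directory is a placeholder; run the
# resulting T-SQL in SSMS against your own on-premise SQL Server instance.

DATABASES = ["Viewpoint", "KDS-HRIM", "VPAttachments",
             "ReportServer", "ReportServerTempDB"]

def backup_statement(db: str, backup_dir: str = r"D:\Backups") -> str:
    """Build a full, compressed, checksummed BACKUP DATABASE command."""
    return (
        f"BACKUP DATABASE [{db}] "
        f"TO DISK = N'{backup_dir}\\{db}.BAK' "
        f"WITH COMPRESSION, CHECKSUM, INIT;"
    )

for db in DATABASES:
    print(backup_statement(db))
```

The CHECKSUM option is what later allows the restore step to detect a corrupted .BAK file and trigger a re-upload request.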
The customer uses their preferred compression tool to create a ZIP-format backup (i.e. a "Zip file") of the Vista Viewpoint Repository folder, containing custom reports, templates, etc.
Technologies Used:
Microsoft NTFS File System Copy
Compression Tool of Customer's choice (usually Microsoft NTFS ZIP, 7Zip, or WinZip)
If the customer uses attachments in the file system (a.k.a. "file system attachments") instead of the VPAttachments database on-premise, the customer will use their preferred compression tool to create a ZIP-format backup (i.e. a "Zip file") of their file system attachments parent folder, containing all attachments used by Vista. These may include, but are not limited to, AP invoices, pay stubs, payment receipts, and tax records.
Technologies Used:
Microsoft NTFS File System Copy
Compression Tool of Customer's choice (usually Microsoft NTFS ZIP, 7Zip, or WinZip)
The customer uses Microsoft Azure Storage Explorer to upload all .BAK database backup files, the Zip file containing the Viewpoint Repository folder, and the Zip file containing the file system attachments (if needed) to their cloud blob storage container.
Technologies Used:
Microsoft Azure Storage Explorer
Microsoft Azure TLS Data Transport
Customer notifies Trimble Viewpoint of data upload completion.
Technologies Used:
Microsoft Azure Storage Explorer
The Trimble Viewpoint team restores the SQL backups to the cloud Vista server. If a restore fails due to a checksum error or file corruption, a request is made to the customer to repeat the backup and upload process.
Technologies Used:
Microsoft SQL Server
Microsoft SSMS
Email (if request to reupload needed)
The Trimble Viewpoint team restores the "Viewpoint Repository" folder to the cloud server. If the restore fails due to a checksum error or file corruption, a request is made to the customer to repeat the backup and upload process.
Technologies Used:
Microsoft NTFS File Copy
Microsoft NTFS Zip or 7Zip (Engineer preference)
If attachments were stored directly on the file system when Vista was used on-premise, the Trimble Viewpoint team will uncompress the file system attachments Zip file and then use automated SQL scripts to move those attachments into the VPAttachments database for the customer's environment.
Technologies Used:
Microsoft NTFS File Copy
Microsoft NTFS Zip or 7Zip (Engineer preference)
Microsoft SQL Server
During the process of moving data from on-premise into our cloud, the Trimble Viewpoint team leverages existing, well-proven Microsoft technologies and solutions to ensure that data is not changed in transit. As mentioned above, we leverage solutions such as Microsoft SQL Server's built-in backup and compression, Microsoft Azure Storage Explorer, and industry-standard compression and secure transport layer technologies.
The benefit of using these compression and secure transport technologies as part of our standard process is that any data loss or modification in transit will make the resulting file impossible to uncompress; the files will be corrupt and unusable by any restoration process.
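This corruption-detection property can be illustrated with a short Python sketch (the file name inside the archive is made up): a single flipped byte in a ZIP archive causes the affected member to fail its CRC-32 integrity check, so silent corruption in transit cannot go unnoticed.

```python
import io
import zipfile

# Build a small ZIP archive in memory, standing in for an uploaded Zip file.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_STORED) as zf:
    zf.writestr("attachments/invoice.pdf", b"example attachment bytes")

# Pristine copy: every member passes its CRC-32 check.
intact = zipfile.ZipFile(io.BytesIO(buf.getvalue()))
print(intact.testzip())  # None -> archive is sound

# Simulate corruption in transit by flipping one byte of member data.
data = bytearray(buf.getvalue())
data[60] ^= 0xFF
corrupt = zipfile.ZipFile(io.BytesIO(bytes(data)))
print(corrupt.testzip())  # attachments/invoice.pdf -> CRC failure detected
```

`testzip()` returns the name of the first member that fails its CRC check, which is how a restoration process can refuse to proceed with a damaged upload.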
In the unlikely event that the data cannot be restored from an uploaded backup, a request will be made to the customer to repeat the backup and upload process.
In the event that Microsoft's standard, well-proven technologies are insufficient to ensure the validity and completeness of uploaded data, there are several other methods that can optionally be used upon customer request to provide additional layers of data integrity validation:
NOTE: The following methods are optional and require that the uploaded data already be successfully restored into the cloud environment.
Key Report Comparisons:
The customer may take several key reports from within Vista, such as GL, AP, and AR, and compare the totals for a specific set of time periods on-premise with the same time periods in the cloud
Table Row Comparisons:
The customer may use SQL queries to count the rows in various tables of their choice on the cloud server and compare those totals with the row counts in the backup previously made on-premise
This comparison must be made strictly between the SQL backup previously made and the cloud server before any changes are made on the cloud server
After the Initial Data Upload (IDU), but prior to the go-live cutover, it would be invalid to compare the existing on-premise production server with the data as uploaded into the cloud: the on-premise production server will continue to change after the initial data upload
In other words, using SQL to compare a moving target with a previously uploaded backup would be useless
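The row-count comparison above can be sketched in a few lines of Python. The table names and counts are illustrative only; in practice both sets of figures would come from `SELECT COUNT(*)` queries run against the restored on-premise backup and the cloud database.

```python
# Sketch of a table row-count comparison. Table names and counts are
# illustrative; real figures would come from SQL COUNT(*) queries against
# the restored backup and the cloud server, before any cloud-side changes.

onprem_counts = {"APTH": 125_431, "ARTH": 98_220, "GLDT": 1_204_887}
cloud_counts  = {"APTH": 125_431, "ARTH": 98_220, "GLDT": 1_204_887}

def compare_counts(onprem: dict, cloud: dict) -> list:
    """Return (table, on-prem rows, cloud rows) tuples for any mismatch."""
    return [(t, onprem[t], cloud.get(t))
            for t in onprem if onprem[t] != cloud.get(t)]

mismatches = compare_counts(onprem_counts, cloud_counts)
print("match" if not mismatches else f"mismatches: {mismatches}")
```

An empty mismatch list means every checked table carried the same number of rows into the cloud.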
Attachment Count Comparisons:
During the Initial and Final Data Uploads and Restores, if File System Attachments are being converted to the VPAttachments database, the Trimble Viewpoint team will produce a comparison of the number of attachments stored in VPAttachments vs. the number of attachments Vista expects to be present
If a discrepancy is found, the customer will be contacted and an opportunity to manually upload any missing attachments will be provided; the customer and Trimble Viewpoint teams will coordinate to ensure that the total attachments in the cloud and on-premise are materially the same
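This reconciliation can be sketched as a simple count comparison. The figures below are invented; in practice the "stored" count would come from a query against the VPAttachments database and the "expected" count from Vista's own attachment records.

```python
# Sketch of the attachment-count reconciliation described above. The counts
# are illustrative; real figures would come from queries against the
# VPAttachments database and Vista's attachment records.

def reconcile_attachments(expected: int, stored: int) -> dict:
    """Summarize an attachment-count comparison between Vista and VPAttachments."""
    return {
        "expected": expected,
        "stored": stored,
        "missing": max(expected - stored, 0),
        "ok": stored >= expected,
    }

result = reconcile_attachments(expected=50_000, stored=49_998)
print(result)  # {'expected': 50000, 'stored': 49998, 'missing': 2, 'ok': False}
```

A nonzero "missing" figure is what triggers the coordinated manual re-upload described above.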
During the process of uploading data into the cloud, we endeavor to change very few items within the customer's database. In particular, we make no changes to financial records, totals, operational data, etc. The only items modified during the cloud upload are changes to facilitate user authentication (SSO) and updates to custom reports to ensure they can connect to the cloud server.
The following items are changed to allow for proper functioning in the cloud:
Custom SSRS Reports
Changed to update the Data Source Name (DSN) so that queries continue to work now that the server name and IP address are different
Custom Crystal Reports
Modified to change DSN and connection method from ADO/OLEDB to RDO/ODBC to be cloud compatible
When user authentication (SSO) is turned on, the usernames of on-premise Vista users must be changed, along with the references to those usernames in audit tables, permissions, etc. Specifically, end-user usernames are changed after SSO is turned on, from a format that on-premise is most likely tied to the customer's AD domain to one specific to our Trimble ID SSO: "emailaddress@companyDomain_enterpriseID"
For example, user "Joe Masterson", whose email on-premise is "joe.masterson@alphaconcrete.com", has the username "ALPHACONCRETE\jmasterson" on-premise. The Team platform enterprise ID for Alpha Concrete is "39873". When moving to the cloud, his username is renamed to "joe.masterson@alphaconcrete.com_39873"
When a username is renamed, all records that refer to it, such as those in the HQ Master Audit table, are updated to refer to the new username
The actual records of activities do NOT change, nor do the time and date stamps of previous activities. User access rights, privileges, etc. do NOT change
The ONLY thing that changes is the name of the user's record
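The rename itself is a simple string transformation, sketched below using the example from the text. The helper function is hypothetical; the "emailaddress@companyDomain_enterpriseID" format and the example values come from the passage above.

```python
# Sketch of the SSO username rename described above. The format
# "emailaddress@companyDomain_enterpriseID" is taken from the text;
# this helper (a hypothetical name) simply builds that string.

def sso_username(email: str, enterprise_id: str) -> str:
    """Build the cloud SSO username from an email address and enterprise ID."""
    return f"{email}_{enterprise_id}"

# Example from the text: Joe Masterson at Alpha Concrete (enterprise ID 39873)
new_name = sso_username("joe.masterson@alphaconcrete.com", "39873")
print(new_name)  # joe.masterson@alphaconcrete.com_39873
```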
If the customer is using file system attachments on-premise, those attachments will need to be stored in the VPAttachments database after moving to the cloud. This process involves uncompressing the Zip file that contains the customer's attachments as captured directly from the on-premise file system, then using SQL scripts to store those attachments in Vista's dedicated VPAttachments database. The VPAttachments database is purpose-designed to store attachments for our Document Management module, and using it in the cloud provides several manageability benefits compared to file system attachments.
The scripts that store these attachments in the database do not change any file data or metadata; they merely tell SQL Server to store the attachments in the VPAttachments database and update Vista's Document Management system with the new location pointer for each file, so that Vista points to the new storage location in the VPAttachments database instead of the file system.
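Conceptually, the conversion reads each file from the uncompressed attachments folder and records its bytes together with a location pointer. The Python sketch below stands in for the real SQL scripts; the column names ("OriginalPath", "Content") and folder layout are hypothetical.

```python
import os
import tempfile

# Conceptual sketch of the conversion step: read each file-system attachment
# and capture its bytes plus a relative-path pointer. The in-memory list of
# dicts stands in for the real VPAttachments table, and the column names
# here ("OriginalPath", "Content") are hypothetical.

def load_attachments(parent: str) -> list:
    """Read every file under parent into (path pointer, content) rows."""
    rows = []
    for dirpath, _, files in os.walk(parent):
        for name in sorted(files):
            full = os.path.join(dirpath, name)
            with open(full, "rb") as f:
                rows.append({
                    "OriginalPath": os.path.relpath(full, parent),
                    "Content": f.read(),
                })
    return rows

# Demo with a throwaway folder standing in for the attachments share.
root = tempfile.mkdtemp()
with open(os.path.join(root, "paystub.pdf"), "wb") as f:
    f.write(b"%PDF-1.4 example")

rows = load_attachments(root)
print(len(rows), rows[0]["OriginalPath"])  # 1 paystub.pdf
```

Note that the file bytes are copied unmodified, mirroring the point above that the real scripts change only the storage location pointer, not the data itself.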
There is NO option to retain the use of File System attachments in our cloud; all attachments for Vista must be stored in the VPAttachments database unless a third party document management system (e.g. OnBase or pVault) is in use.
Once uploaded into the Trimble Viewpoint cloud, our teams use the principle of "Least Privilege" to ensure that only those personnel who have a defined need to access customer data for legitimate purposes may do so. These purposes may include, but are not limited to backups, restores, and troubleshooting.
For more information on the technical and procedural controls we use to limit access to customer data, please reach out to our Trimble Viewpoint security team.
Due to Trimble Viewpoint's focus on utilizing Microsoft Azure regions, customer data, once uploaded into the Trimble Viewpoint cloud, remains within the data center and geographical region within which it is uploaded. In other words, when a customer uploads their data to the blob storage container associated with their future production cloud environment, that blob storage container will be resident within the region wherein their environment will be stood up.
For example, if a customer's eventual cloud environment in production will be running in the West US datacenter region, their data will be uploaded into a blob storage container within that West US region. During the restoration and processing of that data, the data never physically leaves that region.
Please note that the control commands to execute certain automation scripts may be issued from outside that region, but the data itself does not egress that particular Azure region.
This helps ensure that geographical data residency requirements are met.
The Trimble Viewpoint transformation factory model is designed to preserve the uploaded on-premise data through the cloud go-live and for a short period thereafter, specifically 90 days after the go-live cutover. Our systems are not designed to retain the final uploaded sets of on-premise data indefinitely.
To that end, if customers have any concerns about that data being lost, we highly recommend that they retain a copy of those backup files themselves, controlled under their own backup retention policies.
The following items from an on-premise Vista installation are not migrated into the cloud by default, as they are not used by all customers. To ensure that the totality of a customer's environment is moved into the Trimble Viewpoint cloud as appropriate, the customer should open a manual transformation support case to request that the following items be moved into their cloud instance:
SQL Agent Jobs
Export via SQL scripts and upload into blob storage
Linked Servers
Recreate in an external SQL database and point to your new Vista database over an IPSEC VPN or TLS Database Endpoint (TLS VPN)
SQL Service Account Logins
Either recreate them using the steps located here (https://sites.google.com/trimble.com/vista-cloud-faq/home/integration-technology/creating-sql-accounts) or create a transformation support case to have them recreated
SSIS Packages
Export and upload into blob storage as needed
Configservice DB
Please note the following may be differences between how the Trimble Viewpoint cloud infrastructure is configured vs. a standard on-premise installation:
SQL RCSI (Read Committed Snapshot Isolation) is turned on in the cloud to assist with performance
SQL Server version may be newer than customers are using on-premise
Database backups and recovery models could be different (details here: Disaster Recovery, Business Continuity, and Availability Risk)
changelog
Friday, 28 June 2024 at 09:27AM:
Added information about file system attachments, items not copied over by default, infrastructure changes, and data retention for on-premise backups
Thursday, 27 June 2024 at 05:02PM:
Initial posting