Implement ETL and data movement solutions using Azure Data Factory
Experience managing Azure Data Lake and Azure Data Lake Analytics, with an understanding of how they integrate with other Azure services. Knowledge of MS SQL Server and how it can be used for data transformation as part of a cloud data integration strategy.
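A minimal sketch of pushing a transformation down to SQL Server from Python, assuming the pyodbc package; the connection string, schemas, and column names are hypothetical:

```python
import pyodbc

# Hypothetical Azure SQL / SQL Server connection string; substitute the real
# server, database, and credentials (ideally pulled from Azure Key Vault).
conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;"
    "DATABASE=StagingDB;UID=etl_user;PWD=<secret>"
)

# Push the transformation down to SQL Server: deduplicate and standardize a
# staging table into a curated table before it is copied to the lake.
transform_sql = """
    INSERT INTO curated.Customers (CustomerId, Email, Country)
    SELECT DISTINCT CustomerId, LOWER(LTRIM(RTRIM(Email))), UPPER(Country)
    FROM staging.Customers
    WHERE Email IS NOT NULL;
"""

with pyodbc.connect(conn_str) as conn:
    conn.execute(transform_sql)
    conn.commit()
```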
Extracted, transformed, and loaded data from source systems to Azure data storage services using a combination of Azure Data Factory, Spark SQL, and Azure Data Lake Analytics. Ingested data into one or more Azure services (Azure Data Lake Storage, Azure Blob Storage).
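A minimal sketch of the Spark SQL transformation step, assuming PySpark and placeholder ADLS Gen2 paths and column names:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-orders").getOrCreate()

# Read raw files landed in the lake by the ingestion step (path is a placeholder).
raw = spark.read.option("header", True).csv(
    "abfss://raw@mydatalake.dfs.core.windows.net/orders/"
)
raw.createOrReplaceTempView("orders_raw")

# Transform with Spark SQL: cast types and drop incomplete rows.
curated = spark.sql("""
    SELECT CAST(order_id AS INT)          AS order_id,
           CAST(order_date AS DATE)       AS order_date,
           CAST(amount AS DECIMAL(18, 2)) AS amount,
           customer_id
    FROM orders_raw
    WHERE order_id IS NOT NULL
""")

# Load the curated result back into the lake as Parquet.
curated.write.mode("overwrite").parquet(
    "abfss://curated@mydatalake.dfs.core.windows.net/orders/"
)
```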
Created Azure Data Factory pipelines that bulk copy multiple tables at once from a relational database to Azure Data Lake Storage Gen2.
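In Data Factory this is typically a Lookup over a control table feeding a ForEach that wraps a parameterized Copy activity. The sketch below mirrors the same metadata-driven pattern in PySpark over JDBC; the server, credentials, and table list are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bulk-copy-tables").getOrCreate()

jdbc_url = "jdbc:sqlserver://myserver.database.windows.net:1433;database=SalesDB"
props = {
    "user": "etl_user",
    "password": "<secret>",
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
}

# In ADF the table list comes from a Lookup against a metadata/control table;
# here it is hard-coded for illustration.
tables = ["dbo.Customers", "dbo.Orders", "dbo.Products"]

# Equivalent of ForEach + Copy: land each table in ADLS Gen2 as Parquet.
for table in tables:
    df = spark.read.jdbc(url=jdbc_url, table=table, properties=props)
    df.write.mode("overwrite").parquet(
        f"abfss://raw@mydatalake.dfs.core.windows.net/{table.split('.')[-1]}/"
    )
```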
Developed and managed end-to-end data pipelines using Azure Data Factory to support various business applications.
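One way such pipelines can be managed programmatically is through the azure-mgmt-datafactory SDK; a sketch of triggering a pipeline run and polling its status, with placeholder subscription, resource group, factory, and pipeline names:

```python
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Placeholder identifiers for the target Data Factory.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "rg-data-platform"
FACTORY_NAME = "adf-data-platform"

adf = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Trigger the pipeline, then poll until the run reaches a terminal state.
run = adf.pipelines.create_run(RESOURCE_GROUP, FACTORY_NAME, "pl_copy_orders")
status = "InProgress"
while status in ("Queued", "InProgress"):
    time.sleep(30)
    status = adf.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id).status

print(f"Pipeline run {run.run_id} finished with status {status}")
```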
Implemented data transformation and cleansing processes to ensure high data quality.
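A minimal sketch of typical cleansing rules, assuming PySpark; the dataset paths and column names are illustrative:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cleanse-customers").getOrCreate()

df = spark.read.parquet("abfss://raw@mydatalake.dfs.core.windows.net/customers/")

cleansed = (
    df.dropDuplicates(["customer_id"])                       # remove duplicate keys
      .na.drop(subset=["customer_id", "email"])              # require mandatory fields
      .withColumn("email", F.lower(F.trim(F.col("email"))))  # normalize casing/whitespace
      .withColumn("signup_date", F.to_date("signup_date", "yyyy-MM-dd"))  # standardize dates
)

cleansed.write.mode("overwrite").parquet(
    "abfss://curated@mydatalake.dfs.core.windows.net/customers/"
)
```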
Configured Azure Data Factory integration runtimes to securely connect to on-premises data sources.
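On-premises connectivity goes through a self-hosted integration runtime. Below is a sketch of provisioning one and retrieving its registration key with the azure-mgmt-datafactory SDK; the resource names are placeholders and exact model names can vary by SDK version:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    IntegrationRuntimeResource,
    SelfHostedIntegrationRuntime,
)

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create (or update) a self-hosted integration runtime in the factory.
ir = IntegrationRuntimeResource(
    properties=SelfHostedIntegrationRuntime(
        description="Self-hosted IR for on-premises SQL Server sources"
    )
)
adf.integration_runtimes.create_or_update(
    "rg-data-platform", "adf-data-platform", "ir-onprem-sql", ir
)

# The auth key is entered when installing the IR node on the on-premises machine.
keys = adf.integration_runtimes.list_auth_keys(
    "rg-data-platform", "adf-data-platform", "ir-onprem-sql"
)
print(keys.auth_key1)
```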
Role and Collaboration
I led the development and management of data pipelines in Azure, ensuring smooth integration of data from source systems into Azure data storage services. I worked closely with data engineers, database administrators, and business analysts to design efficient ETL processes and secure data transfer.
Outcome: The project improved data quality and streamlined bulk data copying to Azure Data Lake Storage Gen2, enhancing the organization’s ability to make data-driven decisions and boosting operational efficiency.