Thanks for your help!


I'm indeed trying to use the spark.databricks.delta.schema.autoMerge.enabled configuration.

I set the config using the following command:

spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")
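
As a quick sanity check (a minimal sketch; spark here is the active SparkSession), reading the value back right after setting it catches a mistyped key, e.g. .enable vs .enabled:

# Verify the flag actually took effect; a mistyped key would silently create a new, unused entry
print(spark.conf.get("spark.databricks.delta.schema.autoMerge.enabled"))  # expected: "true"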


and wrote my merge command as below:

from delta.tables import DeltaTable

Target_Table = DeltaTable.forPath(spark, Target_Table_path)

# Insert non-existing records into the Target table; update existing records with end_date and ActiveRecord = 0
Target_Table.alias('dwh') \
    .merge(
        Source_Table_dataframe.alias('updates'),
        '(dwh.Key == updates.Key)'
    ) \
    .whenMatchedUpdate(set =
        {
            "end_date": "date_sub(current_date(), 1)",
            "ActiveRecord": "0"
        }
    ) \
    .whenNotMatchedInsertAll() \
    .execute()


but I get an error message, "cannot resolve column1 in INSERT clause given columns", followed by the list of the source table's columns, in which column1 no longer exists (the column was dropped from the source but still exists in the target).
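
For reference, one fallback sketch would be to spell out the insert mapping with whenNotMatchedInsert instead of whenNotMatchedInsertAll, so the column that was dropped from the source can be filled explicitly (the column names and the STRING type for column1 below are hypothetical placeholders, not my actual schema):

# Fallback sketch: explicit insert mapping instead of insertAll, so a column
# missing from the source (here column1, type assumed to be STRING) is set to NULL explicitly.
Target_Table.alias('dwh') \
    .merge(Source_Table_dataframe.alias('updates'), '(dwh.Key == updates.Key)') \
    .whenMatchedUpdate(set = {
        "end_date": "date_sub(current_date(), 1)",
        "ActiveRecord": "0"
    }) \
    .whenNotMatchedInsert(values = {
        "Key": "updates.Key",
        "column1": "CAST(NULL AS STRING)",  # hypothetical: dropped from source, kept in target
        "end_date": "CAST(NULL AS DATE)",
        "ActiveRecord": "1"
    }) \
    .execute()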