Trigger Demo: Since UltraEdit (https://huggingface.co/spaces/jeasinema/UltraEdit-SD3) has sunset its demo page, we only show the demo video here. We will open-source all code once our work is accepted.
The images can be downloaded at: https://xingangpan.github.io/projects/CULane.html
SAM2 GitHub: https://github.com/facebookresearch/sam2; we use sam2.1_hiera_large.pt as the segmentation model.
Based on the generated heatmap, we localize the region of interest in the original image. We then crop a 512×512 patch from the original image and save it as crop.jpg; the corresponding mask indicating the target region for trigger insertion is saved as crop_mask.jpg.
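The cropping step above can be sketched as follows. This is a minimal illustration, not the released code: the function name `crop_roi` is hypothetical, and thresholding the heatmap at its 90th percentile to form the mask is an assumption for demonstration (the actual mask in this work is produced with SAM2).

```python
import numpy as np

def crop_roi(image, heatmap, size=512):
    """Locate the peak of the gradient heatmap and crop a size x size
    patch around it, clamped to the image bounds.

    In the pipeline, `patch` would be saved as crop.jpg and `mask`
    as crop_mask.jpg.
    """
    H, W = heatmap.shape
    # Peak of the sensitivity heatmap = center of the region of interest
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    # Clamp the crop window so it stays inside the image
    top = min(max(y - size // 2, 0), max(H - size, 0))
    left = min(max(x - size // 2, 0), max(W - size, 0))
    patch = image[top:top + size, left:left + size]
    # Illustrative mask: pixels above the 90th-percentile heatmap value
    hm_patch = heatmap[top:top + size, left:left + size]
    mask = (hm_patch > np.quantile(hm_patch, 0.9)).astype(np.uint8) * 255
    return patch, mask, (top, left)
```

The crop offsets `(top, left)` are kept so the edited patch can later be pasted back at the correct position.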
In the UltraEdit framework, the cropped image (crop.jpg), its associated mask (crop_mask.jpg), and the textual prompt “Add small and sparse brown mud spots on the road surface” are provided as inputs. Surrounding objects and lane markings near the masked area are included in the ground truth annotations to supervise the diffusion-based generation process through loss functions. The synthesized content is subsequently merged into the original image to produce the final output.
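The final merge step can be sketched as below. This is an illustrative composite under stated assumptions, not the released implementation: the function name `paste_back` is hypothetical, and the merge is a hard mask-based copy of the diffusion-edited pixels into the original image at the crop offsets.

```python
import numpy as np

def paste_back(original, edited_patch, patch_mask, top, left):
    """Merge the diffusion-edited patch into the original image,
    copying only pixels where the mask is active."""
    out = original.copy()
    h, w = edited_patch.shape[:2]
    region = out[top:top + h, left:left + w]
    m = (patch_mask > 0)[..., None]  # broadcast mask over color channels
    out[top:top + h, left:left + w] = np.where(m, edited_patch, region)
    return out
```

Pixels outside the mask are left untouched, so only the synthesized mud-spot region differs between the clean and poisoned images.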
Trigger Visualization
Visualization of our injected mud trigger on the CULane dataset. For each method (including LaneATT, ADNet,
RESA, and SCNN), we show the clean input image, the strategy-specific gradient heatmaps used for trigger placement, and
the corresponding poisoned images under LDA and LOA/LRA attacks. The highlighted regions indicate high-sensitivity areas
where the trigger is placed to maximize the attack effect.