In a federated learning (FL) system, distributed clients upload their local
models to a central server, which aggregates them into a global model. Malicious clients
may plant backdoors into the global model by uploading poisoned local
models, causing images with specific patterns to be misclassified as the
target labels. Backdoors planted by current attacks are not durable and vanish
quickly once the attackers stop model poisoning. In this paper, we investigate
the connection between the durability of FL backdoors and the relationships
between benign images and poisoned images (i.e., the images whose labels are
flipped to the target label during local training). Specifically, we find that
benign images sharing either the original or the target labels of the poisoned
images have key effects on backdoor durability. Consequently, we propose a novel
attack, Chameleon, which utilizes contrastive learning to further amplify such
effects towards a more durable backdoor. Extensive experiments demonstrate that
Chameleon significantly extends the backdoor lifespan over baselines by
$1.2\times \sim 4\times$, for a wide range of image datasets, backdoor types,
and model architectures.