Weakly supervised temporal action localization aims to learn instance-level action patterns from video-level labels, where a major challenge is action-context confusion. To overcome this challenge, one recent work builds an action-click supervision framework, which requires similar annotation costs yet steadily improves localization performance compared with conventional weakly supervised methods. In this paper, we find that a stronger action localizer can be trained at the same annotation cost if the clicks are annotated on background video frames instead, because the performance bottleneck of existing approaches mainly comes from background errors. To this end, we convert action-click supervision to background-click supervision and develop a novel method, called BackTAL. BackTAL implements two-fold modeling on the background video frames, i.e., position modeling and feature modeling. In position modeling, we not only conduct supervised learning on the annotated video frames but also design a score separation module to enlarge the score differences between potential action frames and background frames. In feature modeling, we propose an affinity module that measures frame-specific similarities among neighboring frames and dynamically attends to informative neighbors when computing temporal convolutions. Experiments on three benchmarks demonstrate the high performance of our BackTAL.
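The core idea of the affinity module, re-weighting a temporal convolution's neighborhood by each neighbor's similarity to the center frame, can be illustrated with a minimal NumPy sketch. The cosine affinity, softmax normalization, and kernel shapes below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def affinity_temporal_conv(x, w, eps=1e-8):
    """Affinity-modulated 1-D temporal convolution (illustrative sketch).

    x: (T, C) per-frame features; w: (k, C) kernel weights, k odd.
    Each neighbor in a frame's temporal window is re-weighted by its
    cosine similarity to the center frame, so the convolution attends
    more to informative (similar) neighbors and less to dissimilar ones.
    """
    T, C = x.shape
    k = w.shape[0]
    r = k // 2
    xp = np.pad(x, ((r, r), (0, 0)), mode="edge")   # pad along time axis
    out = np.zeros(T)
    for t in range(T):
        win = xp[t:t + k]                           # (k, C) neighborhood
        center = x[t]
        # cosine affinity between the center frame and each neighbor
        aff = win @ center / (
            np.linalg.norm(win, axis=1) * np.linalg.norm(center) + eps)
        aff = np.exp(aff) / np.exp(aff).sum()       # softmax-normalize weights
        # affinity-weighted convolution response for frame t
        out[t] = (aff[:, None] * win * w).sum()
    return out

# toy usage: 6 frames with 4-dim features, temporal kernel of size 3
rng = np.random.default_rng(0)
feats = rng.standard_normal((6, 4))
kernel = rng.standard_normal((3, 4))
resp = affinity_temporal_conv(feats, kernel)        # shape (6,)
```

When all frames in a window are identical, the affinity weights become uniform and the operation reduces to a plain (averaged) temporal convolution, which makes the role of the affinity term easy to verify.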