Due to its importance, video segmentation has recently regained research interest. However, there is no common agreement on the ingredients necessary for best performance. This work contributes a thorough analysis of various within- and between-frame affinities suitable for video segmentation. Our results show that a frame-based superpixel segmentation combined with a few motion-based affinities is sufficient to obtain good video segmentation performance. A second contribution of the paper is the extension of [Arbelaez et al. TPAMI'11] to include motion cues, which makes the algorithm globally aware of motion and thus improves its performance on video sequences. We further contribute an extension of an established image segmentation benchmark to videos, which allows benchmarking coarse-to-fine video segmentations against multiple human annotations. Our methods are evaluated on the Berkeley motion segmentation dataset [Brox and Malik, ECCV'10] and compared to existing methods.