Relying on random block-coordinate primal-dual methods, we design distributed algorithms for minimizing a sum of (non-)smooth convex functions involving linear operators. Distributed methods make it possible to address multi-agent problems in which each update is limited to a neighborhood of a small number of agents, the set of active agents being selected according to an arbitrary random rule. We prove the almost sure convergence of the iterates to a solution of the considered problem. When the non-smooth functions are chosen as indicator functions of convex sets, the proposed algorithms can be viewed as distributed versions of alternating projection methods onto convex sets for solving constrained optimization problems.
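As a rough illustration of the class of problems described above (the exact formulation, notation, and regularity assumptions are ours, not taken verbatim from the paper), one may think of a generic primal problem of the form
\[
\underset{x \in \mathsf{H}}{\operatorname{minimize}} \;\; \sum_{i=1}^{m} \big( f_i(x) + g_i(L_i x) \big),
\]
where each $f_i$ is a smooth convex function, each $g_i$ is a (possibly non-smooth) proper lower semicontinuous convex function, and each $L_i$ is a bounded linear operator. In this reading, choosing each $g_i$ as the indicator function of a convex set $C_i$ reduces the non-smooth terms to the constraints $L_i x \in C_i$, which is the setting in which the proposed algorithms relate to alternating projection techniques.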