U-net neural networks have produced very impressive results and are now considered state-of-the-art for many inverse problems. However, the reasons for their success are not fully understood. Sparsity-based techniques dominated the field for many years but are clearly outperformed by U-nets. Unlike U-nets, though, sparsity is well understood and comes with theoretical guarantees, which keeps sparse recovery an attractive approach, especially for medical applications. Many papers have underlined similarities between U-nets and sparse recovery. Here, we present a new approach, the learnlet transform, which belongs to the family of sparse decompositions but uses all the ingredients of neural networks. We show through denoising experiments that learnlets outperform other sparse denoising techniques, while remaining less efficient than U-nets in terms of MSE. Learnlet denoising nevertheless presents very interesting properties: far fewer parameters to learn, a full understanding of how it works, and, importantly, better generalization.
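To make the sparse-decomposition family concrete, the sketch below shows the classic analysis / threshold / synthesis pipeline that learnlets build on: decompose a noisy signal on a filter bank, shrink small coefficients with soft thresholding, and reconstruct. The Haar-like filter pair and the threshold values here are illustrative assumptions, not the actual learnlet filters; in a learnlet, the filters and thresholds would be learned from data rather than fixed.

```python
import numpy as np

def soft_threshold(x, t):
    # Soft thresholding: the classic sparsity-inducing nonlinearity.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_denoise(signal, filters, thresholds):
    # Analysis: project the signal onto each filter of the bank.
    coeffs = [np.convolve(signal, f, mode="same") for f in filters]
    # Thresholding: shrink small coefficients, assumed to carry mostly noise.
    coeffs = [soft_threshold(c, t) for c, t in zip(coeffs, thresholds)]
    # Synthesis: recombine with the time-reversed filters (boundary and
    # one-sample shift effects are ignored in this sketch).
    return sum(np.convolve(c, f[::-1], mode="same")
               for c, f in zip(coeffs, filters))

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.3 * rng.standard_normal(256)
# Illustrative Haar-like low-pass / high-pass pair; only the high-pass
# (detail) band is thresholded, as it is where noise dominates.
filters = [np.array([0.5, 0.5]), np.array([0.5, -0.5])]
denoised = sparse_denoise(noisy, filters, thresholds=[0.0, 0.3])
```

A learnlet replaces the fixed wavelet-style filters and hand-tuned thresholds above with learned analysis filters, learned thresholds, and learned synthesis filters, trained end-to-end like a neural network while keeping the interpretable sparse structure.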