• Yuchen Wu, Minshuo Chen, Zihao Li, Mengdi Wang, Yuting Wei (2024). Theoretical Insights for Diffusion Guidance: A Case Study for Gaussian Mixture Models.

  • Yuchen Wu*, Kangjie Zhou* (2024). Sharp Analysis of Power Iteration for Tensor PCA.

  • Song Mei*, Yuchen Wu* (2023). Deep networks as denoising algorithms: sample-efficient learning of diffusion models in high-dimensional graphical models.

  • Andrea Montanari*, Yuchen Wu* (2023). Posterior sampling from the spiked models via diffusion processes.

  • Zihan Wei, Sarfaraz Alam, Miki Verma, Margaret Hilderbran, Yuchen Wu, Brandon Anderson, Daniel E Ho, Jenny Suckale (2022). Integrating water quality data with a Bayesian network model to improve spatial and temporal phosphorus attribution: application to the Maumee River Basin.

  • Andrea Montanari*, Yuchen Wu* (2022). Fundamental limits of low-rank matrix estimation with diverging aspect ratios.

  • Andrea Montanari*, Yuchen Wu* (2022). Statistically optimal first order algorithms: a proof via orthogonalization.

  • Pratik Patil*, Yuchen Wu*, Ryan Tibshirani (2024). Failures and successes of cross-validation for early-stopped gradient descent in high-dimensional least squares. The 27th International Conference on Artificial Intelligence and Statistics (AISTATS, Oral).

  • Yuchen Wu*, Kangjie Zhou* (2023). Lower bounds for the convergence of tensor power iteration on random overcomplete models. The Thirty-Sixth Annual Conference on Learning Theory (COLT).

  • Andrea Montanari*, Yuchen Wu* (2023). Adversarial examples in random neural networks with general activations. Mathematical Statistics and Learning.

  • Yuchen Wu, Jakab Tardos, MohammadHossein Bateni, André Linhares, Filipe Miguel Goncalves de Almeida, Andrea Montanari, Ashkan Norouzi-Fard (2021). Streaming belief propagation for community detection. Advances in Neural Information Processing Systems (NeurIPS).

  • Michael Celentano*, Andrea Montanari*, Yuchen Wu* (2020). The estimation error of general first order methods. The Thirty-Third Annual Conference on Learning Theory (COLT).


    * Alphabetical author order

  • Statistical and computational problems in low-rank matrix estimation. Ph.D. thesis, Stanford University, 2023.