Research papers
(The headings refer to the completion dates; the publication dates are given at the end of the citations.)
- Trevor Hastie, Robert Tibshirani, and Ryan Tibshirani. Extended Comparisons of Best Subset Selection, Forward Stepwise Selection, and the Lasso (Following "Best Subset Selection from a Modern Optimization Lens" by Bertsimas, King, and Mazumder, 2016).
- Saharon Rosset and Ryan Tibshirani. From Fixed-X to Random-X Regression: Bias-Variance Decompositions, Covariance Penalties, and Prediction Error Estimation. To appear, Journal of the American Statistical Association.
- Veeranjaneyulu Sadhanala, Yu-Xiang Wang, James Sharpnack, and Ryan Tibshirani. Higher-Order Total Variation Classes on Grids: Minimax Theory and Trend Filtering Methods. Neural Information Processing Systems, 2017.
- Kevin Lin, James Sharpnack, Alessandro Rinaldo, and Ryan Tibshirani. A Sharp Error Analysis for the Fused Lasso, with Application to Approximate Changepoint Screening. Neural Information Processing Systems, 2017.
- Ryan Tibshirani. Dykstra's Algorithm, ADMM, and Coordinate Descent: Connections, Insights, and Extensions. Neural Information Processing Systems, 2017.
- Veeranjaneyulu Sadhanala and Ryan Tibshirani. Additive Models with Trend Filtering.
- Ryan Tibshirani and Saharon Rosset. Excess Optimism: How Biased is the Apparent Error of an Estimator Tuned by SURE? To appear, Journal of the American Statistical Association.
- Oscar Hernan Madrid Padilla, James Sharpnack, James Scott, and Ryan Tibshirani. The DFS Fused Lasso: Linear-Time Denoising over General Graphs. To appear, Journal of Machine Learning Research.
- Sangwon Hyun, Max G'Sell, and Ryan Tibshirani. Exact Post-Selection Inference for Changepoint Detection and Other Generalized Lasso Problems. To appear, Electronic Journal of Statistics.
- Alnur Ali, Zico Kolter, and Ryan Tibshirani. The Multiple Quantile Graphical Model. Neural Information Processing Systems, 2016.
- Veeranjaneyulu Sadhanala, Yu-Xiang Wang, and Ryan Tibshirani. Total Variation Classes Beyond 1d: Minimax Rates, and the Limitations of Linear Smoothers. Neural Information Processing Systems, 2016.
- Jing Lei, Max G'Sell, Alessandro Rinaldo, Ryan Tibshirani, and Larry Wasserman. Distribution-Free Predictive Inference for Regression. To appear, Journal of the American Statistical Association.
- David Farrow, Logan Brooks, Sangwon Hyun, Ryan Tibshirani, Donald Burke, and Roni Rosenfeld. A Human Judgment Approach to Epidemiological Forecasting. PLOS Computational Biology, Vol. 13, No. 3, 1-19, 2017.
- William Fithian, Jonathan Taylor, Robert Tibshirani, and Ryan Tibshirani. Selective Sequential Model Selection.
- Veeranjaneyulu Sadhanala, Yu-Xiang Wang, and Ryan Tibshirani. Graph Sparsification Approaches for Laplacian Smoothing. International Conference on Artificial Intelligence and Statistics, 2016.
- Ryan Tibshirani, Alessandro Rinaldo, Robert Tibshirani, and Larry Wasserman. Uniform Asymptotic Inference and the Bootstrap After Model Selection. To appear, Annals of Statistics.
- Logan Brooks, David Farrow, Sangwon Hyun, Ryan Tibshirani, and Roni Rosenfeld. Flexible Modeling of Epidemics with an Empirical Bayes Framework. PLOS Computational Biology, Vol. 11, No. 8, 1-18, 2015.
- Yen-Chi Chen, Christopher Genovese, Ryan Tibshirani, and Larry Wasserman. Nonparametric Modal Regression. Annals of Statistics, Vol. 44, No. 2, 489-514, 2016.
- Ryan Tibshirani. A General Framework for Fast Stagewise Algorithms. Journal of Machine Learning Research, Vol. 16, 2543-2588, 2015.
- Aaditya Ramdas and Ryan Tibshirani. Fast and Flexible ADMM Algorithms for Trend Filtering. Journal of Computational and Graphical Statistics, Vol. 25, No. 3, 839-858, 2016.
- Yu-Xiang Wang, James Sharpnack, Alex Smola, and Ryan Tibshirani. Trend Filtering on Graphs. Journal of Machine Learning Research, Vol. 17, 1-41, 2016.
- Taylor Arnold and Ryan Tibshirani. Efficient Implementations of the Generalized Lasso Dual Path Algorithm. Journal of Computational and Graphical Statistics, Vol. 25, No. 1, 1-27, 2016.
- Ryan Tibshirani. Degrees of Freedom and Model Search. Statistica Sinica, Vol. 25, No. 3, 1265-1296, 2015.
- Yu-Xiang Wang, Alex Smola, and Ryan Tibshirani. The Falling Factorial Basis and Its Statistical Applications. International Conference on Machine Learning, 2014.
- Ryan Tibshirani, Jonathan Taylor, Richard Lockhart, and Robert Tibshirani. Exact Post-selection Inference for Sequential Regression Procedures. Journal of the American Statistical Association, Vol. 111, No. 514, 600-620, 2016.
- Jonathan Taylor, Joshua Loftus, and Ryan Tibshirani. Inference in Adaptive Regression via the Kac-Rice Formula. Annals of Statistics, Vol. 44, No. 2, 743-770, 2016.
- Ryan Tibshirani. Adaptive Piecewise Polynomial Estimation via Trend Filtering. Annals of Statistics, Vol. 42, No. 1, 285-323, 2014.
- Richard Lockhart, Jonathan Taylor, Ryan Tibshirani, and Robert Tibshirani. A Significance Test for the Lasso. Annals of Statistics, Vol. 42, No. 2, 413-468, 2014.
- Ryan Tibshirani and Jonathan Taylor. Degrees of Freedom in Lasso Problems. Annals of Statistics, Vol. 40, No. 2, 1198-1232, 2012.
- Robert Tibshirani, Jacob Bien, Jerome Friedman, Trevor Hastie, Noah Simon, Jonathan Taylor, and Ryan Tibshirani. Strong Rules for Discarding Predictors in Lasso-Type Problems. Journal of the Royal Statistical Society: Series B, Vol. 74, No. 2, 245-266, 2012.
- Ryan Tibshirani, Holger Hoefling, and Robert Tibshirani. Nearly-Isotonic Regression. Technometrics, Vol. 53, No. 1, 54-61, 2011.
- Ryan Tibshirani and Jonathan Taylor. The Solution Path of the Generalized Lasso. Annals of Statistics, Vol. 39, No. 3, 1335-1371, 2011.
A Statistician Plays Darts
Ryan Tibshirani, Andrew Price and Jonathan Taylor
Darts is a popular game, played both in the pub and at a professional level. Yet most players aim for the highest scoring region of the board (triple 20), regardless of their skill level. It turns out that this is not always the optimal strategy! We describe a method for a player to obtain a customized heatmap of the dartboard. In this heatmap, the bright regions correspond to aiming locations which yield high (expected) scores. We also investigate alternate arrangements of the numbers 1 through 20, in an attempt to make scoring more difficult.
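The heatmap idea can be illustrated with a small simulation. Below is a minimal sketch, not the authors' code: it assumes throws land according to a symmetric Gaussian N(mu, sigma^2 I) around the aim point (the paper's model is richer), uses regulation board radii in millimeters, and the helper names `score` and `expected_score` are made up for this illustration.

```python
import numpy as np

# Standard dartboard sector order, clockwise starting from 20 at the top.
SECTORS = [20, 1, 18, 4, 13, 6, 10, 15, 2, 17, 3, 19, 7, 16, 8, 11, 14, 9, 12, 5]

def score(x, y):
    """Score of one dart landing at (x, y) mm, board center at the origin."""
    r = np.hypot(x, y)
    if r <= 6.35:
        return 50                      # double bullseye
    if r <= 15.9:
        return 25                      # single bullseye
    if r > 170.0:
        return 0                       # off the board
    theta = np.degrees(np.arctan2(y, x))
    base = SECTORS[int(((99.0 - theta) % 360.0) // 18.0)]  # 18-degree sectors
    if 99.0 <= r <= 107.0:
        return 3 * base                # triple ring
    if 162.0 <= r <= 170.0:
        return 2 * base                # double ring
    return base

def expected_score(mu_x, mu_y, sigma, n=20000, seed=0):
    """Monte Carlo estimate of the expected score when aiming at (mu_x, mu_y)."""
    rng = np.random.default_rng(seed)
    pts = rng.normal([mu_x, mu_y], sigma, size=(n, 2))
    return float(np.mean([score(x, y) for x, y in pts]))

# A very accurate player gains from aiming at the triple 20; for a highly
# inaccurate player, aim points nearer the bull start to look better.
print(expected_score(0, 103, sigma=5))   # aim at the center of triple 20
print(expected_score(0, 0, sigma=5))     # aim at the double bullseye
```

Evaluating `expected_score` over a grid of aim points, rather than at two candidates, is exactly what produces a heatmap.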
Get a personalized heatmap!
Ever wonder where you should be aiming your dart throws? We've developed an algorithm so that you can enter the scores of 50 or so dart throws aimed at the double bullseye, and get a personalized heatmap in return.
- If you are comfortable with the R programming language: R package
- If you'd rather use a point-and-click tool (and your browser has Java installed): Java applet
(and if you're curious, the Java source: DartsApplet.java, Stats.java)
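The personalization step amounts to fitting the spread parameter of the Gaussian throw model; the tools above do this from the scores alone, since landing positions are not observed. As a simpler illustration of the statistical idea, suppose the actual landing positions of throws aimed at the double bullseye had been recorded; then under the symmetric model N(0, sigma^2 I) the maximum likelihood estimate of sigma has a closed form. The function name `fit_sigma` is hypothetical.

```python
import numpy as np

def fit_sigma(points):
    """MLE of sigma under N(0, sigma^2 I): the root mean square of the
    coordinates of throws aimed at the board center."""
    pts = np.asarray(points, dtype=float)
    return float(np.sqrt(np.mean(pts ** 2)))

# Simulated check: 50 throws from a player whose true sigma is 30 mm.
rng = np.random.default_rng(1)
throws = rng.normal(0.0, 30.0, size=(50, 2))
print(fit_sigma(throws))  # roughly 30
```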
Movies
Here are some movies showing the path of optimal aiming locations, for the various dartboard arrangements discussed in the supplementary paper. The path is defined by increasing the marginal variance in the simple Gaussian model. It works best to save them to your computer and then play them.
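A miniature version of that path can be traced numerically: the expected score as a function of the aim point is the convolution of the board's score surface with the Gaussian error density, so sweeping sigma and tracking the argmax reproduces the path. The sketch below again assumes the simple symmetric-Gaussian model and regulation radii; the blur uses a 3-sigma truncated separable kernel with zero padding (score zero off the board), and the helper name `blur` is made up here.

```python
import numpy as np

SECTORS = [20, 1, 18, 4, 13, 6, 10, 15, 2, 17, 3, 19, 7, 16, 8, 11, 14, 9, 12, 5]

# Rasterize the score surface on a 1 mm grid covering the board.
xs = np.arange(-170, 171, dtype=float)
X, Y = np.meshgrid(xs, xs)
R = np.hypot(X, Y)
TH = np.degrees(np.arctan2(Y, X))
BASE = np.array(SECTORS)[(((99.0 - TH) % 360.0) // 18.0).astype(int)]
S = np.where(R <= 6.35, 50.0,
    np.where(R <= 15.9, 25.0,
    np.where(R > 170.0, 0.0,
    np.where((R >= 99.0) & (R <= 107.0), 3.0 * BASE,
    np.where((R >= 162.0) & (R <= 170.0), 2.0 * BASE, BASE)))))

def blur(img, sigma):
    """Separable Gaussian blur, 3-sigma truncation, zeros off the board."""
    rad = int(3 * sigma)
    k = np.exp(-0.5 * (np.arange(-rad, rad + 1) / sigma) ** 2)
    k /= k.sum()
    out = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, out, k, mode="same")

# Expected score = score surface convolved with the error density;
# the argmax gives the optimal aim point for each sigma.
for sigma in (5, 25, 50):
    E = blur(S, sigma)
    i, j = np.unravel_index(np.argmax(E), E.shape)
    print(f"sigma={sigma:>2} mm: aim near ({xs[j]:.0f}, {xs[i]:.0f}) mm")
```

As sigma grows, the best achievable expected score necessarily drops, and the optimal aim point drifts away from the triple 20.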
Contact: ryantibs at cmu dot edu, adp162 at gmail dot com, jtaylo at stanford dot edu
Ryan Tibshirani and Andy Price were graduate students at Stanford, with Ryan in Statistics and Andy in Electrical Engineering. Jon Taylor is a Professor of Statistics at Stanford and was Ryan's Ph.D. advisor. Ryan is now in the Statistics department at Carnegie Mellon and Andy is at Lab126.
We'd like to thank Rob Tibshirani for his many great suggestions during the development of the project. We'd also like to thank Patrick Chaplin for his eager help concerning the history of the dartboard's arrangement.