# The reciprocal Bayesian LASSO.

@article{Mallick2021TheRB, title={The reciprocal Bayesian LASSO.}, author={Himel Mallick and Rahim Alhamzawi and Vladimir Svetnik}, journal={Statistics in medicine}, year={2021} }

Reciprocal LASSO (rLASSO) regularization employs a decreasing penalty function, in contrast to conventional penalization approaches that place increasing penalties on the coefficients, leading to stronger parsimony and superior model selection relative to traditional shrinkage methods. Here we consider a fully Bayesian formulation of the rLASSO problem, based on the observation that the rLASSO estimate of the linear regression parameters can be interpreted as a Bayesian posterior mode…
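The contrast between the two penalty shapes can be sketched numerically. The reciprocal form — a penalty of λ·Σ 1/|βⱼ| over the nonzero coefficients only — follows the rLASSO literature; the helper names and toy values below are illustrative assumptions, not code from the paper:

```python
# Sketch: conventional LASSO penalty vs. reciprocal LASSO (rLASSO) penalty.
# The rLASSO form (lam * sum of 1/|beta_j| over nonzero beta_j) is the
# standard one from the reciprocal-L1 literature; everything else here
# is an illustrative assumption.
import numpy as np

def lasso_penalty(beta, lam):
    """Conventional L1 penalty: grows as coefficients grow."""
    return lam * np.sum(np.abs(beta))

def rlasso_penalty(beta, lam):
    """Reciprocal L1 penalty: shrinks as nonzero coefficients grow;
    exact zeros contribute nothing, which drives the strong parsimony."""
    nz = beta[beta != 0]
    return lam * np.sum(1.0 / np.abs(nz))

beta = np.array([0.0, 0.25, 2.0])
print(lasso_penalty(beta, lam=1.0))   # 2.25 : larger coefficients cost more
print(rlasso_penalty(beta, lam=1.0))  # 4.5  : small nonzero coefficients cost more
```

Note how the rLASSO charge on the small coefficient 0.25 (cost 4.0) dwarfs its charge on the large coefficient 2.0 (cost 0.5), which is the mechanism behind the stronger parsimony described above.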

#### 2 Citations

The reciprocal Bayesian bridge for left-censored data

- Mathematics
- Communications in Statistics - Simulation and Computation
- 2021

Bayesian reciprocal LASSO quantile regression

- Mathematics
- 2020

The reciprocal LASSO estimate for linear regression corresponds to a posterior mode when independent inverse Laplace priors are assigned on the regression coefficients. This paper studies reciprocal…
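The prior mentioned in this snippet can be written explicitly. If $u_j \sim \mathrm{Laplace}(0, 1/\lambda)$, then $\beta_j = 1/u_j$ follows an inverse Laplace distribution; the density below is a standard change-of-variables sketch, not necessarily the paper's exact parameterization:

```latex
% Inverse Laplace (reciprocal double-exponential) prior on each coefficient,
% obtained from u ~ Laplace(0, 1/lambda) via beta = 1/u, |du/dbeta| = 1/beta^2:
\pi(\beta_j \mid \lambda)
  \;=\; \frac{\lambda}{2\,\beta_j^{2}}
        \exp\!\left(-\frac{\lambda}{|\beta_j|}\right),
  \qquad \beta_j \neq 0 .
```

The exponential factor contributes the reciprocal penalty $\lambda/|\beta_j|$ to the negative log-posterior, which is how the posterior mode connects to the rLASSO objective.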

#### References

Showing 1–10 of 65 references.

The Bayesian Bridge

- Mathematics, Computer Science
- 2011

The Bayesian bridge model outperforms its classical cousin in estimation and prediction across a variety of data sets, both simulated and real, and the Markov chain Monte Carlo algorithm for fitting the bridge model exhibits excellent mixing properties, particularly for the global scale parameter.

Nonlocal Priors for High-Dimensional Estimation

- Mathematics, Medicine
- Journal of the American Statistical Association
- 2017

This paper outlines a constructive representation of non-local priors (NLPs) as mixtures of truncated distributions that enables simple posterior sampling, extends NLPs beyond previous proposals, and shows that selection priors may actually be desirable for high-dimensional estimation.

Decoupling Shrinkage and Selection in Bayesian Linear Models: A Posterior Summary Perspective

- Mathematics
- 2014

Selecting a subset of variables for linear models remains an active area of research. This article reviews many of the recent contributions to the Bayesian model selection and shrinkage prior…

Bayesian adaptive Lasso

- Computer Science, Mathematics
- 2010

Motivated by the hierarchical Bayesian interpretation of the Lasso, this work provides model selection machinery for the BaLasso by assessing posterior conditional mode estimates, yielding a unified framework for variable selection using flexible penalties.

Penalized regression, standard errors, and Bayesian lassos

- Mathematics
- 2010

Penalized regression methods for simultaneous variable selection and coefficient estimation, especially those based on the lasso of Tibshirani (1996), have received a great deal of attention in recent…

The horseshoe estimator for sparse signals

- Mathematics
- 2010

This paper proposes a new approach to sparsity, called the horseshoe estimator, which arises from a prior based on multivariate-normal scale mixtures. We describe the estimator's advantages over…

Moments of a Class of Internally Truncated Normal Distributions

- Mathematics
- 2007

Moment expressions are derived for the internally truncated normal distributions commonly applied to screening and constrained problems. They are obtained from using a recursive relation between the…

GWASinlps: non‐local prior based iterative SNP selection tool for genome‐wide association studies

- Medicine, Computer Science
- Bioinform.
- 2019

A variable selection method, named iterative non-local prior based selection for GWAS (GWASinlps), that combines the computational efficiency of a screen-and-select approach based on association learning with the parsimonious uncertainty quantification provided by the use of non-local priors in an iterative variable selection framework.

An overview of reciprocal L1-regularization for high dimensional regression data

- Mathematics
- 2018

High dimensional data play a key role in modern statistical analysis. A common objective in high dimensional data analysis is model selection, and the penalized likelihood method is…

High-Dimensional Variable Selection With Reciprocal L1-Regularization

- Mathematics
- 2015

During the past decade, penalized likelihood methods have been widely used in variable selection problems, where the penalty functions are typically symmetric about 0, continuous and nondecreasing in…