Abstract: Regression analysis remains a cornerstone of empirical research. Most existing approaches are based on least squares, which can be highly sensitive to outliers when the error distribution is far from Gaussian. This paper proposes Distributionally Adaptive Regression Estimators (DARE), a framework designed to enhance both robustness and efficiency, particularly in settings where classical least squares methods perform poorly. DARE pools distributional information through a weighted combination of quantile estimates, improving robustness to the unknown distribution of the error terms. Under suitable regularity conditions, DARE attains the same asymptotic variance as infeasible likelihood-based methods while maintaining the same convergence rate as conventional estimators. I investigate a wide range of regression models, establishing asymptotic properties under both series and local linear frameworks. In addition, I develop machine learning implementations based on random forests that preserve asymptotic validity, and I incorporate high-dimensional linear settings via the lasso to obtain efficient estimators for the nonzero coefficients. Extensive simulations demonstrate substantial finite-sample improvements. Finally, I apply DARE in two empirical analyses of treatment effects; in both cases, DARE produces point estimates comparable to those of conventional methods while delivering notably tighter confidence intervals.
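The core idea of combining quantile-level estimates can be illustrated with a minimal toy sketch. This is not the paper's DARE estimator: equal weights stand in for the efficiency-optimal weights, a slope-only linear model replaces the general regression frameworks, and grid search replaces a proper quantile-regression solver. Under heavy-tailed (contaminated normal) errors, the aggregated quantile slope remains close to the truth where least squares loses precision.

```python
import random

random.seed(0)

# Simulate y = 2*x + e with contaminated-normal (heavy-tailed) errors.
n = 300
x = [random.uniform(0, 5) for _ in range(n)]
e = [random.gauss(0, 1) + (random.gauss(0, 10) if random.random() < 0.1 else 0.0)
     for _ in range(n)]
y = [2.0 * xi + ei for xi, ei in zip(x, e)]

def pinball(u, tau):
    # Check (pinball) loss underlying quantile regression.
    return tau * u if u >= 0 else (tau - 1) * u

def quantile_slope(x, y, tau):
    # Toy slope-only quantile fit: grid-search b minimizing the pinball loss
    # of the residuals y - b*x (a stand-in for a real quantile-regression solver).
    grid = [b / 100.0 for b in range(0, 401)]   # candidate slopes 0.00 .. 4.00
    return min(grid, key=lambda b: sum(pinball(yi - b * xi, tau)
                                       for xi, yi in zip(x, y)))

# Aggregate slope estimates across several quantile levels.
taus = [0.25, 0.5, 0.75]
weights = [1 / 3] * 3   # equal weights here; DARE derives efficiency-optimal weights
slopes = [quantile_slope(x, y, t) for t in taus]
beta_agg = sum(w * s for w, s in zip(weights, slopes))

# Ordinary least squares (no intercept) for comparison.
beta_ols = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

print(f"aggregated quantile slope: {beta_agg:.2f}, OLS slope: {beta_ols:.2f}")
```

With symmetric errors the quantile-level slopes bracket the true coefficient and their average recovers it, which is the intuition behind weighting quantile information rather than relying on the conditional mean alone.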