Efficient algorithms and lower bounds for robust linear regression
We study the prototypical problem of high-dimensional linear regression in a robust model
where an ε-fraction of the samples can be adversarially corrupted. We focus on the
fundamental setting where the covariates of the uncorrupted samples are drawn from a
Gaussian distribution N(0, Σ) on ℝ^d. We give nearly tight upper bounds and computational
lower bounds for this problem. Specifically, our main contributions are as follows: For the
case that the covariance matrix is known to be the identity, we give a sample near-optimal …
Efficient Algorithms and Lower Bounds for Robust Linear Regression
W Kong - pdfs.semanticscholar.org
Observation: to change the mean by a constant, the corrupted samples must be placed at distance 1/ε
away from μ, simply because only an ε-fraction of the samples is corrupted! The variance will then increase by
(1/ε)² · ε = 1/ε. For small ε, we will therefore be able to detect that the variance is abnormally large.
Proposition: Variance is large
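The observation above can be checked numerically. The sketch below (my own illustration, not code from the paper or slides) draws clean samples from N(0, 1), replaces an ε-fraction with points at distance 1/ε, and shows that the mean shifts by roughly a constant while the variance blows up to roughly 1/ε:

```python
import random
import statistics

# Illustrative simulation of the variance-blow-up observation.
# Parameters n and eps are arbitrary choices for this sketch.
random.seed(0)
n, eps = 100_000, 0.05

clean = [random.gauss(0.0, 1.0) for _ in range(n)]

# Adversary: replace an eps-fraction of samples with points
# at distance 1/eps from the true mean mu = 0.
k = int(eps * n)
corrupted = clean[: n - k] + [1.0 / eps] * k

mean_shift = statistics.fmean(corrupted)   # ~ eps * (1/eps) = 1, a constant shift
var = statistics.pvariance(corrupted)      # ~ 1 + (1/eps)^2 * eps = 1 + 1/eps

print(f"mean shift ≈ {mean_shift:.2f}, variance ≈ {var:.1f}")
```

With ε = 0.05 the corrupted variance lands near 1 + 1/ε = 21, far above the clean variance of about 1, so a simple variance test flags the corruption, exactly as the proposition states.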