The \(\ell_p\)-norm regression problem is a classic problem in optimization with wide-ranging applications in machine learning and theoretical computer science. The goal is to compute \(x^{\star} = \arg\min_{Ax=b}\|x\|_p^p\), where \(x^{\star}\in \mathbb{R}^n\), \(A\in \mathbb{R}^{d\times n}\), \(b \in \mathbb{R}^d\), and \(d\leq n\). Designing efficient high-accuracy algorithms for this problem has been challenging both in theory and in practice, and the state-of-the-art algorithms require \(\mathrm{poly}(p)\cdot n^{\frac{1}{2}-\frac{1}{p}}\) linear system solves for \(p\geq 2\). In this paper, we provide new algorithms for \(\ell_p\)-regression (and a more general formulation of the problem) that obtain a high-accuracy solution in \(O(p n^{\frac{p-2}{3p-2}})\) linear system solves. We further propose a new inverse maintenance procedure that speeds up our algorithm to \(\tilde{O}(n^{\omega})\) total runtime, where \(O(n^{\omega})\) denotes the time required to multiply two \(n \times n\) matrices. Additionally, we give the first Iteratively Reweighted Least Squares (IRLS) algorithm that is guaranteed to converge to an optimum in a few iterations. Our IRLS algorithm has shown exceptional practical performance, beating the currently available implementations in MATLAB/CVX by 10--50x.
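To make the IRLS scheme referenced above concrete, the following is a minimal NumPy sketch of the classical (unsafeguarded) iteration for \(\min_{Ax=b}\|x\|_p^p\): each step solves the weighted least-squares problem \(\min_{Ax=b} x^{\top}Wx\) with weights \(w_i = |x_i|^{p-2}\) from the previous iterate, via the closed form \(x = W^{-1}A^{\top}(AW^{-1}A^{\top})^{-1}b\). The function name \texttt{irls\_lp}, the damping constant \texttt{eps}, and the iteration count are illustrative choices, not from the paper; unlike the algorithm presented in this work, this textbook version carries no convergence guarantee.
\begin{verbatim}
import numpy as np

def irls_lp(A, b, p=4, iters=50, eps=1e-8):
    # Classical IRLS sketch for: minimize ||x||_p^p subject to Ax = b.
    # Each step solves min x^T W x s.t. Ax = b, whose closed form is
    #   x = W^{-1} A^T (A W^{-1} A^T)^{-1} b,
    # with weights w_i = |x_i|^{p-2} from the previous iterate.
    x = A.T @ np.linalg.solve(A @ A.T, b)  # minimum ell_2-norm start (W = I)
    for _ in range(iters):
        w = np.abs(x) ** (p - 2) + eps     # eps keeps weights away from zero
        Winv_At = A.T / w[:, None]         # W^{-1} A^T (scale row i by 1/w_i)
        x = Winv_At @ np.linalg.solve(A @ Winv_At, b)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 100))
b = rng.standard_normal(20)
x = irls_lp(A, b, p=4)
print(np.linalg.norm(A @ x - b))  # feasibility residual: should be ~0
\end{verbatim}
Every iterate is feasible by construction, since \(A x = AW^{-1}A^{\top}(AW^{-1}A^{\top})^{-1}b = b\); what plain IRLS lacks, and what our algorithm supplies, is a guarantee that the objective converges to the optimum in a bounded number of such linear system solves.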