We give improved algorithms for the \(\ell_{p}\)-regression problem, \(\min_{x} \|x\|_p^p\) subject to \(Ax = b\), for all \(p \in (1,2) \cup (2,\infty)\). Our algorithms obtain a high-accuracy solution in \(\tilde{O}_p(m^{\frac{|p-2|}{2p + |p-2|}}) \leq \tilde{O}_p(m^{\frac{1}{3}})\) iterations, where each iteration requires solving an \(m \times m\) linear system, with \(m\) being the dimension of the ambient space. Incorporating a procedure for maintaining an approximate inverse of the linear systems that we need to solve at each iteration, we give algorithms for solving \(\ell_{p}\)-regression to \(1/\mathrm{poly}(n)\) accuracy that run in time \(\tilde{O}_p(m^{\max\{\omega, 7/3\}})\), where \(\omega\) is the matrix multiplication constant. For the current best value of \(\omega > 2.37\), this means that we can solve \(\ell_{p}\)-regression as fast as \(\ell_{2}\)-regression, for all constant \(p\) bounded away from 1. Our algorithms can be combined with nearly-linear-time solvers for linear systems in graph Laplacians to compute minimum \(\ell_{p}\)-norm flow / voltage solutions to \(1/\mathrm{poly}(n)\) accuracy on an undirected graph with \(m\) edges in \(\tilde{O}_p(m^{1 + \frac{|p-2|}{2p + |p-2|}}) \le \tilde{O}_p(m^{\frac{4}{3}})\) time. For sparse graphs and for matrices with similar dimensions, our iteration counts and running times improve upon the \(p\)-norm regression algorithm of [Bubeck-Cohen-Lee-Li, STOC'18], as well as upon general-purpose convex optimization algorithms. At the core of our algorithms is an iterative refinement scheme for \(\ell_{p}\)-norms, built on the quadratically-smoothed \(\ell_{p}\)-norms introduced in the work of Bubeck et al. Formally, given an initial solution, we construct a problem that seeks to minimize a quadratically-smoothed \(\ell_{p}\) norm over a subspace, such that even a crude solution to this problem allows us to improve the initial solution by a constant factor, leading to algorithms with fast convergence.
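For concreteness, the quadratically-smoothed \(\ell_{p}\)-norm referenced above can be sketched per coordinate as follows; the notation \(\gamma_p\) and the threshold parameter \(t\) follow the convention of Bubeck et al., and this restatement is illustrative rather than a definition taken from the abstract itself:

```latex
% Quadratically-smoothed version of |x|^p (per coordinate): behaves like a
% quadratic for |x| <= t and like |x|^p for |x| > t, with value and first
% derivative matching at |x| = t.
\[
\gamma_p(t, x) =
\begin{cases}
  \frac{p}{2}\, t^{p-2} x^2 + \left(1 - \frac{p}{2}\right) t^p, & |x| \le t,\\[2pt]
  |x|^p, & |x| > t.
\end{cases}
\]
```

At \(|x| = t\) both branches equal \(t^p\) and both derivatives equal \(p\, t^{p-1}\), so \(\gamma_p\) is continuously differentiable; the quadratic branch is what makes the residual problems amenable to fast (nearly \(\ell_2\)-like) solvers.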