You don’t need to be concerned with non-normality of either the dependent or the independent variables.
The theory allowing strong interpretation of least squares estimates assumes normality of the “error term.” If you are in the happy but rare situation of having only causes as independent variables and an effect as the dependent variable, the error term represents the total causal impact of all variables not included. Otherwise it just measures the incompleteness of your set of predictors. Either way, it is something not observed, which is why making strong assumptions (such as a normal distribution) is required. For the variables you can actually observe, any distribution is fine.
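To make the point concrete, here is a small simulation (entirely made up for illustration): the predictor is strongly skewed (lognormal), but because the error term is normal, least squares still recovers the coefficients well and the residuals look normal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: a badly skewed (lognormal) predictor,
# but a normal error term -- the only distribution the theory cares about.
x = rng.lognormal(mean=0.0, sigma=1.0, size=500)
y = 2.0 + 3.0 * x + rng.normal(scale=1.0, size=500)

# Ordinary least squares fit of y on x
slope, intercept = np.polyfit(x, y, deg=1)
residuals = y - (intercept + slope * x)

# The predictor itself is decisively non-normal...
print("Shapiro p for x:", stats.shapiro(x).pvalue)
# ...yet the estimates are close to the true values (2 and 3),
# because the error term, not x, was normal.
print("intercept:", intercept, "slope:", slope)
```

Despite the extreme skew in `x`, the fitted slope and intercept land close to the true values of 3 and 2, which is exactly why normality checks on the observed variables are beside the point.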
The residuals from your regression are estimates of that error term. You should plot and test them, and if their distribution is far from normal, reconsider the model. Improving the model should start with knowledge of the subject, so nothing I can say in general should take priority over that. But do reconsider the linearity assumption. Perhaps the true relationship is closer to linear in logarithms, for example, than in raw form.
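A sketch of that diagnostic loop, on invented data whose true relationship is linear in logs: fit on the raw scale, test the residuals, find them badly non-normal, then refit on the log scale and watch them recover. (In practice you would also look at a Q-Q plot, not just a test.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical data generated as log(y) = 1 + 0.5*x + normal noise,
# i.e. the relationship is linear in logarithms, not in raw form.
x = rng.uniform(0.0, 4.0, size=400)
y = np.exp(1.0 + 0.5 * x + rng.normal(scale=0.3, size=400))

def ols_residuals(x, response):
    """Residuals from a simple least squares fit of response on x."""
    slope, intercept = np.polyfit(x, response, deg=1)
    return response - (intercept + slope * x)

# Raw-scale fit: curvature and multiplicative noise leave skewed residuals,
# and the Shapiro-Wilk test rejects normality.
p_raw = stats.shapiro(ols_residuals(x, y)).pvalue

# Log-scale fit: the residuals now estimate the true normal error term.
p_log = stats.shapiro(ols_residuals(x, np.log(y))).pvalue

print(f"raw-scale Shapiro p = {p_raw:.2g}, log-scale Shapiro p = {p_log:.2g}")
```

The failed test on the raw scale is not telling you to transform `y` for its own sake; it is telling you the model form is wrong, and here the subject-matter fix (a log-linear relationship) repairs the residuals as a side effect.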