Hyperparameter optimization is a classical problem in machine learning. Traditional hyperparameter optimization methods consider only a finite set of candidate values, which conflicts with the fact that the ultimate goal of hyperparameter optimization is to find the best model over the whole hyperparameter space. This talk will discuss how to solve the hyperparameter optimization problem over the whole hyperparameter space with one hyperparameter, multiple hyperparameters, or a very large number of hyperparameters. Specifically, 1) for one hyperparameter, we propose a generalized error path algorithm to find the model with the minimum cross-validation error over the whole hyperparameter space; 2) for the cost-sensitive support vector machine with two hyperparameters, we propose a solution and error surface algorithm, and use it to seek the model with the minimum cross-validation error over the whole hyperparameter space; 3) for a very large number of hyperparameters, we propose a new hyperparameter optimization method based on zeroth-order hyper-gradients, which is simple, scalable, flexible, effective, and efficient.
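
To illustrate the general idea behind zeroth-order hyper-gradients (not the speaker's actual algorithm), the sketch below approximates the gradient of the validation error with respect to a single regularization hyperparameter by finite differences and takes gradient steps on it; the data, the ridge-regression inner problem, and all step sizes are illustrative assumptions.

```python
# Minimal sketch of zeroth-order hyper-gradient descent on one hyperparameter.
# The hyper-gradient is estimated by a central finite difference of the
# validation error, so no analytic differentiation through training is needed.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data split into train / validation sets (assumed setup).
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.5 * rng.normal(size=200)
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

def train(lam):
    """Closed-form ridge regression with regularization strength lam."""
    d = X_tr.shape[1]
    return np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)

def val_error(log_lam):
    """Validation mean squared error of the model trained with exp(log_lam)."""
    w = train(np.exp(log_lam))
    return np.mean((X_va @ w - y_va) ** 2)

# Zeroth-order hyper-gradient descent on log(lambda).
log_lam, lr, mu = 0.0, 0.5, 1e-3
for step in range(50):
    # Central finite-difference estimate of d(val_error)/d(log_lam).
    g = (val_error(log_lam + mu) - val_error(log_lam - mu)) / (2 * mu)
    log_lam -= lr * g

print(f"selected lambda ~ {np.exp(log_lam):.4f}, "
      f"validation MSE ~ {val_error(log_lam):.4f}")
```

Because only function evaluations of the validation error are needed, the same scheme extends to many hyperparameters by perturbing along random directions, which is one reason such methods scale beyond what exhaustive grid search over finite candidates can handle.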