julia – NLopt SLSQP discards a good solution in favour of an older, worse solution

I'm solving a standard optimisation problem from finance – portfolio optimisation. The vast majority of the time, NLopt returns a sensible solution. However, on rare occasions the SLSQP algorithm appears to iterate to the correct solution and then, for no obvious reason, chooses to return a clearly suboptimal solution from roughly a third of the way through the iterative process. Interestingly, changing the initial parameter vector by a very small amount fixes the problem.

I've managed to isolate a relatively simple working example of the behaviour I'm talking about. Apologies that the numbers are a bit messy – it's the best I could do. The following code can be cut and pasted into the Julia REPL. It will run and print the value of the objective function and the parameters each time NLopt calls the objective function. I call the optimisation routine twice. If you scroll back through the output printed by the code below, you'll notice that on the first call the optimisation routine iterates to a good solution with an objective function value of 0.0022, but then, for no obvious reason, goes back to a much earlier solution where the objective function is 0.0007, and returns that instead. On the second call to the optimisation function, I use a slightly different starting parameter vector. Again, the optimisation routine iterates to the same good solution, but this time it returns the good solution with objective function value 0.0022.

So, the question is: does anyone know why, in the first case, SLSQP abandons the better solution in favour of a much worse one from only about a third of the way through the iterative process? If so, is there any way I can fix this?

#-------------------------------------------
#Load required packages (LinearAlgebra provides dot on Julia 1.x)
using NLopt
using LinearAlgebra
#Define objective function for the portfolio optimisation problem (maximise expected return subject to variance constraint)
function obj_func!(param::Vector{Float64}, grad::Vector{Float64}, meanVec::Vector{Float64}, covMat::Matrix{Float64})
    if length(grad) > 0
        tempGrad = meanVec - covMat * param
        for j = 1:length(grad)
            grad[j] = tempGrad[j]
        end
        println("Gradient vector = " * string(grad))
    end
    println("Parameter vector = " * string(param))
    fOut = dot(param, meanVec) - (1/2)*dot(param, covMat*param)
    println("Objective function value = " * string(fOut))
    return(fOut)
end
#Define standard equality constraint for the portfolio optimisation problem
function eq_con!(param::Vector{Float64}, grad::Vector{Float64})
    if length(grad) > 0
        for j = 1:length(grad)
            grad[j] = 1.0
        end
    end
    return(sum(param) - 1.0)
end
#Function to call the optimisation process with appropriate input parameters
function do_opt(meanVec::Vector{Float64}, covMat::Matrix{Float64}, paramInit::Vector{Float64})
    opt1 = Opt(:LD_SLSQP, length(meanVec))
    lower_bounds!(opt1, [0.0, 0.0, 0.05, 0.0, 0.0, 0.0])
    upper_bounds!(opt1, [1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
    equality_constraint!(opt1, eq_con!)
    ftol_rel!(opt1, 0.000001)
    fObj = ((param, grad) -> obj_func!(param, grad, meanVec, covMat))
    max_objective!(opt1, fObj)
    (fObjOpt, paramOpt, flag) = optimize(opt1, paramInit)
    println("Returned parameter vector = " * string(paramOpt))
    println("Return objective function = " * string(fObjOpt))
end
#-------------------------------------------
#Inputs to optimisation
meanVec = [0.00238374894628471,0.0006879970888824095,0.00015027322404371585,0.0008440624572209092,-0.004949409024535505,-0.0011493778903180567]
covMat = [8.448145928621056e-5 1.9555283947528615e-5 0.0 1.7716366331331983e-5 1.5054664977783003e-5 2.1496436765051825e-6;
          1.9555283947528615e-5 0.00017068536691928327 0.0 1.4272576023325365e-5 4.2993023110905543e-5 1.047156519965148e-5;
          0.0 0.0 0.0 0.0 0.0 0.0;
          1.7716366331331983e-5 1.4272576023325365e-5 0.0 6.577888700124854e-5 3.957059294420261e-6 7.365234067319808e-6;
          1.5054664977783003e-5 4.2993023110905543e-5 0.0 3.957059294420261e-6 0.0001288060347757139 6.457128839875466e-6;
          2.1496436765051825e-6 1.047156519965148e-5 0.0 7.365234067319808e-6 6.457128839875466e-6 0.00010385067478418426]
paramInit = [0.0,0.9496114216578236,0.050388578342176464,0.0,0.0,0.0]

#Call the optimisation function
do_opt(meanVec, covMat, paramInit)

#Re-define initial parameters to very similar numbers
paramInit = [0.0,0.95,0.05,0.0,0.0,0.0]

#Call the optimisation function again
do_opt(meanVec, covMat, paramInit)

Note: I'm aware that my covariance matrix is positive semi-definite rather than strictly positive definite. That is not the source of the problem. I've confirmed this by changing the diagonal element of the zero row to a small but clearly non-zero value; the problem persists in the example above, as well as in other examples I can generate randomly.
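
For what it's worth, that check can be reproduced with a short sketch like the one below (my own illustration, not part of the original script; the 1.0e-6 perturbation is an arbitrary choice), run after the code listing above:

covMatPD = copy(covMat)
covMatPD[3, 3] = 1.0e-6   #small but clearly non-zero variance for the otherwise zero row/column
#Re-run both optimisations with the perturbed (now positive definite) matrix; the same behaviour appears
do_opt(meanVec, covMatPD, [0.0, 0.9496114216578236, 0.050388578342176464, 0.0, 0.0, 0.0])
do_opt(meanVec, covMatPD, [0.0, 0.95, 0.05, 0.0, 0.0, 0.0])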

Best answer: SLSQP is a constrained optimisation algorithm. At every iteration it has to consider both the objective value and whether the constraints are satisfied; the final output is the best value found at a point where the constraints hold.

Changing eq_con! to print out the value of the constraint:

function eq_con!(param::Vector{Float64}, grad::Vector{Float64})
    if length(grad) > 0
        for j = 1:length(grad)
            grad[j] = 1.0
        end
    end
    @show sum(param)-1.0
    return(sum(param) - 1.0)
end

shows that the last evaluated point satisfying the constraint in the first run is:

Objective function value = 0.0007628202546187453
sum(param) - 1.0 = 0.0

In the second run, all the evaluated points satisfy the constraint. This explains the behaviour and shows that it is reasonable.
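
To verify this directly, one can re-run the first optimisation outside of do_opt and inspect the point that comes back. The sketch below is not part of the original answer; it simply reuses the definitions from the question's listing:

opt1 = Opt(:LD_SLSQP, length(meanVec))
lower_bounds!(opt1, [0.0, 0.0, 0.05, 0.0, 0.0, 0.0])
upper_bounds!(opt1, [1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
equality_constraint!(opt1, eq_con!)
ftol_rel!(opt1, 0.000001)
max_objective!(opt1, (param, grad) -> obj_func!(param, grad, meanVec, covMat))
(fObjOpt, paramOpt, flag) = optimize(opt1, [0.0, 0.9496114216578236, 0.050388578342176464, 0.0, 0.0, 0.0])
#The equality-constraint residual at the returned point should be essentially zero,
#even though better objective values were seen at intermediate, infeasible iterates
println("sum(paramOpt) - 1.0 = " * string(sum(paramOpt) - 1.0))
println("Objective at returned point = " * string(fObjOpt))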

Addendum:

The underlying problem causing the parameter instability is the exactness of the equality constraint. Quoting the NLopt reference (http://ab-initio.mit.edu/wiki/index.php/NLopt_Reference#Nonlinear_constraints):

For equality constraints, a small positive tolerance is strongly advised in order to allow NLopt to converge even if the equality constraint is slightly nonzero.

Indeed, changing the equality_constraint! call in do_opt to

    equality_constraint!(opt1, eq_con!,0.00000001)

gives the 0.0022 solution for both initial parameter vectors.
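
For reference, here is a sketch of do_opt with that change in place (renamed do_opt_tol only to keep it distinct from the original; everything else is copied from the question's listing):

function do_opt_tol(meanVec::Vector{Float64}, covMat::Matrix{Float64}, paramInit::Vector{Float64})
    opt1 = Opt(:LD_SLSQP, length(meanVec))
    lower_bounds!(opt1, [0.0, 0.0, 0.05, 0.0, 0.0, 0.0])
    upper_bounds!(opt1, [1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
    #Allow the equality constraint to be violated by up to 1e-8 instead of requiring exact equality
    equality_constraint!(opt1, eq_con!, 0.00000001)
    ftol_rel!(opt1, 0.000001)
    max_objective!(opt1, (param, grad) -> obj_func!(param, grad, meanVec, covMat))
    (fObjOpt, paramOpt, flag) = optimize(opt1, paramInit)
    println("Returned parameter vector = " * string(paramOpt))
    println("Returned objective function = " * string(fObjOpt))
    return (fObjOpt, paramOpt, flag)
end
#Both starting vectors now return the 0.0022 solution
do_opt_tol(meanVec, covMat, [0.0, 0.9496114216578236, 0.050388578342176464, 0.0, 0.0, 0.0])
do_opt_tol(meanVec, covMat, [0.0, 0.95, 0.05, 0.0, 0.0, 0.0])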
