Our solver currently solves the following optimization problem:
$$\min_{A} \frac{1}{2}\|K(t) - (I + A)K(0)(I + A^\top)\|^2 + \frac{\lambda}{2} \|A\|^2, \quad\mathrm{s.t.}~A_{i,j} = 0, \forall(i, j) \in \mathcal{I}$$
where $\mathcal{I} = \{(i, j) \mid K(t)^{-1}_{i,j} = 0~\text{and}~K(0)^{-1}_{i,j} = 0\}$.
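For concreteness, here is a minimal sketch of the masked objective, assuming `Kt` and `K0` are dense NumPy arrays, the norms are Frobenius norms, and zeros in the precision matrices are detected with a small tolerance `tol` (all names are illustrative, not the solver's actual API):

```python
import numpy as np

def support_mask(Kt, K0, tol=1e-10):
    """Mask I: entries where both K(t)^{-1} and K(0)^{-1} are (numerically) zero."""
    Pt, P0 = np.linalg.inv(Kt), np.linalg.inv(K0)
    return (np.abs(Pt) < tol) & (np.abs(P0) < tol)

def objective(A, Kt, K0, lam):
    """0.5 * ||K(t) - (I + A) K(0) (I + A)^T||_F^2 + 0.5 * lam * ||A||_F^2."""
    n = K0.shape[0]
    M = np.eye(n) + A
    resid = Kt - M @ K0 @ M.T
    return 0.5 * np.sum(resid ** 2) + 0.5 * lam * np.sum(A ** 2)
```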
Since $K(t)$ and $K(0)$ are variance-covariance matrices, the mask $\mathcal{I}$ is symmetric. However, in the application we are interested in, we want at most one of $A_{i,j}$ and $A_{j,i}$ to be nonzero for each unmasked pair. Therefore we need additional penalization on top of the current mask.
Potential solution:
Here we propose to use a trimmed $\ell_1$ regularizer, following the work at
http://proceedings.mlr.press/v97/yun19a/yun19a.pdf
and change the objective to
$$\min_{A,w} \frac{1}{2}\|K(t) - (I + A)K(0)(I + A^\top)\|^2 + \frac{\lambda}{2} \|A\|^2 + \tau\sum_{(i, j) \in \mathcal{\bar I}_{--}} (w_{i,j} |A_{i, j}| + (1 - w_{i, j})|A_{j, i}|)$$
and we have the constraints
$$A_{i,j} = 0, \forall(i, j) \in \mathcal{I},~\text{and}~w_{i, j}\in[0,1], \forall (i, j)\in\mathcal{\bar I}_{--}$$
where $\mathcal{\bar I}_{--} = \{(i, j)|i < j, K(t)^{-1}_{i,j} \ne 0~\mathrm{or}~K(0)^{-1}_{i,j} \ne 0\}$.
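For reference, a sketch of how $\mathcal{\bar I}_{--}$ and the trimmed penalty could be evaluated, under the same illustrative assumptions (hypothetical helper names; nonzeros of the precision matrices detected with a tolerance, and $w$ stored as a dict keyed by the upper-triangular pairs):

```python
import numpy as np

def trimmed_pairs(Kt, K0, tol=1e-10):
    """Pairs (i, j), i < j, where either precision entry is nonzero (the set I_bar_--)."""
    Pt, P0 = np.linalg.inv(Kt), np.linalg.inv(K0)
    n = Kt.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if abs(Pt[i, j]) >= tol or abs(P0[i, j]) >= tol]

def trimmed_penalty(A, w, pairs, tau):
    """tau * sum over I_bar_-- of (w_ij |A_ij| + (1 - w_ij) |A_ji|)."""
    return tau * sum(w[(i, j)] * abs(A[i, j]) + (1.0 - w[(i, j)]) * abs(A[j, i])
                     for (i, j) in pairs)
```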
In this case, since the penalty is linear in each $w_{i,j}$ for fixed $A$, its minimizer lies at an endpoint, so we hope $w_{i,j}$ will reach $0$ or $1$; the full weight $\tau$ then falls on only one of $A_{i,j}$ and $A_{j,i}$, and the $\ell_1$ penalty drives that entry to zero.
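One natural way to exploit this is alternating minimization: for fixed $A$ the optimal $w_{i,j}$ sits at an endpoint (put the weight on the smaller of the two entries), and for fixed $w$ the $A$-subproblem can be handled with proximal-gradient steps (soft-thresholding on the trimmed entries, hard zeros on $\mathcal{I}$). A rough sketch under those assumptions; `grad` is the gradient of the smooth part and is assumed to be computed elsewhere, and all names are hypothetical:

```python
import numpy as np

def w_update(A, pairs):
    """For fixed A the penalty is linear in each w_ij, so the minimizer is an
    endpoint: put the weight on whichever of |A_ij|, |A_ji| is smaller."""
    return {(i, j): (1.0 if abs(A[i, j]) <= abs(A[j, i]) else 0.0) for (i, j) in pairs}

def soft(x, t):
    """Scalar soft-thresholding operator."""
    return np.sign(x) * max(abs(x) - t, 0.0)

def prox_step(A, grad, mask, pairs, w, tau, step):
    """One proximal-gradient step on A for fixed w: gradient step on the smooth
    part, soft-thresholding on the trimmed-l1 entries, hard zeros on the mask I."""
    A_new = A - step * grad
    for (i, j) in pairs:
        A_new[i, j] = soft(A_new[i, j], step * tau * w[(i, j)])
        A_new[j, i] = soft(A_new[j, i], step * tau * (1.0 - w[(i, j)]))
    A_new[mask] = 0.0  # enforce A_ij = 0 for (i, j) in I
    return A_new
```

Alternating `w_update` with a few `prox_step` iterations until $A$ stabilizes gives a simple baseline; since the joint problem is nonconvex, this is a heuristic and is not guaranteed to reach the global optimum.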