This notebook can be accessed through mybinder.org; click the Binder badge to load the project.

Tip

The documents proceed with three example economies with different features:

  • the first features damage function uncertainty and its resolution (demonstrated in this notebook);

  • as a supplement to the discussion of \(\xi_r\), we also demonstrate the impact of the smooth ambiguity parameter \(\xi_a\) in the following section (Section 4.3, smooth ambiguity);

  • the second features a novel uncertainty decomposition that incorporates robustness to model ambiguity and misspecification (Section 5);

  • the third investigates the impact of uncertain advances in the availability of less carbon-intensive technologies (Section 6).

4 Illustrative economy I

We pose an \(AK\) technology for which output is proportional to capital and can be allocated between investment and consumption. Capital in this specification should be broadly conceived. Suppose that there are adjustment costs to capital that are represented as the product of capital times a quadratic function of the investment-capital ratio. Given the output constraint and capital evolution imposed by the \(AK\) technology, it suffices to let the planner choose the investment-capital ratio.

Formally, “undamaged” capital evolves as

\[d K_t = K_t \left[ \mu_k (Z_t) dt + \left({\frac {I_t}{K_t}} \right)dt - {\frac { \kappa} 2} \left( {\frac {I_t} {K_t}} \right)^2 dt + \sigma_k(Z_t) dW_t^k \right]\]

where \(K_t\) is the capital stock and \(I_t\) is investment. The capital evolution expressed in logarithms is

\[d\log K_t = \left[ \mu_k (Z_t) + \left({\frac {I_t}{K_t}} \right) - {\frac { \kappa} 2} \left( {\frac {I_t} {K_t}} \right)^2 \right] dt - {\frac {\vert \sigma_k(Z_t) \vert^2} 2}dt+ \sigma_k(Z_t) dW_t^k .\]

The sum of consumption, \(C_t\), and investment, \(I_t\), is constrained to be proportional to capital:

\[C_t + I_t = \alpha K_t\]
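
To make these dynamics concrete, here is a minimal simulation sketch of the log-capital evolution under a constant investment-capital ratio, holding \(Z_t\) fixed so that \(\mu_k\) and \(\sigma_k\) are constants. The parameter values below are placeholders for illustration only; they are not the calibration used later in this notebook.

import numpy as np

# Placeholder drift, volatility, and adjustment-cost parameters (illustration only)
μ_k, σ_k, κ = 0.04, 0.01, 6.0
i_k = 0.09                      # investment-capital ratio, the planner's control
dt, T = 0.25, 400               # quarterly time step, 100 years
rng = np.random.default_rng(0)

log_K = np.zeros(T + 1)
# drift of d log K: μ_k + i/k - (κ/2)(i/k)² - |σ_k|²/2, as in the equation above
drift = μ_k + i_k - 0.5 * κ * i_k**2 - 0.5 * σ_k**2
for t in range(T):
    log_K[t + 1] = log_K[t] + drift * dt + σ_k * np.sqrt(dt) * rng.standard_normal()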

Next, we consider environmental damages. We suppose that temperature proportionately shifts consumption and capital through a multiplicative factor \(N_t\) that captures damages to productive capacity induced by climate change. For instance, damage-adjusted consumption is \({\widetilde C}_t = {\frac {C_t}{N_t}}\) and damage-adjusted capital is \({\widetilde K}_t = {\frac {{K}_t }{N_t}}\).

Planner preferences are time-separable with a unitary elasticity of substitution. The planner’s instantaneous utility from “damaged consumption” and emissions is given by:

\[\begin{align*} & (1-\eta) \log {\tilde C}_t + \eta \log {\mathcal E}_t \cr & = (1-\eta)( \log C_t -\log K_t ) + (1-\eta)( \log K_t - \log N_t) + \eta \log {\mathcal E}_t \end{align*}\]

We let \(\delta\) be the subjective rate of discount used in preferences. We can think of emissions and consumption as distinct goods, or we can think of \(\widetilde{C}_t\) as an intermediate good that when combined with emissions determines final consumption.

Note

We obtain a further simplification by letting:

\[\widetilde{\mathcal{E}}_t = \mathcal{E}_t (\iota_y \cdot Z_t)\]

We use \(\widetilde{\mathcal{E}}_t\) as the control variable and then deduce the implications for \(\mathcal{E}_t\).
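
Recovering \(\mathcal{E}_t\) from the control is then a single division. A small sketch with hypothetical values for \(\iota_y\) and \(Z_t\) (the notebook itself works directly with \(\widetilde{\mathcal{E}}_t\)):

import numpy as np

ι_y = np.array([1.0, 0.0, 0.0])   # hypothetical loading vector on the state Z_t
Z_t = np.array([1.2, 0.9, 1.1])   # hypothetical realization of Z_t
e_tilde = 5.0                     # chosen control, Ẽ_t
e = e_tilde / (ι_y @ Z_t)         # implied emissions E_t, since Ẽ_t = E_t (ι_y · Z_t)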

4.1 HJB equations and robustness

The uncertainty that we consider has a single jump point after which the damage function uncertainty is revealed. This leads us to compute continuation value functions conditioned on each of the damage function specifications. These continuation value functions are then used to summarize post-jump outcomes when we compute the initial value function. We describe the Hamilton-Jacobi-Bellman (HJB) equations for each of these steps in what follows. The computational methods are described in Appendix B.

The parameter values are as follows:

| Parameter | Value |
|-----------|-------|
| \(\delta\) | 0.01 |
| \(\eta\) | 0.032 |
| \(\varsigma'\) | [2.23, 0, 0] |

Damage parameters are described in Section 3.

4.1.1 Post-jump continuation value functions

Conditioned on each of the damage functions \(m = 1, 2, \dots, 20\), solve for the corresponding \(\phi_m(y)\):

\[\begin{align*} 0 = \max_{\tilde e} \min_h \min_{\omega_\ell, \sum_{\ell =1}^L \omega_\ell = 1} & - \delta \phi_m(y) + \eta \log \tilde e \cr & + \frac {d \phi_m(y)}{d y} {\tilde e} \varsigma \cdot h + {\frac {(\eta - 1)} \delta }\left[\gamma_1 + \gamma_2 y + \gamma_3^m (y- {\overline y} ) \right] {\tilde e} \varsigma \cdot h + {\frac {\xi_r} 2} h'h \cr & + \frac {d \phi_m(y)}{d y} \sum_{\ell=1}^L \omega_\ell \theta_\ell {\tilde e} + {\frac 1 2} \frac {d^2 \phi_m(y)}{(dy)^2} \mid \varsigma\mid^2 \tilde e^2 \cr &+ {\frac {(\eta - 1)} \delta} \left( \left[ \gamma_1 + \gamma_2 y + \gamma_3^m (y - \overline y) \right] \sum_{\ell=1}^L \omega_\ell \theta_\ell {\tilde e} + {\frac 1 2} (\gamma_2 + \gamma_3^m) \mid \varsigma\mid^2 \tilde e^2 \right) \cr &+ \xi_a \sum_{\ell = 1}^L \omega_\ell \left( \log \omega_\ell - \log \pi_\ell \right). \end{align*}\]
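
The minimization over \(\omega\) in this HJB has an exponential-tilting solution: collecting the terms that multiply \(\omega_\ell\) into a score \(s_\ell\), the first-order conditions give \(\omega_\ell \propto \pi_\ell \exp(-s_\ell / \xi_a)\). Below is a minimal sketch of that tilting step only; the solver in src.model handles the full coupled system.

import numpy as np

def tilted_weights(scores, prior, ξ_a):
    # Minimize Σ ω_l s_l + ξ_a Σ ω_l (log ω_l - log π_l) subject to Σ ω_l = 1:
    # the solution is ω_l ∝ π_l exp(-s_l / ξ_a). Smaller ξ_a means stronger tilting.
    log_ω = np.log(prior) - scores / ξ_a
    log_ω -= log_ω.max()          # stabilize before exponentiating
    ω = np.exp(log_ω)
    return ω / ω.sum()

# toy example: three climate models with equal priors; the third (lowest score,
# i.e. most adverse to the minimizing agent) receives the most weight
ω = tilted_weights(np.array([0.0, -0.1, -0.3]), np.ones(3) / 3, ξ_a=0.1)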

4.1.2 Pre-jump value function

The pre-jump value function has a similar structure, with two exceptions:

  • we include the intensity function discussed earlier, and

  • we introduce robustness concerns for both the intensity and the distribution over the alternative \(\gamma_3^m\) coefficients.

Given these modifications, we include:

\[\mathcal J (y) \sum_{m=1}^M g_m \pi_m \left[ \phi_m(\overline y) - \phi(y) \right] + \xi_r {\mathcal J}(y) \sum_{m=1}^M \pi_m \left( 1 - g_m + g_m \log g_m \right)\]

in the HJB and solve for pre-jump value function \(\phi(y)\) on \([0, \overline{y}]\):

\[\begin{align*} 0 = \max_{\tilde e} \min_h \min_{\omega_\ell, \sum_{\ell =1}^L \omega_\ell = 1} \min_{g_m \geqslant 0} & - \delta \phi(y) + \eta \log \tilde e \cr & + \frac {d \phi(y)}{d y} {\tilde e} \varsigma \cdot h + {\frac {(\eta - 1)} \delta }\left[\gamma_1 + \gamma_2 y \right] {\tilde e} \varsigma \cdot h + {\frac {\xi_r} 2} h'h \cr & + \frac {d \phi(y)}{d y} \sum_{\ell=1}^L \omega_\ell \theta_\ell {\tilde e} + {\frac 1 2} \frac {d^2 \phi(y)}{(dy)^2} \mid \varsigma\mid^2 \tilde e^2 \cr &+ {\frac {(\eta - 1)} \delta} \left( \left[ \gamma_1 + \gamma_2 y\right] \sum_{\ell=1}^L \omega_\ell \theta_\ell {\tilde e} + {\frac 1 2} \gamma_2 \mid \varsigma\mid^2 \tilde e^2 \right) \cr &+ \xi_a \sum_{\ell = 1}^L \omega_\ell \left( \log \omega_\ell - \log \pi_\ell \right)\cr &+ \mathcal J (y) \sum_{m=1}^M g_m \pi_m \left[ \phi_m(\overline y) - \phi(y) \right] + \xi_r {\mathcal J}(y) \sum_{m=1}^M \pi_m \left( 1 - g_m + g_m \log g_m \right) \end{align*}\]
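
The minimization over \(g_m\) also has a closed form: differentiating the jump terms above with respect to \(g_m\) gives \(\log g_m = -\left[ \phi_m(\overline y) - \phi(y) \right]/\xi_r\). A hedged sketch of the resulting distortions (the actual values come out of solve_hjb_y_jump):

import numpy as np

def jump_distortions(ϕ_m_at_ybar, ϕ_y, ξ_r, prior):
    # First-order condition: g_m = exp(-(ϕ_m(ȳ) - ϕ(y)) / ξ_r). Worse continuation
    # values (lower ϕ_m) receive g_m > 1, inflating both the jump intensity and
    # the conditional probability of bad damage states.
    g = np.exp(-(ϕ_m_at_ybar - ϕ_y) / ξ_r)
    distorted = g * prior / np.sum(g * prior)   # distorted probabilities given a jump
    return g, distorted

# toy example: three damage specifications with equal baseline probabilities
g, π_distorted = jump_distortions(np.array([-1.0, -1.5, -2.5]), -1.2, 1.0, np.ones(3) / 3)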

4.2 Uncertain damages

Here, we focus on different configurations of damage uncertainty. In the computations that follow, we keep \(\xi_a = 0.01\) as our default value and explore different values of \(\xi_r\). The values we use here are \(\{+\infty, 5, 1, 0.3\}\).

Note: we use a baseline case in which there is no distortion to the weights on climate models, the Brownian misspecification, or the damage functions. For the baseline configuration, \(\xi_a = +\infty\) and \(\xi_r = +\infty\) (set to \(100{,}000\) in the computations).

# packages
import os
import numpy as np
import pandas as pd
import plotly.graph_objects as go
from src.model import solve_hjb_y, solve_hjb_y_jump
from src.utilities import find_nearest_value, solve_post_jump
from src.simulation import simulate_jump, no_jump_simulation
import pickle

# Preference
η = 0.032
δ = 0.01

# Climate sensitivity
θ_list = pd.read_csv('data/model144.csv', header=None).to_numpy()[:, 0] / 1000.
πc_o = np.ones_like(θ_list) / len(θ_list)

# Damage functions
σ_y = 1.2 * np.mean(θ_list)
y_underline = 1.5
y_bar = 2.
γ_1 = 1.7675 / 10000
γ_2 = 0.0022 * 2
γ_3 = np.linspace(0., 1. / 3, 20)
πd_o = np.ones_like(γ_3) / len(γ_3)

# capital evolution
α = 0.115
i_over_k = 0.09
K0 = 85 / α

# state variable
y_step = .01
y_grid_long = np.arange(0., 5., y_step)
y_grid_short = np.arange(0., 2.1 + y_step, y_step)
n_bar = find_nearest_value(y_grid_long, y_bar)

# Prepare ϕ conditional on 20 different damage specifications
ξ_a = 0.01
ξ_r_list = [100_000, 5., 1., 0.3]
if not os.path.exists(f"data/v_list_{ξ_a}.pickle") and not os.path.exists(f"data/e_tilde_list_{ξ_a}.pickle"):
    v_list = {}
    e_tilde_list = {}
    # (ξ_r, ξ_a) pairs; the first pair is the baseline with both robustness channels off.
    # The list is evaluated once before the loop, so reusing the name ξ_a here is safe.
    for ξ_r, ξ_a in [(100_000, 100_000), (5., ξ_a), (1., ξ_a), (0.3, ξ_a)]:
        model_args_list = []
        for γ_3_m in γ_3:
            model_arg = (η, δ, σ_y, y_bar, γ_1, γ_2, γ_3_m, θ_list, πc_o, ξ_r, ξ_a)
            model_args_list.append((y_grid_long, model_arg, None, 1., 1e-8, 5_000, False))
        v_list[ξ_r], e_tilde_list[ξ_r] = solve_post_jump(y_grid_long, γ_3, solve_hjb_y, model_args_list)
    pickle.dump(v_list, open(f"data/v_list_{ξ_a}.pickle", "wb"))
    pickle.dump(e_tilde_list, open(f"data/e_tilde_list_{ξ_a}.pickle", "wb"))

v_list = pickle.load(open(f"data/v_list_{ξ_a}.pickle", "rb"))
e_tilde_list = pickle.load(open(f"data/e_tilde_list_{ξ_a}.pickle", "rb"))

# solve for pre jump HJB
if not os.path.exists(f"data/pre_jump_res_{ξ_a}.pickle"):
    pre_jump_res = {}
    ξ_r_list = [100_000, 5., 1., 0.3]
    for ξ_r_i in ξ_r_list:
        ϕ_list = v_list[ξ_r_i]
        certainty_equivalent = -ξ_r_i * np.log(
            np.average(
                np.exp(-1. / ξ_r_i * np.array(ϕ_list)), axis=0, weights=πd_o))
        # Post-jump boundary: carry each ϕ_m at ȳ onto the short grid (0 to 2.1)
        ϕ_i = np.array(
            [temp[n_bar] * np.ones_like(y_grid_short) for temp in ϕ_list])

        # Compute ϕ with jump (impose boundary condition)
        if ξ_r_i == 100_000:
            ξ_a = 100_000
        else:
            ξ_a = 0.01
        model_args = (η, δ, σ_y, y_underline, y_bar, γ_1, γ_2, γ_3, θ_list, πc_o, ϕ_i, πd_o,
                      ξ_r_i, ξ_r_i, ξ_a)
        model_res = solve_hjb_y_jump(y_grid_short,
                                     model_args,
                                     v0=None,
                                     ϵ=1.,
                                     tol=1e-8,
                                     max_iter=5_000,
                                     print_iteration=False)
        simulation_res = no_jump_simulation(model_res, dt=1/4)
        pre_jump_res[ξ_r_i] = dict(model_res=model_res,
                               simulation_res=simulation_res,
                               certainty_equivalent=certainty_equivalent)
    pickle.dump(pre_jump_res, open(f"data/pre_jump_res_{ξ_a}.pickle", "wb"))

pre_jump_res = pickle.load(open(f"data/pre_jump_res_{ξ_a}.pickle", "rb"))

Robust adjustment to climate model uncertainty

Figure 5

For the 144 carbon-climate dynamic models, we take as our baseline probabilities an equal weighting of all of the models. To determine whether or not this is a reasonable choice of \(\xi_a\) to use in our analysis, we examine the implied distortion to the probability distribution of \(\theta_\ell\) values resulting from our choice as compared to the baseline prior probability distribution. Both the original prior probability distribution (red histogram) and the distorted probability distribution (blue histogram) of \(\theta_\ell\) values are given in the plot above. The increased concern about uncertainty over the geo-scientific inputs leads to a shift to the right in the \(\theta_\ell\) probability distribution, highlighting increased concerns about worst-case climate dynamics, while still maintaining a spread in the weights on the values of \(\theta_\ell\) and not loading all the weight on the far right tail. We, therefore, view this shift in the distribution as reasonable to entertain. The implied mean distortion is about 0.26 for the unknown parameter \(\theta\). While the concerns about geo-scientific uncertainty are state-dependent, the distortion in the probability distribution for \(\theta\) remains roughly constant over the course of our simulations.
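
The shift described above can be illustrated by reusing tilted_weights from Section 4.1.1 with a hypothetical score vector that is decreasing in \(\theta_\ell\) (the adverse direction is toward higher climate sensitivity). The solver's actual scores are state dependent, so this only sketches how the distorted histogram and mean distortion are computed:

import numpy as np

# Hypothetical tilt toward higher climate sensitivity; a placeholder for the
# state-dependent scores produced by the solver.
scores = -θ_list / θ_list.std()
ω_distorted = tilted_weights(scores, πc_o, ξ_a=1.0)   # placeholder penalty scale
baseline_mean = πc_o @ θ_list * 1000
distorted_mean = ω_distorted @ θ_list * 1000
print(f"implied mean distortion: {distorted_mean - baseline_mean:.3f}")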

Robust adjustments to damage function uncertainty

We next consider the penalty parameter \(\xi_r\) that governs concerns about misspecifying the Poisson jump process, including both the jump intensity and the probability distribution conditioned on a jump. Recall that we use this process to capture uncertainty about the steepness of the damage function and about the timing of when this steepness becomes known to the decision-maker. This uncertainty is only pertinent prior to the realization of the Poisson event. We report results for three different values of this parameter, \(\xi_r = 5\), \(\xi_r = 1\), and \(\xi_r = 0.3\), in Figure 6. The distorted histogram for the lowest value, \(\xi_r = 0.3\), is arguably extreme, although the other two choices seem considerably harder to dismiss.

Finally, in Figure 7, we display the probabilities that a jump will occur prior to the specified dates along the socially efficient trajectory for emissions. For the middle value in our discussion, \(\xi_r = 1\), the jump is pretty much assured to happen by about one hundred years out, at which point the temperature anomaly is 2 degrees Celsius. On so-called “business as usual” trajectories, the jump probabilities converge to one much more quickly than under \(\xi_r = 1\). The no-jump trajectories for different \(\xi_r\) are displayed below.
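
The probabilities in Figure 7 follow from the intensity along the simulated path: with intensity \(\mathcal{J}(y_t)\), the probability that the jump has occurred by date \(T\) is \(1 - \exp\left(-\int_0^T \mathcal{J}(y_t)\,dt\right)\). A minimal sketch, assuming the intensity is sampled at a fixed interval dt:

import numpy as np

def jump_prob_by_date(intensity, dt):
    # cumulative probability that the Poisson event has occurred by each date
    return 1.0 - np.exp(-np.cumsum(intensity) * dt)

# toy path: intensity rising as the anomaly approaches the jump threshold
prob_by_date = jump_prob_by_date(np.linspace(0.0, 0.05, 400), dt=0.25)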

Emission and anomaly trajectories

The figure shows emissions as a function of the temperature anomaly.

We compute results for two threshold configurations: \(\underline y = 1.5\) with \(\overline y = 2\), and \(\underline y = 1.75\) with \(\overline y = 2.25\).

# Repeat for 1.75 - 2.25
y_underline_higher = 1.75
y_bar_higher = 2.25
# state variable
y_step = .01
y_grid_short_2 = np.arange(0., 2.3 + y_step, y_step)
n_bar_higher = find_nearest_value(y_grid_long, y_bar_higher)  # keep n_bar (ȳ = 2) intact for later sections

# post jump value functions
if not os.path.exists(f"data/v175_list_{ξ_a}.pickle") and not os.path.exists(f"data/e175_tilde_list_{ξ_a}.pickle"):
    v175_list = {}
    e175_tilde_list = {}
    for ξ_r, ξ_a in [(100_000, 100_000), (5., 0.01), (1., 0.01), (0.3, 0.01)]:
        model_args_list = []
        for γ_3_m in γ_3:
            model_arg = (η, δ, σ_y, y_bar_higher, γ_1, γ_2, γ_3_m, θ_list, πc_o,
                         ξ_r, ξ_a)
            model_args_list.append(
                (y_grid_long, model_arg, None, 1., 1e-8, 5_000, False))
        v175_list[ξ_r], e175_tilde_list[ξ_r] = solve_post_jump(
            y_grid_long, γ_3, solve_hjb_y, model_args_list)
    pickle.dump(v175_list, open(f"data/v175_list_{ξ_a}.pickle", "wb"))
    pickle.dump(e175_tilde_list, open(f"data/e175_tilde_list_{ξ_a}.pickle", "wb"))
v175_list = pickle.load(open(f"data/v175_list_{ξ_a}.pickle", "rb"))
e175_tilde_list = pickle.load(open(f"data/e175_tilde_list_{ξ_a}.pickle", "rb"))


# pre jump value function
if not os.path.exists(f"data/pre_jump175_res_{ξ_a}.pickle"):
    pre_jump175_res = {}
    ξ_r_list = [100_000, 5., 1., 0.3]
    for ξ_r_i in ξ_r_list:
        ϕ_list = v175_list[ξ_r_i]
        certainty_equivalent = -ξ_r_i * np.log(
            np.average(
                np.exp(-1. / ξ_r_i * np.array(ϕ_list)), axis=0, weights=πd_o))
        # Post-jump boundary: carry each ϕ_m at ȳ = 2.25 onto the short grid (0 to 2.3)
        ϕ_i = np.array(
            [temp[n_bar_higher] * np.ones_like(y_grid_short_2) for temp in ϕ_list])

        # Compute ϕ with jump (impose boundary condition)
        if ξ_r_i == 100_000:
            ξ_a = 100_000
        else:
            ξ_a = 0.01
        model_args = (η, δ, σ_y, y_underline_higher, y_bar_higher, γ_1, γ_2, γ_3,
                      θ_list, πc_o, ϕ_i, πd_o, ξ_r_i, ξ_r_i, ξ_a)
        model_res = solve_hjb_y_jump(y_grid_short_2,
                                     model_args,
                                     v0=None,
                                     ϵ=1.,
                                     tol=1e-8,
                                     max_iter=5_000,
                                     print_iteration=False)
        simulation_res = no_jump_simulation(model_res, dt=1 / 4)
        pre_jump175_res[ξ_r_i] = dict(model_res=model_res,
                                      simulation_res=simulation_res,
                                      certainty_equivalent=certainty_equivalent)
    pickle.dump(pre_jump175_res, open(f"data/pre_jump175_res_{ξ_a}.pickle", "wb"))
pre_jump175_res = pickle.load(open(f"data/pre_jump175_res_{ξ_a}.pickle", "rb"))

# code for plot of emissions
from src.plots import plot_e_tilde
plot_e = go.Figure()
plot_e = plot_e_tilde(plot_e, pre_jump_res, y_grid_short_2, y_underline)
plot_e = plot_e_tilde(plot_e, pre_jump175_res, y_grid_short_2, y_underline_higher)
for i in range(len(ξ_r_list)):
    plot_e.data[i]["visible"] = True
    plot_e.data[i]["showlegend"] =True
buttons = []
plot_e.update_layout(
    title=r"Figure 8: Emissions as a function of the temperature anomaly."
)
for i, (y_u, y_o) in enumerate([(y_underline, y_bar), (y_underline_higher, y_bar_higher)]):
    # Hide all traces
    button = dict(method='update',
                args=[
                    {
                        'visible': [False] * (len(ξ_r_list)*2),
                        'showlegend': [False] * (len(ξ_r_list)*2),
                    },
                ],
                label=r'y̲ = {} and ȳ = {}'.format(y_u, y_o))
    # Enable the two traces we want to see
    for j in range(len(ξ_r_list)):
        button['args'][0]["visible"][i*len(ξ_r_list) + j] = True
        button['args'][0]["showlegend"][i*len(ξ_r_list) + j] = True
    # Add step to step list
    buttons.append(button)

plot_e.update_layout(
    updatemenus=[
        dict(
            type="buttons",
            direction="right",
            active=0,
            x=0.80,
            y=1.15,
            buttons=buttons,
            pad={"r": 10, "t": 10, "b":10},
            showactive=True
        )
    ])

Figure 8

As we see, even in advance of gaining more information about damage function curvature, the fictitious planner embraces a substantial level of precaution due to concerns about the unknown future damage state. In light of uncertainty concerns, the control law for emissions is about twenty percent lower when \(\xi_r = 1\) than the control law based solely on the baseline probabilities. We also see that, as the value of \(\xi_r\) is decreased, the caution is amplified and the choice of emissions is lowered even further. It follows that the emission trajectories for the lower control laws necessarily reach the \(\underline y = 1.5\) threshold later starting from a common initial condition (a plot of the control over time is provided below).

While the 1.5 and 2 degree thresholds have dominated much of the policy discussion, there is debate as to the extent to which these are firmly backed up by evidence. For this reason, we also report the consequences of shifting the thresholds we use in our computations to \(\underline y = 1.75\) and \(\overline y = 2.25\) in the plot above. The results for emissions are very similar, except that in comparison to \(\underline y = 1.5\) and \(\overline y = 2\) , the control laws are shifted to the right as should be expected because of delay in when the more extreme damage function curvature is manifested.

The next figure shows \(\log SCC\) as a function of the temperature anomaly:

\[\log SCC = \log C_0 - \log N - \log E + \log \eta - \log (1 - \eta)\]

where \(C_0\) is the current level of output. In this way, we can focus on the impact of emissions and damages.
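
As a hedged sketch of this formula: assuming \(C_0 = (\alpha - i/k) K_0\) for current consumption and, pre-jump, \(\log N = \gamma_1 y + \frac{\gamma_2}{2} y^2\) (consistent with the derivative terms in the HJB above), the computation takes a few lines. Both are assumptions here; plot_logscc in src.plots encapsulates the exact version used for the figure.

import numpy as np

def log_scc_sketch(e, y, α=0.115, i_over_k=0.09, K0=85 / 0.115, η=0.032,
                   γ_1=1.7675e-4, γ_2=0.0044):
    # log SCC = log C_0 - log N - log E + log η - log(1 - η)
    log_C0 = np.log((α - i_over_k) * K0)    # assumes C_0 = (α - i/k) K_0
    log_N = γ_1 * y + 0.5 * γ_2 * y**2      # assumed pre-jump damage function
    return log_C0 - log_N - np.log(e) + np.log(η) - np.log(1 - η)

# example at the current anomaly y = 1.1; units and scale factors are omitted
# here, so levels differ from Figure 9 — the shape in e and y is what matters
print(log_scc_sketch(e=8.0, y=1.1))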

from src.plots import plot_logscc
args_scc = (α, η, i_over_k, K0, γ_1, γ_2)
fig_logscc = go.Figure()
fig_logscc = plot_logscc(fig_logscc, pre_jump_res, y_grid_short, 1.5, ξ_r_list, args_scc)
fig_logscc = plot_logscc(fig_logscc, pre_jump175_res, y_grid_short_2, 1.75, ξ_r_list, args_scc)
for i in range(len(ξ_r_list)):
    fig_logscc.data[i + len(ξ_r_list)]["visible"] = False
    fig_logscc.data[i + len(ξ_r_list)]["showlegend"] = False
buttons = []
fig_logscc.update_layout(title=r"$\text{Figure 9: }\log SCC \text{ as a function of the temperature anomaly.}$")
for i, (y_u, y_o) in enumerate([(y_underline, y_bar), (y_underline_higher, y_bar_higher)]):
    # Hide all traces
    button = dict(method='update',
                args=[
                    {
                        'visible': [False] * (len(ξ_r_list)*2),
                        'showlegend': [False] * (len(ξ_r_list)*2),
                    },
                ],
                label=r'y̲ = {} and ȳ = {}'.format(y_u, y_o))
    # Enable the two traces we want to see
    for j in range(len(ξ_r_list)):
        button['args'][0]["visible"][i*len(ξ_r_list) + j] = True
        button['args'][0]["showlegend"][i*len(ξ_r_list) + j] = True
    # Add step to step list
    buttons.append(button)
fig_logscc.update_layout(
    updatemenus=[
        dict(
            type="buttons",
            direction="right",
            active=0,
            x=0.80,
            y=1.15,
            buttons=buttons,
            pad={"r": 10, "t": 20, "b":20},
            showactive=True
        )
    ])
fig_logscc.update_yaxes(range=[4.4, 6])

Figure 9

In the plot above, we see substantial values of the \(\log SCC\) in each case. The magnitudes are amplified as we increase concerns about damage function misspecification (by decreasing the value of \(\xi_r\)). In particular, as a function of the temperature anomaly, the SCC for \(\xi_r = 1\) is between twenty and thirty percent higher than when we abstract from robustness concerns. Changing the thresholds to \(\underline y = 1.75\) and \(\overline y = 2.25\) effectively shifts the curves to the right, with a correspondingly smaller SCC at the initial temperature anomaly of \(y = 1.1\).

Temperature anomalies

from src.simulation import EvolutionState
from scipy import interpolate

e_grid_1 = pre_jump_res[1]["model_res"]["e_tilde"]
e_func_pre_damage = interpolate.interp1d(y_grid_short, e_grid_1)
e_grid_long_1 = e_tilde_list[1]
e_func_post_damage = [interpolate.interp1d(y_grid_long, e_grid_long_1[i]) for i in range(len(γ_3))]

# start simulation
e0 = 0
y0 = 1.1
temp_anol0 = 1.1
y_underline = 1.5
y_overline = 2.
initial_state = EvolutionState(t=0,
                               prob=1,
                               damage_jump_state='pre',
                               damage_jump_loc=None,
                               variables=[e0, y0, temp_anol0],
                               y_underline=y_underline,
                               y_overline=y_overline)

fun_args = (e_func_pre_damage, e_func_post_damage)

T = 400
sim_res = []
temp_anols = []
probs = []
damage_locs = []
sim_res.append([initial_state])
temp_anols.append([1.1])
probs.append([1])
damage_locs.append([None])
for i in range(T):
    if i == 0:
        states = initial_state.evolve(np.mean(θ_list), fun_args)
    else:
        temp = []
        for state in states:
            temp += state.evolve(np.mean(θ_list), fun_args)
        states = temp
    tempanol_t = []
    probs_t = []
    damage_loc_t = []
    for state in states:
        tempanol_t.append( state.variables[2] )
        probs_t.append( state.prob )
        damage_loc_t.append( state.damage_jump_loc )

    temp_anols.append(tempanol_t)
    probs.append(probs_t)
    damage_locs.append(damage_loc_t)
    sim_res.append(states)

# plot for emission
import plotly.graph_objects as go
fig = go.Figure()
# emissions across the damage states at simulation step 233 (around the jump date in this illustration)
fig.add_trace(go.Scatter(x=γ_3, y=[state.variables[0] for state in sim_res[233][:21]]))
fig.update_xaxes(range=[- 0.01, 1./3], showline=True, title=r"$\gamma_3$")
fig.update_yaxes(title="Emissions", range=[0, 10])

A smoother version of the relation between \(\gamma_3\) and corresponding emissions at the jump date is also presented below as the red line.

# Prepare ϕ on a dense grid of 100 γ_3 damage specifications
ξ_r_dense = 1
ξ_a_dense = 0.01
γ_3_dense = np.linspace(0, 1/3, 100)
if not os.path.exists("data/ems_jump_date_dense.npy"):
    model_args_list = []
    for γ_3_m in γ_3_dense:
        model_arg = (η, δ, σ_y, y_bar, γ_1, γ_2, γ_3_m, θ_list, πc_o, ξ_r_dense, ξ_a_dense)
        model_args_list.append((y_grid_long, model_arg, None, 1., 1e-8, 2_000, False))
    v_list_dense, e_tilde_dense = solve_post_jump(y_grid_long, γ_3_dense, solve_hjb_y, model_args_list)

    loc_2 = np.abs(y_grid_short - 2.).argmin()
    ems_jump_date_dense = np.zeros(len(γ_3_dense))
    for i in range(len(γ_3_dense)):
        ems_jump_date_dense[i] = e_tilde_dense[i][loc_2]
    np.save("data/ems_jump_date_dense", ems_jump_date_dense)
ems_jump_date_dense = np.load("data/ems_jump_date_dense.npy")

# code for smoother emission plot
fig = go.Figure()
fig.add_trace(go.Scatter(x=γ_3_dense, y=ems_jump_date_dense))
fig.update_xaxes(range=[-0.01, 1./3], showline=True, title=r"$\gamma_3$")
fig.update_yaxes(title="Emissions", range=[0, 10])
fig.update_layout(title="Figure 10: Emissions choices, conditioned on a jump having occurred, for different γ3 upon realization of the jump.")

Figure 10

We report the initial emissions, post-jump, in the plot above as a function of \(\gamma_3\) governing the curvature of the damage function for large temperature anomalies. Importantly, this function is highly convex. The realization of a very low damage function curvature is good news for the planner, resulting in an increase in emissions in contrast to many of the other damage function specifications that could be realized. For the damage functions with even a little more curvature, there is a large reduction in emissions as reflected in the steep slope of the function of optimal emissions choices for small values of \(\gamma_3\). The emissions choices are increasingly more concentrated at similar values for higher curvature, as seen in the much flatter slope for the larger values of \(\gamma_3\).

temperature_anomaly = [state.variables[2] for state in sim_res[-1]]
probs = np.array([state.prob for state in sim_res[-1]])  # array, so the normalization below works
fig = go.Figure()
fig.add_trace(go.Histogram(x=temperature_anomaly[:-1], y=probs[:-1]/np.sum(probs[:-1]),
                           histnorm="probability",
                           nbinsx=40,
                           marker=dict( color="red",line=dict(color='grey', width=1)),
                           opacity=0.5
                          ))
fig.update_xaxes(title="Temperature anomaly")
fig.update_yaxes(title="Probability", showline=True)

Emission trajectories over time

The following figure shows emissions over time before the temperature anomaly reaches the lower jump threshold, 1.5 \(^o C\).

The plot indicates that, for different values of the uncertainty parameter \(\xi_r\), emissions reach the lower jump threshold in different years.

For our choices of the uncertainty parameters, the damage function threshold will not be crossed for fifty to seventy years.

The next plot shows the emission trajectories conditional on no jump occurring. The emission trajectories stop when the temperature anomaly reaches 2 \(^o C\).

from plotly.subplots import make_subplots
fig = make_subplots(cols=2, rows=2, subplot_titles=("No jump probability", "Temperature anomaly",
                                                    "Emissions", "Temperature anomaly"))
dt = 1/4
yt = pre_jump_res[1]["simulation_res"]["yt"]
et = pre_jump_res[1]["simulation_res"]["et"]
T_jump = np.abs(yt - 1.5).argmin()
T_stop = 400
probt = np.insert(pre_jump_res[1]["simulation_res"]["probt"][:-1],
                          0,
                          0,
                          axis=0)
Years = np.arange(0, T_stop * dt + dt, dt)
prob_jump = 1 - np.exp(-np.cumsum(probt))  # cumulative probability that the jump has occurred

fig.add_trace(go.Scatter(x=Years, y=prob_jump, showlegend=False), col=1, row=1)
fig.add_trace(go.Scatter(x=Years, y=yt, showlegend=False), col=2, row=1)
fig.update_yaxes(range=[1., 2.1], col=2, row=1)

# jump
for i in range(T_jump+2, T_stop):
    # add lines for temperature
    temp = yt[i]
    prob = prob_jump[i]
    fig.add_trace(go.Scatter(x=np.arange(0, i * dt, dt),
                                 y=temp * np.ones(i),
                                 showlegend=False,
                                 visible=False,
                                 line=dict(dash="dot", color="black")),
                      col=2,
                      row=1)
    temp_horizontal = np.linspace(0, temp, 50)
    fig.add_trace(go.Scatter(x=i * dt * np.ones(len(temp_horizontal)),
                         y=temp_horizontal,
                         showlegend=False,
                         visible=False,
                         line=dict(dash="dot", color="black")),
              col=2,
              row=1)
    # add lines for probability
    fig.add_trace(go.Scatter(x=np.arange(0, i * dt, dt),
                                 y=prob * np.ones(i),
                                 showlegend=False,
                                 visible=False,
                                 line=dict(dash="dot", color="black")),
                      col=1,
                      row=1)
    prob_horizontal = np.linspace(0, prob, 50)
    fig.add_trace(go.Scatter(x=i * dt * np.ones(len(prob_horizontal)),
                         y=prob_horizontal,
                         showlegend=False,
                         visible=False,
                         line=dict(dash="dot", color="black")),
              col=1,
              row=1)
    # histograms
    ems = [state.variables[0] for state in sim_res[i][:-1]]
    probs = [state.prob for state in sim_res[i][:-1]]
    prob = np.array(probs)
    prob = prob / np.sum(prob)
    temp_anomaly = [state.variables[2] for state in sim_res[i][:-1]]
    fig.add_trace(go.Histogram(x=ems,
                           y=prob,
                           histnorm="probability",
                               nbinsx=40,
                               opacity=0.5,
                               visible=False,
                               showlegend=False
                          ), col=1, row=2)
    fig.add_trace(go.Histogram(x=temp_anomaly,
                           y=prob,
                           histnorm="probability",
                               nbinsx=40,
                               opacity=0.5,
                               visible=False,
                               showlegend=False,
                          ), col=2, row=2)


num_fig = len(fig.data)
for i in range(6):
    fig.data[2 + i]["visible"] = True


steps = []
for i in range(T_stop - T_jump - 2):
    step = dict(method='update',
                    args=[
                        {
                            'visible': [False] * len(fig.data)
                        },
                    ],
                    label="Year: {}".format( round((i + T_jump + 2)*dt) ) )
    # Enable the two traces we want to see
    for j in range(6):
        step["args"][0]["visible"][0] = True
        step["args"][0]["visible"][1] = True
        step['args'][0]["visible"][2 + 6 * i + j] = True
    # Add step to step list
    steps.append(step)

sliders = [
        dict(
            active=0,
            steps=steps,
            pad={"t": 70},
        )
]

fig.update_layout(sliders=sliders, font=dict(size=13), plot_bgcolor="white")
fig.update_layout(barmode="overlay")
fig.update_xaxes(showline=True)
fig.update_yaxes(rangemode="tozero", showline=True)
fig.update_yaxes(range=[0,1], col=1, row=1)
fig.update_yaxes(range=[0, 0.5], col=1, row=2)
fig.update_xaxes(showspikes=True, col=1, row=1)
fig.update_yaxes(showspikes=True, col=1, row=1)
fig

By introducing additional concerns about ambiguity, the initial SCC can be as large as 145 dollars per unit of carbon. For a more detailed discussion of smooth ambiguity, continue to the following section.

4.3 Smooth ambiguity

This section supplements the discussion in the previous sections by focusing on a different aspect: the impact of smooth ambiguity.

As in Section 4.2, the penalty parameters are \(\xi_a\) and \(\xi_r\).

Unless otherwise noted, \(\xi_r = 1\) in this example. The smooth ambiguity parameter values we experiment with are \(\xi_a \in \{0.02, 0.01, 0.005, 0.0025\}\).

# Prepare ϕ conditional on 20 different γ_3
ξ_r = 1.
v_list = {}
e_tilde_list = {}
if not os.path.exists("data/v_list_sa.pickle"):
    for ξ_a_i in [0.02, 0.01, 0.005, 0.0025]:
        model_args_list = []
        for γ_3_m in γ_3:
            model_arg = (η, δ, σ_y, y_bar, γ_1, γ_2, γ_3_m, θ_list, πc_o, ξ_r, ξ_a_i)
            model_args_list.append((y_grid_long, model_arg, None, 1., 1e-8, 5_000, False))
        v_list[ξ_a_i], e_tilde_list[ξ_a_i] = solve_post_jump(y_grid_long, γ_3, solve_hjb_y, model_args_list)

    pickle.dump(v_list, open("data/v_list_sa.pickle", "wb"))
    pickle.dump(e_tilde_list, open("data/e_tilde_list_sa.pickle", "wb"))
v_list = pickle.load(open("data/v_list_sa.pickle", "rb"))
e_tilde_list = pickle.load(open("data/e_tilde_list_sa.pickle", "rb"))

# solve for pre jump HJB
ξ_a_list = [0.02, 0.01, 0.005, 0.0025]
if not os.path.exists("data/pre_jump_res_sa.pickle"):
    pre_jump_res = {}
    for ξ_a_i in ξ_a_list:
        ϕ_list = v_list[ξ_a_i]
        certainty_equivalent = -ξ_r * np.log(
            np.average(
                np.exp(-1. / ξ_r * np.array(ϕ_list)), axis=0, weights=πd_o))
        # Post-jump boundary: carry each ϕ_m at ȳ onto the short grid (0 to 2.1)
        ϕ_i = np.array(
            [temp[n_bar] * np.ones_like(y_grid_short) for temp in ϕ_list])

        # Compute ϕ with jump (impose boundary condition)
        model_args = (η, δ, σ_y, y_underline, y_bar, γ_1, γ_2, γ_3, θ_list, πc_o, ϕ_i, πd_o,
                      ξ_r, ξ_r, ξ_a_i)
        model_res = solve_hjb_y_jump(y_grid_short,
                                     model_args,
                                     v0=None,
                                     ϵ=1.,
                                     tol=1e-8,
                                     max_iter=5_000,
                                     print_iteration=False)
        simulation_res = no_jump_simulation(model_res, dt=1/4)
        pre_jump_res_sa[ξ_a_i] = dict(model_res=model_res,
                                      simulation_res=simulation_res,
                                      certainty_equivalent=certainty_equivalent)
    pickle.dump(pre_jump_res_sa, open("data/pre_jump_res_sa.pickle", "wb"))
pre_jump_res_sa = pickle.load(open("data/pre_jump_res_sa.pickle", "rb"))
pre_jump_res_sa[100_000] = pre_jump_res[100_000]  # baseline (ξ_r = ξ_a = 100,000) case from Section 4.2

Emission and \(\log SCC\) as a function of temperature anomaly

Emission and logarithm of social cost of carbon trajectories

The figure shows emissions and \(\log SCC\) over time before the temperature anomaly reaches the lower jump threshold, 1.5 \(^o C\).
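
A hedged sketch of how the time paths behind this figure can be pulled from pre_jump_res_sa, assuming the same simulation_res keys ("et", "yt") used earlier in this notebook and the quarterly step dt = 1/4:

import numpy as np
import plotly.graph_objects as go

fig_sa = go.Figure()
for ξ_a_i in ξ_a_list:
    sim = pre_jump_res_sa[ξ_a_i]["simulation_res"]
    yt, et = sim["yt"], sim["et"]
    stop = np.abs(yt - 1.5).argmin()      # truncate at the lower jump threshold
    years = np.arange(len(et)) * 0.25     # quarterly simulation step, as above
    fig_sa.add_trace(go.Scatter(x=years[:stop], y=et[:stop], name=f"ξ_a = {ξ_a_i}"))
fig_sa.update_xaxes(title="Years", showline=True)
fig_sa.update_yaxes(title="Emissions", showline=True)
fig_sa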