We present here some standard invariance properties of Brownian motions. The proofs are standard and can be found, for instance, in \cite{durrett2019probability} and \cite{karatzas1991brownian}.
\begin{lemma}[Markov property of Brownian motions]
Let $T \in\lp0,\infty\rp$ and $d \in\N$. Let $\lp\Omega, \mathcal{F}, \mathbb{P}\rp$ be a probability space and let $\mathcal{W}: \lb0, T \rb\times\Omega\rightarrow\R^d$ be a standard Brownian motion. Fix $s\in\lb0,T\rb$ and for $t\in\lb0,T-s\rb$ let $\mathfrak{W}_t =\mathcal{W}_{s+t}-\mathcal{W}_s$. Then $\mathfrak{W}=\left\{\mathfrak{W}_t : t\in\lb0,T-s\rb\right\}$ is also a standard Brownian motion and is independent of $\lp\mathcal{W}_r\rp_{r\in\lb0,s\rb}$.
\end{lemma}
\begin{proof}
We check against the Brownian motion axioms. First note that $\mathfrak{W}_0=\mathcal{W}_{s+0}-\mathcal{W}_s =0$ $\mathbb{P}$-a.s.
Note that $t\mapsto\mathcal{W}_{s+t}-\mathcal{W}_s$ is $\mathbb{P}$-a.s. continuous, as it is the difference of the $\mathbb{P}$-a.s. continuous function $t\mapsto\mathcal{W}_{s+t}$ and the random constant $\mathcal{W}_s$.
Note next that for $t\in\lb0,T-s\rp$ and $h\in\lp0,\infty\rp$ with $t+h\leqslant T-s$ it is the case that:
\begin{align*}
\mathfrak{W}_{t+h}-\mathfrak{W}_t = \mathcal{W}_{s+t+h}-\mathcal{W}_{s+t}\sim\mathcal{N}\lp0,h\,I_d\rp
\end{align*}
and, by the independent increments property of $\mathcal{W}$, increments of $\mathfrak{W}$ over disjoint time intervals are independent.
Finally note that two stochastic processes $\mathcal{W}$, $\mathcal{X}$ are independent if, for every $n\in\N$ and all sample points $t_1,t_2,\hdots, t_n$ and $u_1,u_2,\hdots,u_n$ in their respective time domains, the vectors $\lb\mathcal{W}_{t_1}, \mathcal{W}_{t_2},\hdots, \mathcal{W}_{t_n}\rb^\intercal$ and $\lb\mathcal{X}_{u_1},\mathcal{X}_{u_2},\hdots, \mathcal{X}_{u_n}\rb^\intercal$ are independent vectors.
That being the case, note that the independent increments property of Brownian motions yields that for all $t_1,t_2,\hdots, t_n \in\lb0,s\rb$ and $u_1,u_2,\hdots,u_n\in\lb0,T-s\rb$ the vector $\lb\mathfrak{W}_{u_1}, \mathfrak{W}_{u_2},\hdots, \mathfrak{W}_{u_n}\rb^\intercal=\lb\mathcal{W}_{s+u_1}-\mathcal{W}_s, \mathcal{W}_{s+u_2}-\mathcal{W}_s,\hdots, \mathcal{W}_{s+u_n}-\mathcal{W}_s\rb^\intercal$ is independent of $\lb\mathcal{W}_{t_1},\mathcal{W}_{t_2},\hdots, \mathcal{W}_{t_n}\rb^\intercal$, i.e. $\mathfrak{W}$ is independent of $\lp\mathcal{W}_r\rp_{r\in\lb0,s\rb}$.
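As a supplementary check (not needed for the argument above), the independence can also be verified at the level of covariances: for $t_i\in\lb0,s\rb$ and $u_j\in\lb0,T-s\rb$ the random vectors involved are jointly Gaussian and satisfy:
\begin{align*}
\mathrm{Cov}\lp\mathcal{W}_{t_i},\mathfrak{W}_{u_j}\rp=\mathrm{Cov}\lp\mathcal{W}_{t_i},\mathcal{W}_{s+u_j}\rp-\mathrm{Cov}\lp\mathcal{W}_{t_i},\mathcal{W}_{s}\rp=\min\{t_i,s+u_j\}\,I_d-\min\{t_i,s\}\,I_d=0
\end{align*}
and uncorrelated jointly Gaussian random vectors are independent.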
\end{proof}
\begin{lemma}[Independence of Brownian Motion]\label{iobm}
Let $T \in\lp0,\infty\rp$ and $d\in\N$. Let $\lp\Omega, \mathcal{F}, \mathbb{P}\rp$ be a probability space. Let $\mathcal{X}, \mathcal{Y}: \lb0,T\rb\times\Omega\rightarrow\R^d$ be standard Brownian motions. It is then the case that they are independent of each other.
\end{lemma}
\begin{proof}
We say that two Brownian motions are independent of each other if, given any sampling vector of times $\lp t_1,t_2,\hdots,t_n\rp$, the vectors $\lp\mathcal{X}_{t_1}, \mathcal{X}_{t_2},\hdots,\mathcal{X}_{t_n}\rp$ and $\lp\mathcal{Y}_{t_1}, \mathcal{Y}_{t_2},\hdots, \mathcal{Y}_{t_n}\rp$ are independent. As such, let $n\in\N$ and let $\lp t_1,t_2,\hdots, t_n \rp$ be a vector of times with samples as given above. Consider now the process $\mathcal{X}-\mathcal{Y}$, whose samples are $\lp\mathcal{X}_{t_1}-\mathcal{Y}_{t_1}, \mathcal{X}_{t_2}-\mathcal{Y}_{t_2}, \hdots, \mathcal{X}_{t_n}-\mathcal{Y}_{t_n}\rp$. By the independence property of Brownian motions, these differences must be independent of each other. Whence it is the case that the vectors $\lp\mathcal{X}_{t_1}, \mathcal{X}_{t_2},\hdots, \mathcal{X}_{t_n}\rp$ and $\lp\mathcal{Y}_{t_1}, \mathcal{Y}_{t_2},\hdots, \mathcal{Y}_{t_n}\rp$ are independent.
\end{proof}
\begin{lemma}[Scaling Invariance]
Let $T \in\lp0,\infty\rp$ and $d \in\N$. Let $\lp\Omega, \mathcal{F}, \mathbb{P}\rp$ be a probability space. Let $\mathcal{W}: \lb0, T \rb\times\Omega\rightarrow\R^d$ be a standard Brownian motion. Let $a \in\R\setminus\{0\}$. It is then the case that $\mathcal{X}_t \coloneqq\frac{1}{a}\mathcal{W}_{a^2\cdot t}$, $t\in\lb0,\frac{T}{a^2}\rb$, is also a standard Brownian motion.
\end{lemma}
\begin{proof}
We check against the Brownian motion axioms. Note that the function $t \mapsto\mathcal{X}_t$ is the product of a constant with a function that is $\mathbb{P}$-a.s. continuous, and is therefore itself $\mathbb{P}$-a.s. continuous.
Note also that $\mathcal{X}_0=\frac{1}{a}\cdot\mathcal{W}_{a^2\cdot0}=0$ $\mathbb{P}$-a.s.
Note that for all $h \in\lp0,\infty\rp$ and $t\in\lb0,\frac{T}{a^2}\rb$ with $t+h\leqslant\frac{T}{a^2}$ it is the case that:
\begin{align*}
\mathcal{X}_{t+h}-\mathcal{X}_t = \frac{1}{a}\lp\mathcal{W}_{a^2(t+h)}-\mathcal{W}_{a^2 t}\rp\sim\mathcal{N}\lp0,\tfrac{1}{a^2}\cdot a^2 h\, I_d\rp=\mathcal{N}\lp0,h\, I_d\rp
\end{align*}
Finally note that for $t \in\lb0,\frac{T}{a^2}\rb$ and $s \in\lb0,t\rp$ it is the case that $\mathcal{W}_{a^2\cdot t}-\mathcal{W}_{a^2\cdot s}$ is independent of $\lp\mathcal{W}_{a^2\cdot r}\rp_{r\in\lb0,s\rb}$. Whence it is also the case that $\mathcal{X}_t-\mathcal{X}_s$ is independent of $\lp\mathcal{X}_r\rp_{r\in\lb0,s\rb}$.
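For completeness, the covariance structure may also be checked directly (a supplementary computation): for $s,t\in\lb0,\frac{T}{a^2}\rb$ it holds that:
\begin{align*}
\mathrm{Cov}\lp\mathcal{X}_t,\mathcal{X}_s\rp=\frac{1}{a^2}\,\mathrm{Cov}\lp\mathcal{W}_{a^2t},\mathcal{W}_{a^2s}\rp=\frac{1}{a^2}\min\{a^2t,a^2s\}\,I_d=\min\{t,s\}\,I_d
\end{align*}
which is the covariance of a standard Brownian motion.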
\end{proof}
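A standard consequence of the scaling invariance, recorded here only as an illustration, is reflection invariance: taking $a=-1$ yields that
\begin{align*}
\mathcal{X}_t=\frac{1}{-1}\,\mathcal{W}_{(-1)^2 t}=-\mathcal{W}_t
\end{align*}
is again a standard Brownian motion on $\lb0,T\rb$.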
\begin{lemma}[Summation of Brownian Motions]
Let $T \in\lp0,\infty\rp$ and $d \in\N$. Let $\lp\Omega, \mathcal{F}, \mathbb{P}\rp$ be a probability space. Let $\mathcal{W}, \mathcal{X}: \lb0,T \rb\times\Omega\rightarrow\R^d$ be independent standard Brownian motions. It is then the case that the process $\mathcal{Y}$ defined by $\mathcal{Y}_t =\frac{1}{\sqrt{2}}\lp\mathcal{W}_t +\mathcal{X}_t \rp$, $t\in\lb0,T\rb$, is also a standard Brownian motion.
\end{lemma}
\begin{proof}
Note that $t \mapsto\frac{1}{\sqrt{2}}\lp\mathcal{W}_t+\mathcal{X}_t\rp$ is $\mathbb{P}$-a.s. continuous as it is a linear combination of two functions that are $\mathbb{P}$-a.s. continuous.
Note also that $\mathcal{Y}_0=\frac{1}{\sqrt{2}}\lp\mathcal{W}_0+\mathcal{X}_0\rp=0+0=0$ $\mathbb{P}$-a.s.
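Before treating the increments, it is worth recording (as a supplementary computation) that the independence of $\mathcal{W}$ and $\mathcal{X}$ makes the cross-covariances vanish, so that for all $s,t\in\lb0,T\rb$:
\begin{align*}
\mathrm{Cov}\lp\mathcal{Y}_t,\mathcal{Y}_s\rp=\frac{1}{2}\lp\mathrm{Cov}\lp\mathcal{W}_t,\mathcal{W}_s\rp+\mathrm{Cov}\lp\mathcal{X}_t,\mathcal{X}_s\rp\rp=\frac{1}{2}\lp\min\{t,s\}\,I_d+\min\{t,s\}\,I_d\rp=\min\{t,s\}\,I_d
\end{align*}
which is again the Brownian covariance; this is precisely the role of the normalization $\frac{1}{\sqrt{2}}$.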
Note that for all $h \in\lp0,\infty\rp$ and $t \in\lb0,T\rb$ with $t+h\leqslant T$ it is the case that:
\begin{align*}
\mathcal{Y}_{t+h}-\mathcal{Y}_t = \frac{1}{\sqrt{2}}\lp\lp\mathcal{W}_{t+h}-\mathcal{W}_t\rp+\lp\mathcal{X}_{t+h}-\mathcal{X}_t\rp\rp\sim\mathcal{N}\lp0,\tfrac{1}{2}\lp h+h\rp I_d\rp=\mathcal{N}\lp0,h\, I_d\rp
\end{align*}
since the two increments on the right-hand side are independent centered Gaussian random variables, each with covariance $h\,I_d$. Finally, increments of $\mathcal{Y}$ over disjoint time intervals are independent, since they are built from increments of $\mathcal{W}$ and of $\mathcal{X}$ over disjoint intervals and since $\mathcal{W}$ and $\mathcal{X}$ are independent. This establishes the claim.
\end{proof}
\begin{definition}
Let $p \in[2,\infty)$. We denote by $\mathfrak{k}_p \in\R$ the infimum of all $c\in\lb0,\infty\rp$ with the property that for every probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and every random variable $\mathcal{X}: \Omega\rightarrow\R$ with $\E[|\mathcal{X}|] < \infty$ it holds that $\lp\E\lb\lv\mathcal{X}-\E\lb\mathcal{X}\rb\rv^p \rb\rp^{\frac{1}{p}}\leqslant c \lp\E\lb\lv\mathcal{X}\rv^p \rb\rp^{\frac{1}{p}}.$
\end{definition}
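For orientation, in the case $p=2$ the constant just defined satisfies $\mathfrak{k}_2\leqslant1$: for every random variable $\mathcal{X}:\Omega\rightarrow\R$ with $\E\lb\mathcal{X}^2\rb<\infty$ it holds that
\begin{align*}
\E\lb\lv\mathcal{X}-\E\lb\mathcal{X}\rb\rv^2\rb=\E\lb\mathcal{X}^2\rb-\lp\E\lb\mathcal{X}\rb\rp^2\leqslant\E\lb\mathcal{X}^2\rb
\end{align*}
i.e. centering never increases the second moment, while both sides of the defining inequality are infinite whenever $\E\lb\mathcal{X}^2\rb=\infty$.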
\begin{definition}[Primary Setting]\label{primarysetting} Let $d,m \in\mathbb{N}$, $T, \mathfrak{L},p \in[0,\infty)$, $\mathfrak{p}\in[2,\infty)$, $\mathfrak{m}=\mathfrak{k}_{\mathfrak{p}}\sqrt{\mathfrak{p}-1}$, $\Theta=\bigcup_{n\in\mathbb{N}}\Z^n$, $g \in C(\mathbb{R}^d,\mathbb{R})$, assume for all $t \in[0,T]$, $x\in\mathbb{R}^d$ that:
and let $(\Omega, \mathcal{F},\mathbb{P})$ be a probability space. Let $\mathcal{W}^{\theta}: [0,T]\times\Omega\rightarrow\mathbb{R}^d$, $\theta\in\Theta$ be independent standard Brownian motions, let $u \in C([0,T]\times\mathbb{R}^d,\mathbb{R})$ satisfy for all $t \in[0,T]$, $x\in\mathbb{R}^d$, that $\mathbb{E}[|g(x+\mathcal{W}^0_{T-t})|] < \infty$ and:
\begin{align}\label{(1.12)}
u(t,x) &= \mathbb{E}\lb g \lp x+\mathcal{W}^0_{T-t}\rp\rb
\end{align}
and let $U^\theta:[0,T]\times\mathbb{R}^d \times\Omega\rightarrow\mathbb{R}$, $\theta\in\Theta$, satisfy for all $\theta\in\Theta$, $t \in[0,T]$, $x\in\mathbb{R}^d$, that:
\begin{align}
U^\theta(t,x) = \frac{1}{m}\left[\sum^{m}_{k=1} g \lp x+\mathcal{W}^{(\theta,0,-k)}_{T-t}\rp\right]
\end{align}
\end{definition}
\begin{lemma}
Assume Setting \ref{primarysetting}. It is then the case that:
\begin{enumerate}[label = (\roman*)]
\item it holds for all $\theta\in\Theta$ that $U^\theta:[0,T]\times\mathbb{R}^d\times\Omega\rightarrow\mathbb{R}$ is a continuous random field.
\item it holds for all $\theta\in\Theta$ that $\sigma\lp U^\theta\rp\subseteq\sigma\lp\lp\mathcal{W}^{(\theta, \vartheta)}\rp_{\vartheta\in\Theta}\rp$.
\item it holds that $\lp U^\theta\rp_{\theta\in\Theta}$ and $\lp\mathcal{W}^\theta\rp_{\theta\in\Theta}$ are independent.
\item it holds for all $i,k,\mathfrak{i},\mathfrak{k}\in\mathbb{Z}$ with $(i,k)\neq(\mathfrak{i},\mathfrak{k})$ that $\lp U^{(\theta,i,k)}\rp_{\theta\in\Theta}$ and $\lp U^{(\theta,\mathfrak{i},\mathfrak{k})}\rp_{\theta\in\Theta}$ are independent, and
\item it holds that $\lp U^\theta\rp_{\theta\in\Theta}$ are identically distributed random fields.
\end{enumerate}
\end{lemma}
\begin{proof} For item (i), note that for every $k\in\mathbb{Z}$ the map $(t,x,\omega)\mapsto x+\mathcal{W}^{(\theta,0,-k)}_{T-t}(\omega)$ is a continuous random field and that $g\in C(\mathbb{R}^d,\mathbb{R})$. Hence $U^\theta(t,x)$, being a finite sum of compositions of continuous functions scaled by the factor $\frac{1}{m}$ with $m > 0$ by hypothesis, is a continuous random field $U^\theta: [0,T]\times\mathbb{R}^d\times\Omega\rightarrow\mathbb{R}$.
\medskip
For item (ii) observe that for all $\theta\in\Theta$ it holds that $\mathcal{W}^\theta$ is $\lp\mathcal{B}\lp\lb0, T \rb\rp\otimes\sigma\lp \mathcal{W}^\theta\rp\rp/\mathcal{B}\lp\mathbb{R}^d \rp$-measurable; this and the definition of $U^\theta$ prove item (ii).
\medskip
Moreover, observe that item (ii) and the fact that for all $\theta\in\Theta$ it holds that $\lp\mathcal{W}^{\lp\theta, \vartheta\rp}\rp_{\vartheta\in\Theta}$ and $\mathcal{W}^\theta$ are independent establish item (iii).
\medskip
Furthermore, note that item (ii) and the fact that for all $i,k,\mathfrak{i},\mathfrak{k}\in\mathbb{Z}$, $\theta\in\Theta$, with $(i,k)\neq(\mathfrak{i},\mathfrak{k})$ it holds that $\lp\mathcal{W}^{\lp\theta, i,k,\vartheta\rp}\rp_{\vartheta\in\Theta}$ and $\lp\mathcal{W}^{\lp\theta,\mathfrak{i},\mathfrak{k},\vartheta\rp}\rp_{\vartheta\in\Theta}$ are independent establish item (iv).
\medskip
Finally, Hutzenthaler et al. \cite[Corollary~2.5]{hutzenthaler_overcoming_2020} establishes item (v). This completes the proof of the lemma.
\end{proof}
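As a supplementary observation (not used in the arguments below), the definition of $U^\theta$ in Setting \ref{primarysetting}, the fact that the $\lp\mathcal{W}^\theta\rp_{\theta\in\Theta}$ are identically distributed, and the integrability assumption on $g$ show that each $U^\theta$ is an unbiased Monte Carlo estimator of $u$: for all $\theta\in\Theta$, $t\in[0,T]$, $x\in\R^d$ it holds that:
\begin{align*}
\E\lb U^\theta(t,x)\rb=\frac{1}{m}\left[\sum^{m}_{k=1}\E\lb g\lp x+\mathcal{W}^{(\theta,0,-k)}_{T-t}\rp\rb\right]=\E\lb g\lp x+\mathcal{W}^{0}_{T-t}\rp\rb=u(t,x)
\end{align*}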
\begin{lemma}\label{lem:1.20} Assume Setting \ref{primarysetting}. Then it holds for all $\theta\in\Theta$, $s \in[0,T]$, $t\in[s,T]$, $x\in\mathbb{R}^d$ that:
\begin{align}
\mathbb{E}\lb\lv U^\theta\lp t,x+\mathcal{W}^\theta_{t-s}\rp\rv\rb +\mathbb{E}\lb\lv g \lp x+\mathcal{W}^\theta_{t-s}\rp\rv\rb + \int^T_s \E\lb\lv U^\theta\lp r,x+\mathcal{W}^\theta_{r-s}\rp\rv\rb dr < \infty
\end{align}
\end{lemma}
\begin{proof}
Note that (\ref{(2.1.2)}), the fact that for all $r,a,b \in[0,\infty)$ it holds that $(a+b)^r \leqslant2^{\max\{r-1,0\}}(a^r+b^r)$, and the fact that for all $\theta\in\Theta$ it holds that $\mathbb{E}\lb\|\mathcal{W}^\theta_T\|\rb < \infty$, assure that for all $s \in[0,T]$, $t\in[s,T]$, $\theta\in\Theta$ it holds that:
We next claim that for all $s\in[0,T]$, $t\in[s,T]$, $\theta\in\Theta$ it holds that:
\begin{align}\label{(1.17)}
\mathbb{E}\lb\lv U^\theta\lp t,x+\mathcal{W}^\theta_{t-s}\rp\rv\rb+ \int^T_s \mathbb{E}\lb\lv U^\theta\lp r,x+\mathcal{W}^\theta_{r-s}\rp\rv\rb dr < \infty
\end{align}
To prove this claim, observe that the triangle inequality and (\ref{(2.1.4)}) demonstrate that for all $s\in[0,T]$, $t\in[s,T]$, $\theta\in\Theta$ it holds that:
\begin{align}\label{(1.18)}
\mathbb{E}\lb\lv U^\theta\lp t,x+\mathcal{W}^\theta_{t-s}\rp\rv\rb\leqslant\frac{1}{m}\left[ \sum^{m}_{i=1}\mathbb{E}\lb\lv g \lp x+\mathcal{W}^\theta_{t-s}+\mathcal{W}^{(\theta,0,-i)}_{T-t}\rp\rv\rb\right]
\end{align}
Now observe that (\ref{(2.1.6)}) and the fact that $(W^\theta)_{\theta\in\Theta}$ are independent imply that for all $s \in[0,T]$, $t\in[s,T]$, $\theta\in\Theta$, $i\in\mathbb{Z}$ it holds that:
\begin{align}\label{(1.19)}
\mathbb{E}\lb\lv g \lp x+\mathcal{W}^\theta_{t-s}+\mathcal{W}^{(\theta,0,i)}_{T-t}\rp\rv\rb = \mathbb{E}\lb\lv g \lp x+\mathcal{W}^\theta_{(t-s)+(T-t)}\rp\rv\rb = \mathbb{E}\lb\lv g \lp x+\mathcal{W}^\theta_{T-s}\rp\rv\rb <\infty
\end{align}
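For the first equality in (\ref{(1.19)}) note, as a supplementary justification, that the independence of $\mathcal{W}^\theta$ and $\mathcal{W}^{(\theta,0,i)}$ together with the Gaussianity of Brownian increments yields:
\begin{align*}
\mathcal{W}^\theta_{t-s}+\mathcal{W}^{(\theta,0,i)}_{T-t}\sim\mathcal{N}\lp0,\lp(t-s)+(T-t)\rp I_d\rp=\mathcal{N}\lp0,(T-s)\, I_d\rp
\end{align*}
which is precisely the distribution of $\mathcal{W}^\theta_{T-s}$.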
\medskip
Combining (\ref{(1.18)}) and (\ref{(1.19)}) demonstrates that for all $s \in[0,T]$, $t\in[s,T]$, $\theta\in\Theta$ it holds that:
\begin{align*}
\mathbb{E}\lb\lv U^\theta\lp t,x+\mathcal{W}^\theta_{t-s}\rp\rv\rb\leqslant\mathbb{E}\lb\lv g \lp x+\mathcal{W}^\theta_{T-s}\rp\rv\rb < \infty
\end{align*}
and, applying this bound with $r\in\lb s,T\rb$ in place of $t$ and integrating, that $\int^T_s \mathbb{E}\lb\lv U^\theta\lp r,x+\mathcal{W}^\theta_{r-s}\rp\rv\rb dr \leqslant (T-s)\,\mathbb{E}\lb\lv g \lp x+\mathcal{W}^\theta_{T-s}\rp\rv\rb < \infty$, which establishes (\ref{(1.17)}). Together with the bound on $\mathbb{E}\lb\lv g \lp x+\mathcal{W}^\theta_{t-s}\rp\rv\rb$ noted above, this completes the proof.
\end{proof}
\begin{lemma}\label{lem:1.21}Let $p \in(2,\infty)$, $n\in\mathbb{N}$, let $(\Omega, \mathcal{F}, \mathbb{P})$, be a probability space and let $\mathcal{X}_i: \Omega\rightarrow\mathbb{R}$, $i \in\{1,2,...,n\}$ be i.i.d. random variables with $\mathbb{E}[|\mathcal{X}_1|]<\infty$. Then it holds that:
This, combined with the fact that for all $i \in\{1,2,...,n\}$ it is the case that $\mathcal{X}_i: \Omega\rightarrow\R$ are i.i.d. random variables, and e.g. \cite[Theorem~2.1]{rio_moment_2009} (applied with $p \curvearrowleft p$, $\lp S_i \rp_{i \in\{0,1,...,n\}}\curvearrowleft\lp\textstyle\sum^i_{k=1}\lp\E\lb \mathcal{X}_k \rb- \mathcal{X}_k\rp\rp_{i \in\{0,1,...,n\}}$, $\lp X_i \rp_{i \in\{1,2,...,n\}}\curvearrowleft\lp\E\lb \mathcal{X}_i \rb- \mathcal{X}_i \rp_{i \in\{1,2,...,n\}}$ in the notation of \cite[Theorem~2.1]{rio_moment_2009}) ensures that:
Let $p\in[2,\infty)$, $n \in\N$, let $\lp\Omega, \mathcal{F}, \mathbb{P}\rp$ be a probability space, and let $\mathcal{X}_i: \Omega\rightarrow\R$, $i \in\{1,2,...,n\}$ be i.i.d. random variables with $\E\lb\lv\mathcal{X}_1\rv\rb < \infty$. Then it holds that:
Let $p \in[2,\infty)$, $n\in\N$, let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, and let $\mathcal{X}_i: \Omega\rightarrow\R$, $i \in\{1,2,...,n\}$, be i.i.d. random variables with $\E[|\mathcal{X}_1|] < \infty$. Then it holds that:
&\leqslant\frac{\mathfrak{m}}{m^{\frac{1}{2}}}\left[\left(\E\left[ \lv g \lp x+\mathcal{W}^0_T \rp \rv^\mathfrak{p}\right]\right)^{\frac{1}{\mathfrak{p}}}\right]
\end{align}
\end{lemma}
\begin{proof} For notational simplicity, let $G_k: [0,T]\times\mathbb{R}^d \times\Omega\rightarrow\mathbb{R}$, $k\in\mathbb{Z}$, satisfy for all $k\in\mathbb{Z}$, $t\in[0,T]$, $x\in\mathbb{R}^d$ that:
Observe that the hypothesis that $(\mathcal{W}^\theta)_{\theta\in\Theta}$ are independent Brownian motions and the hypothesis that $g \in C(\mathbb{R}^d,\mathbb{R})$ assure that for all $t \in[0,T]$, $x\in\mathbb{R}^d$ it holds that $(G_k(t,x))_{k\in\mathbb{Z}}$ are i.i.d. random variables. This and Corollary \ref{cor:1.22.2} (applied for every $t\in[0,T]$, $x\in\mathbb{R}^d$ with $p \curvearrowleft\mathfrak{p}$, $n \curvearrowleft m$, $(X_k)_{k\in\{1,2,...,m\}}\curvearrowleft(G_k(t,x))_{k\in\{1,2,...,m\}}$ in the notation of Corollary \ref{cor:1.22.2}) ensure that for all $t\in[0,T]$, $x \in\mathbb{R}^d$ it holds that:
Observe next that for all $\theta\in\Theta$, $\mft\in[0,T]$, $t\in[0,T-\mft]$, $x\in\mathbb{R}^d$ it holds that:
\begin{align*}
U^\theta(t+\mft,x) = \frac{1}{m}\left[\sum^{m}_{k=1} g \left(x+\mathcal{W}^{(\theta,0,-k)}_{T-(t+\mft)}\right)\right] = \frac{1}{m}\left[\sum^{m}_{k=1} g \left(x+\mathcal{W}^{(\theta,0,-k)}_{(T-\mft)-t}\right)\right]
\end{align*}
\medskip
In other words, for each $\mft\in[0,T]$ the random field $U^\theta(\cdot+\mft,\cdot)$ has the same structure as the approximation in Setting \ref{primarysetting} with terminal time $T-\mft$. Hence Lemma \ref{lem:1.25} (applied for every $\mft\in[0,T]$ with $\mathfrak{L}\curvearrowleft\mathfrak{L}$, $p \curvearrowleft p$, $\mathfrak{p}\curvearrowleft\mathfrak{p}$, $T \curvearrowleft(T-\mft)$ in the notation of Lemma \ref{lem:1.25}) ensures that for all $\mft\in[0,T]$, $t \in[0,T-\mft]$, $x \in\R^d$ we have:
This completes the proof of Corollary \ref{cor:1.25.1}.
\end{proof}
\begin{theorem}\label{tentpole_1} Let $T,L,p,q, \mathfrak{d}\in[0,\infty)$, $m \in\mathbb{N}$, $\Theta=\bigcup_{n\in\mathbb{N}}\Z^n$, let $g_d\in C(\R^d,\R)$, $d\in\N$, and assume for all $d\in\N$, $t \in[0,T]$, $x =(x_1,x_2,...,x_d)\in\R^d$, $v,w \in\R$ that $|g_d(x)|\leqslant Ld^p \left(1+\sum^d_{k=1}\left|x_k \right|\right)$, let $\left(\Omega, \mathcal{F}, \mathbb{P}\right)$ be a probability space, let $\mathcal{W}^{d,\theta}: [0,T]\times\Omega\rightarrow\R^d$, $d\in\N$, $\theta\in\Theta$, be independent standard Brownian motions, assume for every $d\in\N$ that $\left(\mathcal{W}^{d,\theta}\right)_{\theta\in\Theta}$ are independent, let $u_d \in C([0,T]\times\R^d,\R)$, $d \in\N$, satisfy for all $d\in\N$, $t\in[0,T]$, $x \in\R^d$ that $\E\left[\left| g_d \left(x+\mathcal{W}^{d,0}_{T-t}\right)\right|\right] < \infty$ and:
\begin{align}
u_d(t,x) = \E\left[ g_d \lp x+\mathcal{W}^{d,0}_{T-t}\rp\right]
\end{align}
and for every $d,n,m \in\N$ let $\mathfrak{C}_{d,n,m}\in\Z$ be the number of function evaluations of $u_d(0,\cdot)$ and the number of realizations of scalar random variables which are used to compute one realization of $U^{d,0}_m(T,0): \Omega\rightarrow\R$.
There then exists $c \in\R$ and $\mathfrak{N}:\N\times(0,1]\rightarrow\N$ such that for all $d \in\N$, $\varepsilon\in(0,1]$ it holds that:
\begin{proof} Throughout the proof let $\mathfrak{m}_\mathfrak{p}=\sqrt{\mathfrak{p}-1}$, $\mathfrak{p}\in[2,\infty)$, let $\mathbb{F}^d_t \subseteq\mathcal{F}$, $d\in\N$, $t\in[0,T]$ satisfy for all $d \in\N$, $t\in[0,T]$ that:
\begin{align}\label{2.3.29}
\mathbb{F}^d_t = \begin{cases}
\bigcap_{s\in[t,T]}\sigma\left(\sigma\left(W^{d,0}_r: r \in [0,s]\right) \cup\{A\in\mathcal{F}: \mathbb{P}(A)=0\}\right) & :t<T \\
\sigma\left(\sigma\left(W^{d,0}_s: s\in [0,T]\right) \cup\{ A \in\mathcal{F}: \mathbb{P}(A)=0\}\right) & :t=T
\end{cases}
\end{align}
Observe that (\ref{2.3.29}) guarantees that $\mathbb{F}^d_t \subseteq\mathcal{F}$, $d\in\N$, $t\in[0,T]$ satisfies that:
\begin{enumerate}[label = (\Roman*)]
\item it holds for all $d\in\N$ that $\{ A \in\mathcal{F}: \mathbb{P}(A)=0\}\subseteq\mathbb{F}^d_0$
\item it holds for all $d \in\N$, $t\in[0,T]$, that $\mathbb{F}^d_t =\bigcap_{s \in(t,T]}\mathbb{F}^d_s$.
\end{enumerate}
Combining item (I), item (II), (\ref{2.3.29}), and \cite[Lemma 2.17]{hjw2020} assures us that for all $d \in\N$ it holds that $W^{d,0}:[0, T]\times\Omega\rightarrow\R^d$ is a standard $\left(\Omega, \mathcal{F}, \mathbb{P}, \left(\mathbb{F}^d_t\right)_{t\in[0, T]}\right)$-Brownian motion. In addition, the definition of $\mathbb{F}^d_t$ in (\ref{2.3.29}) ensures that for all $d\in\N$, $x\in\R^d$ it holds that $[0,T]\times\Omega\ni(t,\omega)\mapsto x + W^{d,0}_t(\omega)\in\R^d$ is an $\left(\mathbb{F}^d_t\right)_{t\in[0,T]}/\mathcal{B}\left(\R^d\right)$-adapted stochastic process with continuous sample paths.
\medskip
This, the fact that for all $d\in\N$, $t\in[0,T]$, $x\in\R^d$ it holds that $a_d(t,x)=0$, and the fact that for all $d\in\N$, $t \in[0,T]$, $x,v\in\R^d$ it holds that $b_d(t,x)v = v$ yield that for all $d \in\N$, $x\in\R^d$ the process $[0,T]\times\Omega\ni(t,\omega)\mapsto x+W^{d,0}_t(\omega)\in\R^d$ satisfies that for all $t\in[0,T]$ it holds $\mathbb{P}$-a.s. that:
\begin{align}
x+W^{d,0}_t = x + \int^t_0 0 ds + \int^t_0 dW^{d,0}_s = x + \int^t_0 a_d(s,x+W^{d,0}_s) ds + \int^t_0 b_d(s,x+W^{d,0}_s) dW^{d,0}_s
\end{align}
\medskip
This and \cite[Lemma 2.6]{hjw2020} (applied for every $d \in\N$, $x \in\R^d$ with $d \curvearrowleft d$, $m \curvearrowleft d$, $T \curvearrowleft T$, $C_1\curvearrowleft d$, $ C_2\curvearrowleft0$, $\mathbb{F}\curvearrowleft\mathbb{F}^d$, $\xi\curvearrowleft x, \mu\curvearrowleft a_d, \sigma\curvearrowleft b_d, W \curvearrowleft W^{d,0}, X \curvearrowleft\left(\left[0,T\right]\times\Omega\ni(t, \omega)\mapsto x+W^{d,0}_t(\omega)\in\R^d\right)$ in the notation of \cite[Lemma 2.6]{hjw2020}) ensures that for all $r\in[0,\infty)$, $d\in\N$, $x\in\R^d$, $t \in[0,T]$ it holds that
This, the triangle inequality, and the fact that for all $v$,$w\in[0,\infty)$, $r\in(0,1]$, it holds that $(v+w)^r \leqslant v^r + w^r$ assure that for all $\p\in[2,\infty)$, $d\in\N$, $x \in\R^d$ it holds that:
Given that for all $d\in\N$, $x \in[-L,L]^d$ it holds that $\left\| x \right\|_E \leqslant Ld^{\frac{1}{2}}$, this demonstrates that for all $\p\in[2,\infty)$, $d\in\N$ it holds that:
This and the fact that $\mathfrak{m}_\p\leqslant2$ imply that for all $d\in\N$ and $\ve\in(0,\infty)$, with $L,q, \mathfrak{p}, d,T$ fixed, there exists an $\mathfrak{M}_{L,q,\mathfrak{p},d,T}\in\R$ such that $\mathfrak{N}_{d,\ve}\geqslant\mathfrak{M}_{L,q,\mathfrak{p},d,T}$ forces:
Thus (\ref{(2.3.33)}) and (\ref{2.3.34}) together prove (\ref{(2.48)}).
Note that $\mathfrak{C}_{d,\mathfrak{N}_{d,\ve},\mathfrak{N}_{d,\ve}}$ is the number of function evaluations of $u_d(0,\cdot)$ and the number of realizations of scalar random variables which are used to compute one realization of $U^{d,0}_{\mathfrak{N}_{d,\ve}}(T,0):\Omega\rightarrow\R$. Let $\widetilde{\mathfrak{N}_{d,\ve}}$ be the value of $\mathfrak{N}_{d,\ve}$ that yields equality in (\ref{2.3.34}). In that situation the number of evaluations of $u_d(0,\cdot)$ does not exceed $\widetilde{\mathfrak{N}_{d,\ve}}$, and each evaluation of $u_d(0,\cdot)$ requires at most one realization of a scalar random variable. Thus the total count does not exceed $2\widetilde{\mathfrak{N}_{d,\ve}}$, whence:
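To illustrate the order of this count, here is a schematic computation with a generic constant $c\in(0,\infty)$ and exponent $q$ standing in for the precise quantities appearing in (\ref{2.3.34}) (not the exact constants of the theorem, and ignoring integer rounding): if the error bound takes the form $c\, d^q\,\lp\mathfrak{N}_{d,\ve}\rp^{-\frac{1}{2}}\leqslant\ve$, then
\begin{align*}
\widetilde{\mathfrak{N}_{d,\ve}}=c^2 d^{2q}\ve^{-2}\qquad\text{and hence}\qquad\mathfrak{C}_{d,\mathfrak{N}_{d,\ve},\mathfrak{N}_{d,\ve}}\leqslant2\widetilde{\mathfrak{N}_{d,\ve}}=2c^2 d^{2q}\ve^{-2}
\end{align*}
which grows at most polynomially in $d$ and $\ve^{-1}$.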