On zeros of Martin-Löf random Brownian motion

We investigate the sample path properties of Martin-Löf random Brownian motion. We show (1) that many classical results which are known to hold almost surely hold for every Martin-Löf random Brownian path, (2) that the effective dimension of zeroes of a Martin-Löf random Brownian path must be at least 1/2, and conversely that every real with effective dimension greater than 1/2 must be a zero of some Martin-Löf random Brownian path, and (3) that the solution to the Dirichlet problem in the plane is computable, via a new proof.


Background and notation

Brownian motion
Heuristically, Brownian motion is the random continuous function resulting from the limit of discrete random walks as the time interval approaches zero. The paths of Brownian motion are considered typical with respect to Wiener measure on a function space, generally C[0, 1], C[0, ∞), or C(I, R^n) for I = [0, 1] or [0, ∞). The Martin-Löf random elements of a function space with respect to Wiener measure are known as Martin-Löf random Brownian motion. Fouché showed that the class of Martin-Löf random Brownian motions coincides with the class of complex oscillations, a class of functions defined by Asarin and Pokrovskii [1] and later investigated in greater depth by Fouché [5,6,7], Davie and Fouché [3], Kjos-Hanssen and Nerode [16], and Szabados [17].
In this article, we continue the study of Martin-Löf random Brownian motion. We will demonstrate that many classical theorems which hold almost surely hold for every Martin-Löf random Brownian path, we will prove results toward a classification of the effective dimension of the zeroes of Martin-Löf random Brownian motion, and we will demonstrate a new proof that the solution to the Dirichlet problem in the plane is computable.
We will use 2^ω to denote the space of infinite binary strings, which we will sometimes identify with reals in [0, 1]. We denote the spaces of continuous functions f : [0, 1] → R and f : R^{≥0} → R by C[0, 1] and C[R^{≥0}] respectively. In other cases, the space of continuous functions from a set X to a set Y will be denoted by C(X, Y).
Standard (one-dimensional) Brownian motion is a real-valued stochastic process {B(t) : t ∈ I} (I = [0, 1] or I = [0, ∞)) where the following hold. First, for any t_0 < t_1 < ... < t_n, the increments B(t_n) − B(t_{n−1}), B(t_{n−1}) − B(t_{n−2}), ..., B(t_1) − B(t_0) are independent random variables. Second, for all t ≥ 0 and h > 0, the increment B(t+h) − B(t) is normally distributed with mean 0 and variance h. Third, B(0) = 0 almost surely, and B is almost surely continuous. These requirements induce a measure on a function space, called Wiener measure, which we will denote by P. The values taken by the random variable B are called sample paths, or simply paths.
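The heuristic of Brownian motion as a limit of random walks, and the defining properties above, can be illustrated numerically. The following sketch (function and variable names are ours, not from the paper) simulates a discretized path with independent Gaussian increments and checks that increments over disjoint intervals of length h have mean square close to h:

```python
import math
import random

def simulate_path(n, seed=0):
    """Simulate B at the grid points k/n on [0, 1], using independent
    Gaussian increments of mean 0 and variance 1/n."""
    rng = random.Random(seed)
    path = [0.0]  # B(0) = 0
    for _ in range(n):
        path.append(path[-1] + rng.gauss(0.0, math.sqrt(1.0 / n)))
    return path

n = 2 ** 17
path = simulate_path(n)

# Increments over 1024 disjoint intervals of length h are independent
# N(0, h); their average square should therefore concentrate near h.
h_steps = n // 1024          # grid steps per interval
h = h_steps / n              # interval length
sq = [(path[(i + 1) * h_steps] - path[i * h_steps]) ** 2 for i in range(1024)]
mean_sq = sum(sq) / len(sq)
```

With 1024 samples the relative standard deviation of the empirical mean is about √(2/1024) ≈ 4%, so the agreement with variance h is a loose but meaningful check.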
It is possible to define Brownian motion starting at any point x at time 0, rather than starting at the origin, in which case we will denote the corresponding measure by P_x (in other words, P_x(B ∈ A) = P(x + B ∈ A)). When we wish to emphasize that we are talking about standard Brownian motion, we will use P_0.
We assume that the reader is familiar with algorithmic randomness and Kolmogorov complexity for binary sequences. One can consult the two books [4,21] for a good overview of the subject. Furthermore, we assume some familiarity with Martin-Löf randomness for computable probability spaces. Gács' lecture notes [9] and the two papers [11,12] by Hoyrup and Rojas are the standard references on the subject. Our main reference for the classical theory of Brownian motion is the recent book by Mörters and Peres [20].

Effective aspects of Brownian motion
The construction presented here is the Franklin-Wiener series representation of Brownian motion as found in [13].
Classically, it is known that if ξ_0, ξ_1, {ξ_{i,j}}_{i∈N, j<2^i} are independent random variables following a normal distribution N(0, 1), then the random variable B defined by the corresponding series expansion is distributed according to Wiener measure. In order to define Martin-Löf randomness for Brownian motion, one needs to make sure that the space of continuous functions C[0, 1], endowed with the uniform distance and Wiener measure (denoted P), is a computable probability space.
The computability of (C[0, 1], P) was proven by Fouché and Davie [3,6] (see the next subsection for more details). One can take as a dense set of points the piecewise linear functions which interpolate between finitely many points with rational coordinates; for p such a function and r > 0 a rational number, the P-measure of {f : ‖f − p‖_∞ < r} is computable uniformly in a code for p.
Therefore, it is possible to define Martin-Löf randomness for Brownian motion in the usual way: the Martin-Löf random elements of (C[0, 1], P) are those which do not belong to ⋂_n U_n, where (U_n)_n is a universal Martin-Löf test. To stress the difference between Brownian motion as a stochastic process and Martin-Löf randomness on the space (C[0, 1], P), we will use the cursive letter B for the random variable taking values in C[0, 1] and distributed according to P, and use the letter B for individual elements of C[0, 1]. Recall that we refer to elements B ∈ C[0, 1] as (sample) paths, and therefore we will only talk about Martin-Löf random paths, and not Martin-Löf random Brownian motion.
Note that all of the above can be adapted in a straightforward way to the space C[0, ∞), which by the above correspondence (1) can be identified with ω copies of (C[0, 1], P), endowed with the product measure P ω .
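The paper works with the Franklin–Wiener series; as an illustration of the same idea, the closely related Lévy–Ciesielski construction expands B(t) = ξ_0·t + Σ_{i,j} ξ_{i,j}·s_{i,j}(t) over the Schauder tent functions s_{i,j} (our choice of basis and normalization is an assumption and may differ from the paper's). A deterministic sanity check is available: by a Parseval-type identity, the squared basis values at t sum to Var B(t) = t, and at a dyadic point t = k/2^L the truncation at level L is already exact.

```python
import random

def schauder(i, j, t):
    """Schauder tent function s_{i,j}: supported on [j/2^i, (j+1)/2^i],
    rising linearly to height 2^(-i/2 - 1) at the midpoint."""
    lo, hi = j / 2 ** i, (j + 1) / 2 ** i
    mid = (lo + hi) / 2
    if lo <= t <= mid:
        return 2 ** (i / 2) * (t - lo)
    if mid < t <= hi:
        return 2 ** (i / 2) * (hi - t)
    return 0.0

def basis_values(t, levels):
    """Values at t of the basis functions: s_0(t) = t, then all tents
    of levels 0, ..., levels - 1."""
    vals = [t]
    for i in range(levels):
        for j in range(2 ** i):
            vals.append(schauder(i, j, t))
    return vals

def sample_path_value(t, levels, rng):
    """One sample of the truncated series xi_0*t + sum xi_{i,j}*s_{i,j}(t)."""
    return sum(rng.gauss(0.0, 1.0) * v for v in basis_values(t, levels))

# Parseval-type check: sum of squared basis values at t equals Var B(t) = t.
total = sum(v * v for v in basis_values(3 / 8, levels=3))
sample = sample_path_value(0.5, 6, random.Random(0))
```

The check `total == 3/8` (up to rounding) reflects that the covariance of the truncated series already agrees with that of Brownian motion at dyadic points of the corresponding level.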

Layerwise computability
Throughout the paper, we will make extensive use of the notion of layerwise computability developed by Hoyrup and Rojas [11,12]. Layerwise computability is a form of uniform relativisation. In computability theory, we often say that an element y is computable in x if y can be computed given x as an oracle, and that an expression F(x) is computable uniformly in x if F is a computable function on the space to which x belongs; there are many examples of this in computable analysis. Layerwise computability is a slightly weaker form of uniformity. First of all, when we say that an expression F(x) is layerwise computable, we only ask that it is defined for x Martin-Löf random on the computable probability space X it belongs to (see [11,12] for the definition of computable probability space). Moreover, we only require uniformity on each "layer" of X, uniformly in n. A layer is a set of the form K_n, where K_n is the complement of U_n, the n-th level of a universal Martin-Löf test over X. An interesting aspect of layers is that they are always effectively compact, even if the space X itself is not compact. So formally, we say that F(x) is computable layerwise in x if there exists a partial computable function G(·, ·) such that G(x, n) = F(x) for all x ∈ K_n.
Layerwise computability is a very powerful tool for studying constructive versions of classical results in probability theory and measure theory (as we shall see in this paper!). Perhaps the most important result involving layerwise computability is the so-called 'randomness preservation theorem' ([11,12]): Let (X, µ) be a computable probability space and F a layerwise computable function over X taking values in a computable metric space Y. Then the push-forward measure µ_F = µ ∘ F^{−1} is computable, and for every Martin-Löf random x ∈ X, the image F(x) is Martin-Löf random on (Y, µ_F). This theorem can for example be used to prove that C[0, 1] with the ‖·‖_∞ norm and Wiener measure is a computable probability space (as alluded to in the previous subsection). Indeed, Fouché and Davie proved that the function Φ which maps a sequence of reals ξ_0, ξ_1, {ξ_{i,j}}_{i∈N, j<2^i} to the function given by the series expansion above is layerwise computable from X to (C[0, 1], ‖·‖_∞), where X is the space of sequences of real numbers in which each coordinate is distributed according to the normal distribution N(0, 1) independently of the others. It is easy to see that X is a computable probability space. Thus, by the above theorem, the measure induced by Φ on (C[0, 1], ‖·‖_∞), which we know to be Wiener measure, is a computable measure.
Another important result, which we will need on several occasions, is that one can compute the integral of layerwise computable functions: for f a bounded layerwise computable function, the integral ∫ f dµ is computable uniformly in an index of f and a bound for it.

Basic properties of Martin-Löf random paths
We begin by showing that the main "almost sure" properties of classical Brownian motion hold for Martin-Löf random paths.

Scaling theorem
The classical scaling theorem states that, for every c > 0, the map sending B(t) to (1/√c) B(ct) preserves Wiener measure [20]. For Martin-Löf random paths, we have the following.
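The classical invariance can be checked concretely at the level of covariances: standard Brownian motion has Cov(B(s), B(t)) = min(s, t), and the rescaled process X(t) = (1/√c)·B(ct) satisfies Cov(X(s), X(t)) = min(cs, ct)/c = min(s, t), i.e., the same covariance structure. A small numerical check of this identity (function names are ours):

```python
def bm_cov(s, t):
    """Covariance of standard Brownian motion: Cov(B(s), B(t)) = min(s, t)."""
    return min(s, t)

def scaled_cov(s, t, c):
    """Covariance of X(t) = (1/sqrt(c)) * B(c*t):
    Cov(X(s), X(t)) = (1/c) * Cov(B(cs), B(ct)) = min(cs, ct)/c."""
    return bm_cov(c * s, c * t) / c

checks = [
    (s / 10, t / 10, c)
    for s in range(1, 11)
    for t in range(1, 11)
    for c in (0.5, 2.0, 3.7)
]
ok = all(abs(scaled_cov(s, t, c) - bm_cov(s, t)) < 1e-12 for s, t, c in checks)
```

Since a centered Gaussian process is determined by its covariance, this identity is exactly the content of the scaling theorem.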

Constructive strong Markov property
The strong Markov property of Brownian motion asserts the following. Let T be a stopping time, that is, a random variable in [0, ∞] which is a function of B, and such that whether {T ≤ t} holds depends only on B ↾ [0, t] (the restriction of B to the interval [0, t]). If T(B) is almost surely finite, then the process defined by t ↦ B(T + t) − B(T) is again a Brownian motion, independent of B ↾ [0, T]. From its classical version, we can derive a constructive version of the strong Markov property which will be very useful in the sequel.

Proposition 2.2
Let T be a layerwise computable, almost surely finite stopping time. Then the function mapping B to the path t ↦ B(T(B) + t) − B(T(B)) maps Martin-Löf random paths to Martin-Löf random paths.

Proof Consider the product space C[0, ∞) × C[0, ∞) endowed with the product measure W × W. Consider the map sending (B_1, B_2) to the concatenation of B_1 up to time T(B_1), continued according to B_2. By the strong Markov property, this map is measure preserving, and it is layerwise computable.

Continuity properties
In his paper establishing many of the local properties of Martin-Löf random Brownian motion [7], Fouché shows that every Martin-Löf random Brownian path obeys a modulus of continuity φ(h) = O(√(h log(1/h))). It is possible to sharpen this result, replacing the big-O constant by the particular constant √2 from the classical result; moreover, while the classical result only establishes the modulus of continuity for "sufficiently small" h, we will demonstrate that "sufficiently small" is layerwise computable from a Martin-Löf random path.

Proposition 2.3 Let B be a ML random Brownian motion. Then for all c < √2, there are infinitely many n such that |B(t + h) − B(t)| ≥ c√(h log(1/h)) holds for h = e^{−n} and some t ∈ [0, 1 − h].
Proof For a large n (to be specified later), split the interval [0, 1] into chunks of size e^{−n} (omitting the last bit). For each 0 ≤ k < e^n, consider the event A_k: |B((k+1)e^{−n}) − B(ke^{−n})| ≥ c√(e^{−n} n). Note that the A_k are independent by definition of Brownian motion and, by time-translation invariance, all have the same probability. Let us estimate the probability of A_0, which is the event |B(e^{−n}) − B(0)| ≥ c√(e^{−n} n). By scaling, it is equal to the probability of the event |B(1) − B(0)| ≥ c√n. By the estimate given in Mörters and Peres [20, Lemma 12.9], and by the assumption on c, there exists an α < 1 such that P(A_0) ≥ e^{−αn} for almost all n. Since the A_k are independent, the probability that no A_k happens is at most (1 − e^{−αn})^{e^n} ≤ exp(−e^{(1−α)n}). Thus, for n taken large enough, this can be made arbitrarily small. Moreover, notice that c can be supposed to be computable, which makes the A_k Π⁰₁ classes, hence the event "no A_k happens" corresponds to a Σ⁰₁ class. Thus, we have a Solovay test that any Martin-Löf random Brownian motion must pass, and for such a Martin-Löf random path, there are infinitely many n for which some A_k happens.

Proposition 2.4 Let B be a ML random Brownian motion. Then for all c > √2, there is h_0, layerwise computable in B, such that for all h < h_0 and all t ∈ [0, 1 − h], |B(t + h) − B(t)| ≤ c√(h log(1/h)).

The proof is the same as that of Mörters and Peres [20, Theorem 1.14], with the addition of keeping track of the layerwise computability of h_0. We recall the proof for completeness.
We first look at increments over a class of intervals, chosen to be sparse, but big enough to approximate arbitrary intervals. More precisely, given n, m ∈ N, we let Λ_n(m) be the collection of dyadic intervals at scale n described in [20, Theorem 1.14], and we further define Λ(m) := ⋃_n Λ_n(m).

Lemma 2.5
For any fixed m and c > √2, for B(t) a Martin-Löf random Brownian motion, there exists n_0 ∈ N, layerwise computable in B(t), such that the desired increment bound holds for any n ≥ n_0 and every interval in Λ_n(m).

Proof From the tail estimate for a standard normal variable X (see, for example, [20, Lemma 12.9]), we obtain the bound (3). Note that c can be taken to be computable, so for fixed m, n ∈ N the event in question is computable in B(t), and the right-hand side of (3) is summable, giving a Solovay test which every Martin-Löf random Brownian motion B(t) will pass.
The standard proof of the equivalence of Solovay randomness and Martin-Löf randomness gives a uniform way of converting a Solovay test into a Martin-Löf test; see, for example, [4]. Thus knowing a k such that a Martin-Löf random path B(t) ∉ U_k gives us an n_0 such that the path no longer appears in any S_n for n > n_0. Thus the n_0 given in the proof above is layerwise computable in B.
Proof See [20, Lemma 1.17].

Proof of Proposition 2.4 Given c > √2, pick 0 < ε < 1 small enough to ensure that c* := c − ε > √2, and pick m ∈ N as in Lemma 2.6. Using Lemma 2.5, we choose n_0 ∈ N large enough that the increment bound holds for all n ≥ n_0 and all intervals [s′, t′] ∈ Λ_n(m). By making ε > 0 small, the first factor on the right can be chosen arbitrarily close to c. This completes the proof of the theorem.

Computability of minimum and maximum
Since a sample path B is almost surely continuous, it almost surely reaches a maximum and a minimum on any given interval. As it turns out, these extremal values can be computed layerwise in B.
Proposition 2.7 The function mapping (B, x, y) to max{B(t) : x ≤ t ≤ y} is computable uniformly in x, y and layerwise in B. The same is true for the minimum function.
Proof To compute the maximum of B(t) on [x, y] to within ε, we run the following simple algorithm. Pick h_0 small enough that B(t) obeys the modulus of continuity with constant c = 2 (see Proposition 2.4) and that 2√(h_0 log(1/h_0)) < ε. Then the maximum of the values B(r_1), ..., B(r_k), where the r_i form a grid of rational points of mesh at most h_0 covering [x, y], is within ε of the true maximum. The minima are also layerwise computable by the same argument.
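The grid algorithm in the proof can be sketched as follows. Here we apply it to an explicitly given 1-Lipschitz function rather than to a random path; for a path one would take h_0 from the modulus of continuity of Proposition 2.4, so the modulus used below is an assumption made for the demo (all names are ours):

```python
import math

def grid_max(f, x, y, h0):
    """Approximate the maximum of f on [x, y] by evaluating f on a grid
    of mesh at most h0. If f has modulus of continuity w, the error is
    at most w(h0): every point of [x, y] is within h0 of a grid point."""
    k = max(1, math.ceil((y - x) / h0))
    pts = [x + (y - x) * i / k for i in range(k + 1)]
    return max(f(p) for p in pts)

# Demo on the 1-Lipschitz function f(t) = t(1 - t), true maximum 1/4 at t = 1/2.
h0 = 1e-3
approx = grid_max(lambda t: t * (1 - t), 0.0, 1.0, h0)
```

The grid value never exceeds the true maximum and undershoots it by at most the modulus at mesh size, which is exactly the error budget used in the proof.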
Note that this argument does not establish the layerwise computability of the time(s) at which the maximum occurs; the best we can say using this argument is that these times are Π⁰₁ in B, and the argument uses the randomness deficiency of B and so is not uniform.

Proposition 2.8 Local maxima and local minima of a Martin-Löf random Brownian motion are Martin-Löf random reals (in particular, they cannot be computable reals).
Proof Fix two rational numbers x < y. It is known classically that max(B, 0, y) is distributed according to the density function f(a) = √(2/(πy)) e^{−a²/(2y)} for a ≥ 0, and f(a) = 0 for a < 0 (see [20, Theorem 2.21]). By the Markov property, max(B, x, y) has the same distribution as B(x) + max(B, 0, y − x), and thus is distributed according to a continuous positive density function. It is known that if a computable measure µ on R admits a continuous positive density function, then its Martin-Löf random elements are exactly the Martin-Löf random reals (see [12]). Since the function B ↦ max(B, x, y) is layerwise computable, its image measure is computable, and by the above it has a continuous positive density function. Moreover, by the randomness preservation theorem, since the function is layerwise computable, the image of a ML random B is random for the image measure, hence is a Martin-Löf random real.
Proof Otherwise, 0 would be a local maximum or a local minimum, which would contradict Proposition 2.8.

Zero sets of Martin-Löf random Brownian motion
In this section, we study the properties of the zero set of Martin-Löf random paths. Once again, we will need some classical results to prove our effective theorems. Most importantly, we will need the next proposition, which gives an exact expression for the probability that a path has a zero in a given interval.

Proposition 3.1 (see [20]) For 0 < a < b, the probability that a Brownian motion started at 0 has a zero in the interval (a, b) is (2/π) arccos √(a/b) = 1 − (2/π) arcsin √(a/b).
We shall also need the following lemma.
Proof Consider the random variable B consisting of a Brownian motion starting at 0, and form the variable B′ by modifying B after a suitable stopping time τ. If B′(t) = 0 for some t, then by continuity we have τ < t, and thus B(t) = B′(t) = 0. The result follows.

The zero set of B is layerwise recursive in B
Following [28, Definition 5.1.1], we say that a closed set C is recursive if the predicate "C ∩ (a, b) ≠ ∅", over pairs (a, b) of rationals, is decidable.

Remark 3.3
Note that a recursive closed set is in particular a Π⁰₁ class. Not all Π⁰₁ classes are recursive. For example, the minimum element of a bounded recursive closed set is necessarily a computable real, a property that not all bounded Π⁰₁ subsets of R have. To see this, suppose without loss of generality that all members of C are positive. Then the minimum is lower semicomputable as sup{a ∈ Q : C ∩ (0, a) = ∅} and upper semicomputable as inf{b ∈ Q : C ∩ (0, b) ≠ ∅}. The main result of this subsection is that the zero set Z_B is recursive layerwise in B.
To prove this fact, we first need the following proposition.

Proposition 3.4 For B Martin-Löf random, 0 is an accumulation point of Z_B; in particular, B has a zero in (0, b) for every b > 0.

Proof For all k, we know from Proposition 3.1 that the probability of Brownian motion not having a zero in the interval (2^{−3k}, 2^{−3k} + 2^{−k}) is O(2^{−k}), which limits to zero, computably, as k → ∞. Moreover, we argued above that not having a zero in a given rational interval is a Σ⁰₁ event; thus this gives us a Martin-Löf test (in fact, a Schnorr test), and thus a Martin-Löf random B must have a zero in infinitely many intervals of the form (2^{−3k}, 2^{−3k} + 2^{−k}).
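Proposition 3.1 makes the probabilities in this test explicitly computable. The sketch below evaluates the probability of having no zero in (2^{−3k}, 2^{−3k} + 2^{−k}) from the classical closed form and confirms that it tends to 0 computably, below the explicit bound 2^{−k} (which follows from arcsin x ≤ πx/2; names are ours):

```python
import math

def prob_no_zero(a, b):
    """P(B has no zero in (a, b)) = (2/pi) * arcsin(sqrt(a/b)), for
    0 < a < b: the classical arcsine law for zeros of Brownian motion."""
    return (2 / math.pi) * math.asin(math.sqrt(a / b))

probs = []
for k in range(1, 15):
    a = 2.0 ** (-3 * k)
    b = a + 2.0 ** (-k)
    probs.append(prob_no_zero(a, b))
```

Here sqrt(a/b) = 1/sqrt(1 + 2^{2k}) < 2^{−k}, so the probabilities decay geometrically, which is what makes the test a Schnorr test.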

Proposition 3.5 For B Martin-Löf random, the set Z_B does not contain any computable real other than 0.
Proof Let x > 0 be a computable real, and let (a_k) be a computable sequence of rationals with x ∈ [a_k, a_k + 2^{−k}] for every k. By Proposition 3.1, the probability of having a zero in [a_k, a_k + 2^{−k}] is O(2^{−k/2}) (the multiplicative constant depending on x), and by Proposition 2.8, having a zero in [a_k, a_k + 2^{−k}] for a Martin-Löf random Brownian motion is equivalent to having a positive and a negative value on [a_k, a_k + 2^{−k}], which is a Σ⁰₁ property. Therefore, this induces a Martin-Löf test, and thus any Martin-Löf random B must have no zero in [a_k, a_k + 2^{−k}] for some k; in particular, B(x) ≠ 0.

Theorem 3.6 For B a Martin-Löf random path, Z_B is a non-empty closed set which is recursive layerwise in B.
Let us now prove that Z_B is decidable layerwise in B. We need to see how to decide, layerwise in B, whether B has a zero in a rational interval (a, b) with a < b. If a = 0, we know by Proposition 3.4 that the answer is necessarily yes, so we can assume a > 0. The first important observation is that, in case B does have a zero on (a, b), it must take a positive and a negative value somewhere on the interval: otherwise, 0 would be a local maximum or minimum, which by Proposition 2.8 cannot happen. Conversely, having a positive and a negative value on the interval guarantees the existence of a zero. Having a positive and a negative value on (a, b) is a Σ⁰₁ event; on the other hand, having no zero on [a, b] means min{|B(t)| : t ∈ [a, b]} > 0, which can be recognized layerwise in B by the layerwise computability of minima. Note that by Proposition 3.5, B cannot have a zero at a nor at b, so exactly one of these two semi-decidable events must occur, and running both procedures in parallel decides which. This theorem yields several useful corollaries.
For F a finite union of closed rational intervals, the probability P{Z_B ∩ F ≠ ∅} is computable uniformly in a code for F. To get the lower semi-computability of P{Z_B ∩ U ≠ ∅} when U is an effectively open set, it suffices to observe that P{Z_B ∩ U ≠ ∅} = sup_t P{Z_B ∩ U[t] ≠ ∅}, where U[t] is the approximation of U at stage t, which is a finite union of rational intervals.
Finally, we show that Z B has no isolated point for B Martin-Löf random.

Proposition 3.9
For B Martin-Löf random, Z B has no isolated point.
Proof Consider τ_q = inf{t ≥ q : B(t) = 0}, the first zero after some q ∈ Q. By closure of Z_B, the infimum is a minimum. Moreover, τ_q is layerwise computable in B by Corollary 3.7 and is an almost surely finite stopping time. Thus by the constructive strong Markov property, τ_q is not an isolated zero from the right.

Now, consider zeros that are not of the form τ_q; call some such zero t_0. To see it is not isolated from the left, consider a sequence of rationals q_n ↑ t_0. By assumption on t_0, for all n there is some τ_{q_n} ∈ (q_n, t_0), so t_0 is not an isolated zero from the left.

Effective version of Kahane's Theorem
Next, we prove an effective version of the following theorem of Kahane's, which we will need in the next section.

Proof By the effective scaling theorem, if B is Martin-Löf random then so is t ↦ (1/√c) B(ct), and it satisfies (i). Thus we only need to prove (ii). For this we will use the classical version of Kahane's theorem, together with Blumenthal's 0-1 law and some recent results of algorithmic randomness. Recall that Blumenthal's 0-1 law states that any event which only depends on an infinitesimal time interval to the right of the origin (formally, any event in the σ-algebra ⋂_{s>0} σ{B(t) : 0 ≤ t ≤ s}) has probability either zero or one (see [20, Theorem 2.7]). Since the sets involved are Π⁰₁ classes, it follows that the set U is Σ⁰₁, as wanted. We can now apply the effective ergodic theorem proven in [2,8]: since U has measure less than 1 (by Kahane's theorem) and is a Σ⁰₁ set, there are infinitely many n such that S_n(B) ∉ U (in fact, the set of such n's is a subset of N of positive density), i.e., such that (ii) holds.

The effective dimension of zeros
Effective Hausdorff dimension is a modification of Hausdorff dimension for the computability setting. Intuitively, effective Hausdorff dimension describes how "computably locatable" a point or set is in addition to its size. For example, an algorithmically random point in R n has effective Hausdorff dimension n because it can't be computably located any more precisely than a small computable ball, which has Hausdorff dimension n.
There are many equivalent definitions of effective Hausdorff dimension, but we will use the following definition of Mayordomo [19]. See the book by Downey and Hirschfeldt [4], or papers by Lutz [18] and Reimann [23,25] for more details. This definition can be extended to real numbers by identifying them with their binary representation.
In this section, we will try to characterize the effective dimension of the zeroes of Martin-Löf random paths. This can be broken down into two questions: (1) Given a Martin-Löf random B, what is the set {cdim(x) | x > 0 and x ∈ Z_B}?
(2) Given a real x, can we give a necessary or sufficient condition in terms of the effective dimension of x for the existence of some Martin-Löf random path which has a zero at x?
As to the first question, Kjos-Hanssen and Nerode [16] have shown that with probability 1 over B, {cdim(x) | x > 0 and x ∈ Z_B} is dense in [1/2, 1]. We make this more precise by showing that for every Martin-Löf random path B (not just almost all paths), {cdim(x) | x > 0 and x ∈ Z_B} is contained in [1/2, 1] and contains all the computable reals of this interval greater than 1/2.
We will answer the second question by proving that having effective dimension at least 1/2 is necessary, and that having effective dimension strictly greater than 1/2 is sufficient (while having dimension exactly 1/2 is not sufficient in general).

The dimension spectrum of Z B
The next theorem is a direct consequence of the effective version of Kahane's theorem.

Proof Let B be such a path and α such a real. Consider the Bernoulli measure µ_p (i.e., the measure where each bit has probability p of being a zero, independently of all other bits) such that p < 1/2 and −p log p − (1 − p) log(1 − p) = α. Since α is computable, so is p (and hence µ_p), because the function p ↦ −p log p − (1 − p) log(1 − p) is computable and increasing on [0, 1/2]. Let E_1 = {0} and let E_2 be the complement of the first level of the universal Martin-Löf test for µ_p (it is a Π⁰₁ class since µ_p is computable). It is well known that every set of positive µ_p-measure has Hausdorff dimension ≥ α, and moreover that every µ_p-random real has constructive Hausdorff dimension α (see for example Reimann [23]). Applying Theorem 3.11, there exists some c such that some x ∈ E_2 satisfies 2^{−c}x ∈ Z_B. Multiplying by 2^{−c} just adds c zeros to the binary expansion of x; thus 2^{−c}x has the same constructive dimension as x, which is α.
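The computability of p from α in the proof rests on inverting the binary entropy function, which is continuous and increasing on [0, 1/2]; this inversion can be made concrete by bisection (a sketch with our own names):

```python
import math

def H(p):
    """Binary entropy -p log2 p - (1-p) log2 (1-p), with H(0) = H(1) = 0."""
    if p == 0.0 or p == 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def entropy_inverse(alpha, tol=1e-12):
    """Find p in [0, 1/2] with H(p) = alpha, by bisection.
    Correct because H is continuous and increasing on [0, 1/2]."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if H(mid) < alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p = entropy_inverse(0.75)
```

The bisection halves the interval at each step, so p is computed to any prescribed precision from a name of α, mirroring the effectivity claim in the proof.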

Question 1
The previous theorem could, with some additional effort, be strengthened to 0′-computable α. However, we conjecture that a stronger result is true, namely that for every Martin-Löf random B, {cdim(x) | x > 0 and x ∈ Z_B} = [1/2, 1]. We do not know how to show this and leave it as an open question.

Being a zero of a Martin-Löf random path
We now address the second of the two questions above: what properties (in terms of effective dimension or Kolmogorov complexity) characterize the reals that belong to Z_B for some Martin-Löf random B? To do so, we largely borrow from the work of Kjos-Hanssen [15], but with a number of necessary adaptations to Brownian motion (the paper [15] studies a different stochastic process, namely random closed sets, a particular type of percolation limit sets). Proposition 3.1 gives us a precise expression for the probability of a Brownian motion B to have a zero in a given interval. The key step needed to adapt Kjos-Hanssen's techniques is to estimate the probability for B to have a zero in each of two intervals of the same length.

Proposition 4.3 Let 0 < a < b < 1 and ε > 0. Suppose that the intervals [a, a + ε] and [b, b + ε] are disjoint. Let δ be the distance between them (i.e., δ = b − a − ε). Let A_1 be the event "B(s_1) = 0 for some s_1 ∈ [a, a + ε]" and A_2 be the event "B(s_2) = 0 for some s_2 ∈ [b, b + ε]". Then P(A_1 ∩ A_2) = O(ε/√(aδ)).

Proof In this proof, we make use of the following notation: given an event A and τ ≥ 0, A^{↑τ} is the unique (by assumption on A) event such that t ↦ B(t + τ) ∈ A^{↑τ} if and only if t ↦ B(t) ∈ A. Now, let A_1 and A_2 be the above events, and let us write P(A_1 ∩ A_2) = P(A_1) · P(A_2 | A_1). The term P_0(A_1) is, by Proposition 3.1, equal to O(√(ε/a)). It remains to evaluate the term P(A_2 | A_1). The event A_2 only depends on the values of B after time a + ε, so, conditioning on the value z = B(a + ε), we can write P(A_2 | A_1) = ∫ P_z(A_2^{↑(a+ε)}) f(z) dz, where f is the density function of B(a + ε) conditioned on A_1. By shift invariance of the Wiener measure, we observe that in this expression, the term P_z(A_2^{↑(a+ε)}) is equal to P_z(B has a zero in [δ, δ + ε]). This is, in turn, always bounded by P_0(B has a zero in [δ, δ + ε]) = O(√(ε/δ)), by Proposition 3.1. Thus P(A_2 | A_1) = O(√(ε/δ)), and P(A_1 ∩ A_2) = O(√(ε/a)) · O(√(ε/δ)) = O(ε/√(aδ)). We have thus established the desired result.

A necessary and a sufficient condition
Our next theorem gives a necessary condition for a point to be a zero of some Martin-Löf random path.

Theorem 4.4 Let B be Martin-Löf random. Then every a > 0 with B(a) = 0 satisfies cdim(a) ≥ 1/2.

Proof Suppose that for a given B, we have B(a) = 0 for some a > 0 such that cdim(a) < 1/2. We will show that B is not Martin-Löf random.
Let ρ be a rational such that cdim(a) < ρ < 1/2. Take also some rational b such that 0 < b < a. By definition of constructive dimension, for all n there exists a prefix σ of a such that K(σ) ≤ ρ|σ| − n. For all strings σ such that 0.σ > b, let I_σ = [0.σ, 0.σ + 2^{−|σ|}] and let E_σ be the event "B has a positive and a negative value in I_σ". The event E_σ is a Σ⁰₁ subset of C[0, 1], uniformly in σ, and the probability of E_σ is O(2^{−|σ|/2}) by Proposition 3.1 (the multiplicative constant depending on b). Define U_n to be the union of the events E_σ over all σ with 0.σ > b and K(σ) ≤ ρ|σ| − n. By assumption, B belongs to almost all U_n. However, we have P(U_n) = O(2^{−n}). Thus the U_n form a Martin-Löf test, which shows that B is not Martin-Löf random.
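The omitted bound P(U_n) = O(2^{−n}) can plausibly be reconstructed as follows (our reconstruction, assuming U_n gathers the events E_σ with 0.σ > b and K(σ) ≤ ρ|σ| − n, that ρ < 1/2, and using the fact that there are at most 2^{ρℓ−n} strings σ of length ℓ with K(σ) ≤ ρℓ − n):

```latex
\begin{align*}
P(U_n)
  &\le \sum_{\substack{\sigma\,:\;0.\sigma > b\\ K(\sigma)\le\rho|\sigma|-n}} P(E_\sigma)
   \le C \sum_{\ell\ge 1} \#\{\sigma : |\sigma|=\ell,\ K(\sigma)\le\rho\ell-n\}\, 2^{-\ell/2}\\
  &\le C \sum_{\ell\ge 1} 2^{\rho\ell-n}\, 2^{-\ell/2}
   = C\, 2^{-n} \sum_{\ell\ge 1} 2^{-(1/2-\rho)\ell}
   = O(2^{-n}),
\end{align*}
```

where the geometric series converges precisely because ρ < 1/2.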
We now prove an (almost) counterpart of Theorem 4.4.

Theorem 4.5 Every real x > 0 with cdim(x) > 1/2 is the zero of some Martin-Löf random path.

The proof is much more difficult and involves the notion of α-energy. Given a measure µ on R and α ≥ 0, the α-energy of µ is the quantity I_α(µ) = ∫∫ |x − y|^{−α} dµ(x) dµ(y). This quantity may be finite or infinite, depending on the value of α. We will need the following two lemmas.

Lemma 4.6
Let β > α ≥ 0. If µ is a measure such that µ(A) ≤ c · |A|^β for every interval A (or equivalently, for every dyadic interval) and for some constant c, then µ has finite α-energy.
Lemma 4.7 Let β > 1/2 and let µ be a finite Borel measure on [0, 1] such that for every dyadic interval I, µ(I) ≤ c · |I|^β for some fixed constant c (and thus, by the previous lemma, µ has finite 1/2-energy). Then there exists a constant c′ > 0 such that the following holds: for any set A ⊆ [1/2, 1] which is a countable union of closed dyadic intervals, µ(A) ≤ c′ · √(P{Z_B ∩ A ≠ ∅}).

Proof It suffices to prove this for a finite number of intervals, and up to splitting them if necessary we can assume that they all have the same length 2^{−n} for some n. Let I_1, ..., I_k be those intervals. Define for all i the random variable X_i = 2^{n/2} µ(I_i) · 1{B has a zero in I_i}, and let Y = Σ_i X_i, so that Y > 0 exactly when Z_B meets A. To bound P(Y > 0) from below, we will use the Chebyshev–Cantelli inequality P(Y > 0) ≥ E(Y)²/E(Y²). Let us evaluate separately E(Y) and E(Y²). We have E(Y) = 2^{n/2} Σ_i µ(I_i) P(B has a zero in I_i) ≥ c_1 µ(A) for some constant c_1 ≠ 0, the inequality coming from Proposition 3.1 (each I_i is contained in [1/2, 1], so the probability of a zero in I_i is at least of order 2^{−n/2}).
Let us now turn to E(Y²), which we need to bound by a constant. We have E(Y²) = 2^n Σ_{i,j} µ(I_i) µ(I_j) P(B has a zero in both I_i and I_j). To evaluate this sum, we decompose it into three parts: the diagonal terms i = j, the terms where I_i and I_j are adjacent, and the terms where they are not. The first part is an easy computation. For all i, the diagonal term is 2^n µ(I_i)² P(B has a zero in I_i) = O(2^{n/2} µ(I_i)²) = O(2^{(1/2−β)n} µ(I_i)), using the fact that µ(I_i) ≤ |I_i|^β = 2^{−βn}; summing over i and using β > 1/2, the first part is O(µ(A)) = O(1). For the second part, we use a rough estimate: for adjacent intervals, we bound the probability of a zero in both by the probability of a zero in I_i alone, which is O(2^{−n/2}). Combining this with µ(I_j) ≤ 2^{−βn}, and since each interval I_i has at most two adjacent intervals I_j, the second part is likewise O(2^{(1/2−β)n} µ(A)) = O(1). Finally, for the third part, we will use the fact that the 1/2-energy of µ is finite. Let us, for a pair of nonadjacent intervals I_i, I_j with max(I_i) < min(I_j), denote by g(i, j) the length of the gap between the two, i.e., g(i, j) = min(I_j) − max(I_i). By Proposition 4.3, P(B has a zero in both I_i and I_j) = O(2^{−n}/√(g(i, j))) (note that we use the fact that I_i and I_j are contained in [1/2, 1], hence min(I_i) is bounded away from 0). Thus the third part is O(Σ_{1≤i<j≤k} µ(I_i) µ(I_j)/√(g(i, j))). Now, since I_i and I_j are non-adjacent dyadic intervals of length 2^{−n}, we have g(i, j) ≥ 2^{−n}; therefore, for two reals x ∈ I_i and y ∈ I_j, we have |y − x| ≤ 3g(i, j). By this observation, Σ_{1≤i<j≤k} µ(I_i) µ(I_j)/√(g(i, j)) = O(∫∫ |x − y|^{−1/2} dµ(x) dµ(y)), and the last quantity is finite because the 1/2-energy of µ is finite by Lemma 4.6.
We have thus established that E(Y 2 ) = O(1), which completes the proof.
Let KM denote the a priori Kolmogorov complexity function (see [4, Section 6.3.2]). Recall that KM(σ) = K(σ) + O(log |σ|); thus in particular K can be replaced by KM in the definition of effective dimension. The reason we need KM instead of K is a result of Reimann [24, Theorem 14], which we will apply in the proof of Theorem 4.5: if z satisfies KM(z ↾ n) ≥ βn for almost all n, then there exists a measure µ with µ(A) = O(|A|^β) for every interval A, such that z is Martin-Löf random for µ.

Proof of Theorem 4.5 Let z be of dimension α > 1/2. Let β be a rational such that 1/2 < β < α. Then for almost all n, KM(z ↾ n) ≥ βn. By Reimann's theorem, let µ be a measure such that µ(A) = O(|A|^β) for all intervals A, and such that z is Martin-Löf random for the measure µ.
For all n, let K_n be the complement of the n-th level of the universal Martin-Löf test over (C[0, 1], P), and consider the set U_n = {x ∈ [1/2, 1] : B(x) ≠ 0 for all B ∈ K_n}. We claim that U_n is Σ⁰₁ uniformly in n, and µ(U_n) = O(2^{−n/2}). To see that it is Σ⁰₁, suppose that x ∈ U_n, i.e., B(x) ≠ 0 for all B ∈ K_n. The set K_n being compact (see Section 1), the value of |B(x)| for B ∈ K_n reaches a positive minimum. Thus there is a rational a such that |B(x)| > a for all B ∈ K_n. By uniform continuity of the members of K_n (ensured by Proposition 2.4), there is a rational closed interval I containing x such that |B(t)| > a/2 for all t ∈ I and B ∈ K_n. Thus U_n is the union of the rational intervals (s_1, s_2) such that min{|B(t)| : t ∈ [s_1, s_2]} > b for some rational b > 0 and all B ∈ K_n. Moreover, the condition "min{|B(t)| : t ∈ [s_1, s_2]} > b for all B ∈ K_n" is Σ⁰₁, because the function B ↦ min{|B(t)| : t ∈ [s_1, s_2]} is layerwise computable (thus uniformly computable on K_n), and the minimum of a computable function on an effectively compact set is lower semi-computable uniformly in a code for that set. This shows that U_n is Σ⁰₁. To evaluate µ(U_n), let us first observe that, by definition of U_n, P{Z_B ∩ U_n ≠ ∅} ≤ P(C[0, 1] ∖ K_n) ≤ 2^{−n}. Applying Lemma 4.7, it follows that µ(U_n) = O(2^{−n/2}), as wanted. Since z is Martin-Löf random with respect to µ, it cannot be in all sets U_n, and thus it must be the zero of some Martin-Löf random path.

The case of points of effective dimension 1/2
In the previous section we showed that no point of effective dimension less than 1/2 can be the zero of a ML random path, and that every point of dimension greater than 1/2 is necessarily a zero of some ML random path. This leaves open the question of what happens at effective dimension exactly 1/2. While we do not provide a full answer, we show that among points of effective dimension 1/2, some are zeros of some ML random path, and some are not.
The next theorem, which strengthens Theorem 4.4, gives a necessary condition for a point to be a zero of some ML random path.

Theorem 4.8 If x > 0 is a zero of some Martin-Löf random path, then Σ_n 2^{−K(x↾n)+n/2} < ∞.

Proof The proof is an adaptation of that of Theorem 4.4. First take a rational a such that 0 < a; we shall prove the claim for all x > a, which is enough since a is arbitrary. For each string σ consider, as in Theorem 4.4, the interval I_σ = [0.σ, 0.σ + 2^{−|σ|}] and the event E_σ: B has a positive and a negative value in I_σ. Now, consider the function t defined on C[0, 1] by t(B) = Σ_{σ : 0.σ > a} 2^{−K(σ)+|σ|/2} · 1_{E_σ}(B). The event E_σ is a Σ⁰₁ subset of C[0, 1], uniformly in σ; thus the function t is lower semi-computable. Moreover, the probability of E_σ is O(2^{−|σ|/2}) by Proposition 3.1 (the multiplicative constant depending on a). Thus the integral of t is bounded, and therefore t is an integrable test (see [9]). Let now B be a Martin-Löf random path and suppose B(x) = 0 for some x > a. Then for almost all n, a < 0.(x ↾ n). Moreover, for every n, B having a zero in I_{x↾n}, it must in fact have a positive and a negative value on that interval (by Proposition 2.8). Thus, by definition of t, t(B) + O(1) ≥ Σ_n 2^{−K(x↾n)+n/2} (the O(1) accounts for the finitely many terms such that a ≥ 0.(x ↾ n)). But since B is Martin-Löf random and t is an integrable test, we have t(B) < ∞, which proves our result.
This theorem shows in particular that if x is the zero of some Martin-Löf random path, then K(x ↾ n) − n/2 → +∞.
We now give a sufficient condition, which is in fact very close to our necessary condition.

Proposition 4.9 Let α > 0 and let f : N → N be a function such that Σ_n 2^{-f(n)} < ∞. Let µ be a Borel measure on [0, 1] such that µ(A) ≤ 2^{-αn-f(n)} for every interval A of length at most 2^{-n}. Then µ has finite α-energy.
Proof Recall that the α-energy of µ is ∫∫ |x - y|^{-α} dµ(x) dµ(y). For now, let us fix some x. Define for all n the set I_n = {y : 2^{-n-1} < |x - y| ≤ 2^{-n}}, which is covered by two intervals of length 2^{-n-1}, so that µ(I_n) ≤ 2 · 2^{-α(n+1)-f(n+1)}. On I_n we have |x - y|^{-α} ≤ 2^{α(n+1)}, hence

∫ |x - y|^{-α} dµ(y) ≤ Σ_n 2^{α(n+1)} µ(I_n) ≤ 2 Σ_n 2^{-f(n+1)} < ∞,

a bound which does not depend on x. Integrating over x against the finite measure µ, it follows that µ has finite α-energy.

Theorem 4.10 Let f : N → N be a nondecreasing computable function such that f(n + 1) ≤ f(n) + 1 for all n, and such that Σ_n 2^{-f(n)} < ∞. Let x be a real such that KM(x↾n) ≥ n/2 + f(n) + O(1). Then x is a zero of some Martin-Löf random path.
Proof Let f be such a function and x such a real. By a result of Reimann [24, Theorem 14], there exists a measure µ such that µ(A) ≤ 2^{-n/2-f(n)+O(1)} for all intervals A of length at most 2^{-n}, and such that x is Martin-Löf random with respect to µ. By Proposition 4.9 (applied with α = 1/2), µ has finite 1/2-energy. The rest of the proof is identical to the proof of Theorem 4.5.

Proof Fix a 'large enough' integer m, which we will implicitly define during the construction. We build the sequence x by blocks of length m. For m large enough, the empty string has complexity less than 3 log m. Suppose we have already constructed a prefix σ of x such that |K(σ↾n) - αn + f(n)| ≤ 3 log m for all multiples n ≤ |σ| of m. Pick a string τ of length m such that the same inequality holds at n = |στ|, i.e., |K(στ) - α|στ| + f(|στ|)| ≤ 3 log m; such a τ exists provided m is large enough.

We can finally prove the promised theorem.

Proof For B(t) a standard planar Brownian motion, the probability that B(t) hits an ε-ball around a point (x, y) ≠ (0, 0), for ε < √(x² + y²), is equal to the probability that a planar Brownian motion started at radius √(x² + y²) = R hits an ε-ball around zero, by the radial symmetry of planar Brownian motion. The radial part of d-dimensional Brownian motion is the Bessel process of order ν, where d = 2ν + 2, and is well understood. In the planar case we are thus concerned with the Bessel process of order zero.
Let τ_{R,ε} be the first time the Bessel process of order zero started at R hits ε. By a result of Hamana and Matsumoto [10], P(τ_{R,ε} ≤ 1) can be expressed as an explicit integral whose integrand is built from the function

L_{0,R/ε}(x) = (I_0(Rx/ε) K_0(x) − I_0(x) K_0(Rx/ε)) / ((K_0(x))² + π² (I_0(x))²),

where I_0 and K_0 are the modified Bessel functions. These functions are computable, because all the component pieces (cosine, square root, exponentiation, multiplication, and division) are computable, and the integral of a computable function is computable; see the book by Weihrauch [28] for more details. Moreover, this integral goes to zero as ε goes to zero, which is most easily seen using a classical result of Spitzer [27]: as ε → 0, log(1/ε) · P(τ_{R,ε} ≤ 1) converges to a finite constant. Since log(1/ε) → ∞, it follows that P(τ_{R,ε} ≤ 1) → 0. Thus we have a Schnorr test relative to the point (x, y), so before time 1, a Martin-Löf random path B(t) will only pass through points (x, y) such that the path B (or a code for the path) is not random relative to (x, y). The argument is the same for any finite time, not just time 1, so the statement of the theorem holds.
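Spitzer's asymptotic can be illustrated numerically. The following sketch is an illustration only, not part of the proof: the helper name `hit_prob` and all parameters are our own, and a Gaussian random walk stands in for true Brownian motion. It estimates, by crude Monte Carlo, the probability that a planar path started at the origin enters an ε-ball around a fixed point before time 1, and the estimate shrinks as ε does.

```python
import math
import random

def hit_prob(target, eps, n_paths=800, n_steps=800, seed=0):
    """Crude Monte Carlo estimate of P(planar BM started at 0 enters the
    eps-ball around `target` before time 1), via a Gaussian random walk."""
    rng = random.Random(seed)
    sd = math.sqrt(1.0 / n_steps)  # standard deviation of one increment
    hits = 0
    for _ in range(n_paths):
        x = y = 0.0
        for _ in range(n_steps):
            x += rng.gauss(0.0, sd)
            y += rng.gauss(0.0, sd)
            if math.hypot(x - target[0], y - target[1]) < eps:
                hits += 1
                break
    return hits / n_paths

p_large = hit_prob((0.5, 0.0), eps=0.25)
p_small = hit_prob((0.5, 0.0), eps=0.05, seed=1)
print(p_large, p_small)  # the hitting probability shrinks with eps
```

The decay is slow (logarithmic in 1/ε, as Spitzer's result predicts), so the two probabilities differ by a modest factor rather than by orders of magnitude.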

Corollary 5.4
For B a Martin-Löf random planar path, the graph of B has zero area.
Proof Only a Lebesgue measure zero set of points derandomizes any particular real, so any Martin-Löf random path hits only a set of points of Lebesgue measure zero.

Corollary 5.5
For any point (x, y) ≠ (0, 0), only measure zero many Brownian paths hit (x, y) (almost surely, Brownian motion does not hit a given point).
Proof A real derandomizes only Lebesgue measure zero many reals.

Corollary 5.6 At any time t > 0, for B(t) a standard planar Martin-Löf random Brownian motion, B does not pass through any computable point.
Proof A Martin-Löf random path is always random relative to a computable point.

Dirichlet Problem
The Dirichlet problem asks the following question: given a domain (i.e., connected open set) U ⊆ R^n and a function φ defined everywhere on the boundary ∂U of U, is there a unique continuous function u such that u is harmonic on the interior of U and u = φ on ∂U? The Dirichlet problem arises whenever one considers notions of potential: for example, the problem may be thought of as finding the temperature of the interior of a heat-conducting region for which the temperature on the boundary is known, or alternatively, finding the electric potential on the interior of a region for which the charge on the boundary is known.
These physical interpretations of the problem make it clear that there should be a unique solution, and indeed, many ways of finding this unique solution are known. One method of solving the Dirichlet problem, arising from an intuition of heat diffusion in a heat-conducting substance, uses the mathematical model of Brownian motion [14].

Theorem 5.8 (Kakutani) Suppose U ⊂ R^d is a bounded domain such that every boundary point satisfies the Poincaré cone condition, and suppose φ is a continuous function on ∂U. Let τ(∂U) = inf{t > 0 : B(t) ∈ ∂U}, which is an almost surely finite stopping time. Then the function u : U → R given by

u(x) = E_x[φ(B(τ(∂U)))]

is the unique continuous function harmonic on U with u(x) = φ(x) for all x ∈ ∂U.
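Kakutani's representation immediately suggests a randomized method for evaluating u. As a hedged sketch (not the construction used in this paper; the helper name `walk_on_spheres` and all parameters are our own), the following routine estimates u(x_0, y_0) = E[φ(B(τ))] on the unit disk by the walk-on-spheres method. By the mean value property of harmonic functions, each uniform jump to a maximal inscribed circle preserves the expectation, so for φ the restriction of the harmonic function u(x, y) = x the estimate should be close to x_0.

```python
import math
import random

def walk_on_spheres(x0, y0, phi, n_paths=20000, tol=1e-3, seed=0):
    """Estimate u(x0, y0) = E[phi(B(tau))] for the unit disk: from the
    current point, jump uniformly to the largest circle contained in the
    disk, and stop once within `tol` of the boundary."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        x, y = x0, y0
        while True:
            r = 1.0 - math.hypot(x, y)   # distance to the unit circle
            if r < tol:
                break
            theta = rng.uniform(0.0, 2.0 * math.pi)
            x += r * math.cos(theta)
            y += r * math.sin(theta)
        norm = math.hypot(x, y)          # project the stop point to the boundary
        total += phi(x / norm, y / norm)
    return total / n_paths

# phi(x, y) = x is the boundary restriction of the harmonic function
# u(x, y) = x, so the exact solution at (0.3, 0) is 0.3.
est = walk_on_spheres(0.3, 0.0, lambda x, y: x)
print(est)
```

Walk-on-spheres terminates after O(log(1/tol)) jumps on average in a smooth domain, which is why it is a popular Monte Carlo scheme for the Dirichlet problem.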
By relativizing Corollary 3.7, we can use the layerwise computability of the hitting time of Martin-Löf random Brownian motion on a computable line to show that the solution to the Dirichlet problem is computable in the planar case when the boundary and the condition on the boundary are both computable. Of course, we first need to specify what we mean by that. For example, even assuming that the boundary is a curve (which it might not be: think for example of an open disk with a smaller disk inside removed), there are several notions of computable curve we could take; see [26]. We will take a very general notion of computability (in the case of curves, it is the most general studied in [26]): we assume that ∂U is computable in the sense that there exists a computable sequence (C_n)_{n∈N} such that for all n, C_n is a finite set of squares in the 2-dimensional grid 2^{-n}Z × 2^{-n}Z whose union is connected, contains ∂U, and is such that every point inside this union of squares is at distance at most 2^{-n+2} from the boundary.
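As a concrete illustration of this notion of boundary computability (our own example: a circle stands in for ∂U, and the helper name `boundary_cover` is hypothetical), the following sketch computes the set of grid squares meeting a circle. Every point of such a square is then within 2^{-n}√2 < 2^{-n+2} of the boundary, as the definition requires.

```python
import math

def boundary_cover(cx, cy, r, n):
    """Squares of the grid 2^-n Z x 2^-n Z that meet the circle of radius r
    centered at (cx, cy): a stand-in for the cover C_n of a computable
    boundary, here a circle for illustration."""
    h = 2.0 ** -n
    cover = set()
    for i in range(round(1 / h)):
        for j in range(round(1 / h)):
            # distance from (cx, cy) to the nearest point of the square
            dx = max(i * h - cx, cx - (i + 1) * h, 0.0)
            dy = max(j * h - cy, cy - (j + 1) * h, 0.0)
            dmin = math.hypot(dx, dy)
            # distance from (cx, cy) to the farthest corner of the square
            fx = max(abs(i * h - cx), abs((i + 1) * h - cx))
            fy = max(abs(j * h - cy), abs((j + 1) * h - cy))
            dmax = math.hypot(fx, fy)
            if dmin <= r <= dmax:   # the square meets the circle
                cover.add((i, j))
    return cover

c4 = boundary_cover(0.5, 0.5, 0.3, 4)
print(len(c4))
```

The square (i, j) meets the circle exactly when the circle's radius lies between the nearest and farthest distances from the center to the square, which is what the `dmin <= r <= dmax` test checks.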
To formalise the fact that the condition φ is computable, we assume that there is a uniformly computable family (φ n ) n∈N , where each φ n is a function which assigns a real value to each square c, in such a way that this value is within ε(n) of the values of φ on ∂U ∩ c, and the values of two adjacent squares are within ε(n) of each other, ε being a computable function which tends to 0 computably in n.
Theorem 5.9 (Computable Dirichlet Problem) Let U be a bounded domain whose boundary ∂U satisfies the Poincaré cone condition and φ a condition on the boundary. Assume ∂U and φ are computable in the sense described above. Then the solution to the Dirichlet problem -the unique, continuous function u : U → R harmonic on U such that u(x) = φ(x) for all x ∈ ∂U -is computable.
The rest of the section is devoted to proving this result. The plan is to prove the theorem in two steps: (i) First, we prove it in the particular case where ∂U is 'squared', i.e., is made of a finite number of vertical and horizontal (i.e., parallel to the x-axis or y-axis) segments with rational endpoints, the list being given explicitly. As we will see, in this case we can apply the results of the previous sections to compute the first time a Martin-Löf random path hits the boundary.
(ii) Then we extend it to general computable boundaries by approximation. That is, we approximate ∂U by a squared boundary with arbitrary precision and apply Step (i).
Let us first see how to apply the results of the previous section to planar Brownian motion.
Lemma 5.10 For B(t) a Martin-Löf random planar Brownian motion started at a computable point, the first time B(t) hits a given computable line parallel to the x-axis or the y-axis is layerwise computable in B(t).
Proof Without loss of generality, say we are looking for the first time that X(t) = α, where B(t) = (X(t), Y(t)), α is computable, and B(t) is started at q = (q_x, q_y) ∈ Q². This is equivalent to looking for the first time that X′(t) = X(t) − q_x, a standard 1-dimensional Brownian motion, crosses α − q_x, which has exactly the same proof as Corollary 3.7 above.

Lemma 5.11
For B(t) a Martin-Löf random planar Brownian motion started at a computable point, the first time B(t) passes through a vertical or horizontal line segment with computable endpoints is layerwise computable in B(t).
Proof Without loss of generality, assume the segment is horizontal and lies on the line y = α. To layerwise computably find the first crossing time through the line segment, we run the following algorithm. Let r_0 = 0 be the first time considered. The first crossing of B(t) through the line y = α after r_0 is layerwise computable in B; call this time t_1. If B(t_1) falls within the line segment, we are done.
Otherwise, B(t_1) crosses the line away from the line segment; call its distance from the line segment ε_1 > 0. In order for B(t) = (X(t), Y(t)) to hit the line segment after t_1, X(t) must change by more than ε_1. By Proposition 2.4, we can find an h_1, layerwise in X(t), such that this does not occur in (t_1, t_1 + h_1). We choose r_1 ∈ (t_1 + h_1/2, t_1 + h_1) to be any rational time, and then continue the algorithm by finding the next crossing time t_2 > r_1 through the line y = α.
Because the line segment has computable endpoints, B(t) will not pass through an endpoint of the segment, by Corollary 5.6. This tells us that before hitting the line segment, B(t) stays at some distance ε_L > 0 from the endpoints of the segment: it crosses the line no closer than ε_L to an endpoint. As above, this ε_L is associated with a time h_L within which X(t) will not cross the line segment. As each ε_n ≥ ε_L, each h_n ≥ h_L > 0, so we increment our time step by at least h_L/2 at each stage. Therefore we take time steps small enough that we do not miss the first crossing time, but these steps remain bounded away from 0, so we must eventually find the first crossing time of B(t) through the line segment.
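The crossing search above can only be illustrated on finite data. The following sketch (our own, with `first_segment_crossing` a hypothetical helper) runs the analogous search on a piecewise-linear sample path: it skips crossings of the line y = α that land outside the segment and returns the first one that lands on it, assuming linear interpolation between samples.

```python
def first_segment_crossing(path, alpha, a, b):
    """First time a piecewise-linear path (a list of (t, x, y) samples)
    crosses the horizontal segment {y = alpha, a <= x <= b}.
    Returns the interpolated crossing time, or None if there is none."""
    for (t0, x0, y0), (t1, x1, y1) in zip(path, path[1:]):
        d0, d1 = y0 - alpha, y1 - alpha
        if d0 == 0.0 or d0 * d1 < 0.0:         # the line y = alpha is crossed
            s = 0.0 if d0 == 0.0 else d0 / (d0 - d1)
            xc = x0 + s * (x1 - x0)            # x-coordinate of the crossing
            if a <= xc <= b:                   # the crossing lands on the segment
                return t0 + s * (t1 - t0)
    return None

# The path crosses y = 0 three times; only the third crossing (at x = 0.5)
# lands on the segment {y = 0, 0 <= x <= 1}.
path = [(0.0, 1.5, -1.0), (1.0, 2.5, 1.0), (2.0, 0.4, -1.0), (3.0, 0.6, 1.0)]
print(first_segment_crossing(path, 0.0, 0.0, 1.0))  # → 2.5
```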
We can now prove our theorem in the restricted case of an explicitly given squared boundary.

Proposition 5.12
If U is a planar region such that ∂U is an explicitly given squared boundary and φ is a computable function on ∂U , then the solution to the Dirichlet problem is computable for U .
Proof By Lemma 5.11, the first hitting times of each line segment are computable uniformly in the starting point x and layerwise in B, and ∂U is composed of finitely many line segments with computable endpoints, so the first hitting time τ_B(∂U) of the boundary is layerwise computable in B, uniformly in the starting point. Since φ is computable, φ(B(τ_B(∂U))) is computable uniformly in the starting point x and layerwise in B.
By Theorem 1.2, the expression E_x[φ(B(τ_B(∂U)))], for x ∈ U, is computable uniformly in x, and by Kakutani's classical result (Theorem 5.8), this is the solution to the Dirichlet problem.
Now, all we need to do is extend this last proposition to the general case.
Proof of Theorem 5.9 Let u be the solution of the Dirichlet problem for the condition φ on ∂U (we do not yet know that it is computable, but we know it exists by the classical theorem). Given a point x ∈ U, we first compute, for all n, the approximation C_n of ∂U made of squares of the grid 2^{-n}Z × 2^{-n}Z. Compute the largest set Q_n of squares of 2^{-n}Z × 2^{-n}Z which (a) contains the square c containing x, (b) does not contain any square of C_n, and (c) is 4-connected, i.e., every square of Q_n shares an edge with another member of Q_n (unless Q_n contains only one square). Call V_n the interior of the union of the squares in Q_n. Observe that V_n must be contained in U, since it contains a point of U, is connected, and is disjoint from ∂U (if it were not contained in U, then V_n \ Ū and U ∩ V_n would be two non-empty open sets partitioning V_n, contradicting its connectedness). Observe also that each segment of ∂V_n must be the edge of a square c ∈ C_n, so we can compute a condition ψ on ∂V_n which is equal to φ_n(c) on the edge of c (up to smoothing it out around corners to ensure continuity).
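The set Q_n described above is a 4-connected component of the unblocked grid squares, which can be computed by a standard flood fill. A minimal sketch (the helper name `interior_component` and the toy 5 × 5 grid are our own):

```python
from collections import deque

def interior_component(blocked, start, n):
    """4-connected flood fill on the n x n grid of squares: returns the set
    of squares reachable from `start` without entering any square of
    `blocked` (the boundary cover C_n)."""
    if start in blocked:
        return set()
    seen = {start}
    queue = deque([start])
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (i + di, j + dj)
            if (0 <= nb[0] < n and 0 <= nb[1] < n
                    and nb not in blocked and nb not in seen):
                seen.add(nb)
                queue.append(nb)
    return seen

# A 5 x 5 grid whose middle row is blocked: the two halves are separated,
# so only the 10 squares of rows 0 and 1 are reachable from (0, 0).
blocked = {(2, j) for j in range(5)}
region = interior_component(blocked, (0, 0), 5)
print(len(region))  # → 10
```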
Claim. For every point z ∈ ∂V_n, |ψ(z) − u(z)| ≤ O(ε(n) + 2^{-n}). Indeed, let c be the member of C_n which has z on its edge. Every point of c is at distance at most 2^{-n+2} from the boundary, so there is a square c′ at distance O(2^{-n}) of c which contains some point z′ ∈ ∂U, and the value of φ_n(c′) is within ε(n) of the value of u(z′). Thus,

|ψ(z) − u(z)| ≤ |ψ(z) − φ_n(c′)| + |φ_n(c′) − u(z′)| + |u(z′) − u(z)| ≤ O(ε(n)) + ε(n) + O(2^{-n})

(for the last term, we use the fact that |z′ − z| = O(2^{-n}) and the fact that u is harmonic, hence Lipschitz), the constants in the O-notation not depending on n, z′, or z. To be precise, we should also add the possible error induced by the 'smoothing around corners', but it is itself bounded by O(ε(n) + 2^{-n}), since the φ_n-values of two adjacent segments of ∂V_n are O(ε(n) + 2^{-n})-close to each other. Thus, applying the restricted case of our theorem (Proposition 5.12) to ψ and V_n, we can compute the value v_n(x) of the solution to the Dirichlet problem on V_n with condition ψ. Since v_n = ψ on ∂V_n, the difference |v_n − u| is bounded by O(ε(n) + 2^{-n}) on ∂V_n, and hence, by the maximum principle (v_n − u being harmonic), on all of V_n. Thus we have effectively obtained an approximation of u(x) with precision O(2^{-n} + ε(n)), uniformly in n and x, which means that u is computable.