Exit Times for a Discrete Markov Additive Process

In this paper we consider (upward skip-free) discrete-time and discrete-space Markov additive chains (MACs) and develop the theory for the so-called $\tilde{W}$ and $\tilde{Z}$ scale matrices, which are shown to play a vital role in the determination of a number of exit problems and related fluctuation identities. The theory developed in this fully discrete setup follows similar lines of reasoning to the analogous theory for Markov additive processes in continuous time and is exploited to obtain the probabilistic construction of the scale matrices, identify the form of their generating functions, produce a simple recursion relation for $\tilde{W}$, and establish its connection with the so-called occupation mass formula. In addition to the standard one- and two-sided exit problems (upwards and downwards), we also derive distributional characteristics for a number of quantities related to the one- and two-sided 'reflected' processes.


Introduction
Exit problems for stochastic processes are a classic topic in applied probability and have received a great deal of attention in the literature. In the continuous setting (time and space), exit problems for so-called upward skip-free processes, known in the literature as 'spectrally negative Lévy processes', have been extensively considered in [5] (Chapter VII), [18] (Chapter 8) and references therein, by means of fluctuation theory, where semi-explicit expressions are derived in terms of the so-called 'scale functions'. On the other hand, in the fully discrete setting, exit problems for general discrete-time random walks are treated excellently in [10] and [13], among others, by means of probabilistic arguments and include, as particular cases, the corresponding upward skip-free random walks, that is, random walks for which downward jumps are unrestricted but upward jumps are constrained to a magnitude of at most one, emulating the upward 'drift' in continuous time. More recently, [3] implemented the ideas underlying the exit problems for continuous spectrally negative Lévy processes for their discrete random walk counterparts and derived exit problems and other fluctuation identities in terms of analogous 'discrete scale functions'.
A natural generalisation of the above processes is the broad family of Markov additive processes (MAPs), which incorporate an externally influencing Markov environment, providing greater flexibility in the characteristics of the underlying process in terms of its claim frequency and severity distributions; see [1] (Chapter XI). Within this generalised framework, the existence of multidimensional scale functions, known as 'scale matrices', was first discussed in [19], where they were used to derive fluctuation identities and first passage results for continuous-time MAPs. [15] extended the initial findings of [19] by providing the probabilistic construction of the scale matrices, identifying their transforms and carrying out an extensive study of exit problems, including one-sided and two-sided exits, as well as exits for reflected processes via the occupation density formula. Further studies on MAPs and their exit/passage times can be found in [8], [4] and [9], among others. More recently, [17] derived and compared results for continuous-time MAPs with lattice (discrete-space) and non-lattice support. It is worth noting that the authors of that work do discuss some of the corresponding results for the fully discrete (time and space) MAP model considered in this paper; however, only a limited number of results are stated and a variety of important steps and proofs are omitted.
This paper bridges the gap between the aforementioned works and provides a theoretical framework for fully discrete, upward skip-free MAPs in terms of 'discrete scale matrices', spelling out the differences in results, methodologies and necessary adjustments for deriving fluctuation identities between discrete and continuous MAPs. In particular, we derive results for the first passage theory, including one- and two-sided exit problems, as well as the under(over)-shoots upon exit via the associated 'reflected' process. The motivation for deriving such a framework comes from the discrete setup having known advantages over continuous-time models. For example, it is known that the Wiener-Hopf factorisation can be replaced by a simple Laurent series (see [3]). Moreover, due to the equivalence between a discrete MAP and a Markov-modulated random walk, this paper provides a more flexible random walk model and enriches the numerous applications of random walk theory across a variety of disciplines.
The paper is organised as follows: In Section 2, we define the MAP in discrete time and space and derive the so-called occupation mass matrix formula, from which we obtain some useful identities to be used in the following sections. In Section 3, we introduce some fundamental matrices associated with the discrete MAP, identify the first of two discrete scale matrices and derive matrix expressions for the one- and two-sided upward exit problems. In Section 4, we derive results for the corresponding one- and two-sided reflected processes, including the over-shoot and under-shoot upon exit, which are then used in Section 5 to derive expressions for the one- and two-sided downward exit problems of the original (non-reflected) discrete MAP.

Preliminaries
A fully discrete (time and space) MAP, which we will call a Markov additive chain (MAC), is defined as a bivariate discrete-time Markov chain (X, J) = {(X_n, J_n)}_{n≥0} on the product space Z × E, where X_n ∈ Z describes the level of the underlying process, whilst J_n ∈ E = {1, 2, . . ., N} describes the phase of some external Markov chain (which affects the dynamics of X_n) having transition probability matrix P, such that for i, j ∈ E, (P)_{ij} = P(J_1 = j | J_0 = i). It is assumed throughout this work that the Markov chain {J_n}_{n≥0} is ergodic, so that its stationary distribution π^⊤ = (π_1, . . ., π_N) exists and is unique. The defining property of the MAC is the conditional independence and stationarity of the law governing X_n, given J_n. That is, given {J_T = i} for some fixed T ∈ N, the chain {(X_{T+n} − X_T, J_{T+n})} is independent of F_T (the natural filtration to which the bivariate process (X, J) is adapted) and {(X_{T+n} − X_T, J_{T+n})} is equal in distribution to {(X_n − X_0, J_n)} given {J_0 = i}, for any phase state i ∈ E. This is known as the Markov additive property, a consequence of which is that the level process {X_n}_{n≥0} is translation invariant on the lattice.
Intuitively, the MAC is simply a Markov-modulated random walk, where {X_n}_{n≥0} evolves according to a sequence of conditionally i.i.d. random variables with common conditional distribution q_{ij}(y) = P(Y_1 = y | J_1 = j, J_0 = i) and thus probability mass matrix Q(y), with i,j-th element Q(y)_{ij} = q_{ij}(y). As such, and due to the invariance property, the transition probability matrix of (X, J) has a block-like structure with blocks A_m = P • Q(m), where • denotes entry-wise products (Hadamard multiplication); the block A_m represents the (one-step) transition matrix for an increase of m levels in {X_n}_{n≥0} whilst capturing the phase transitions of {J_n}_{n≥0}. In the remainder of this paper, we assume that X = {X_n}_{n≥0} may increase by at most one level per unit time whilst experiencing downward jumps of arbitrary size. That is, for all i, j ∈ E, we have q_{ij}(m) = P(Y_1 = m | J_1 = j, J_0 = i) ≥ 0 for m ≤ 1 and q_{ij}(m) = 0 otherwise, which leads to Q(m) = 0 and thus A_m = 0 for m = 2, 3, . . .. In this sense, we say that X possesses an 'upward skip-free' property, an advantage of which is that the value of X is known at any stopping time corresponding to the 'upward' crossing/hitting of a given level (see below). This is the discrete analogue of a 'spectrally negative' MAP in the continuous setting, which has important applications to workload and surplus processes in queueing and risk theory, respectively (see [1] and [2] for more details).
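As the construction above is purely mechanical (draw a phase transition, then a conditionally distributed level increment of at most +1), it can be sketched directly. The two-phase parameters below are hypothetical, chosen only to satisfy the skip-free constraint, and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-phase example: phase transition matrix P and conditional
# jump laws q_ij(m), supported on m <= 1 (upward skip-free).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
support = np.array([1, 0, -1, -2])   # upward moves of at most one level
# q[i, j, k] = P(Y = support[k] | J_0 = i, J_1 = j)
q = np.array([[[0.5, 0.3, 0.1, 0.1], [0.4, 0.4, 0.1, 0.1]],
              [[0.6, 0.2, 0.1, 0.1], [0.3, 0.3, 0.2, 0.2]]])

def simulate_mac(n_steps, j0=0, x0=0):
    """Simulate (X_n, J_n): first draw the phase transition, then the
    level increment from the conditional law q_{ij}."""
    X, J = [x0], [j0]
    for _ in range(n_steps):
        i = J[-1]
        j = rng.choice(2, p=P[i])
        y = support[rng.choice(4, p=q[i, j])]
        J.append(j)
        X.append(X[-1] + y)
    return np.array(X), np.array(J)

X, J = simulate_mac(1000)
# upward skip-free: one-step increases never exceed one level
assert np.diff(X).max() <= 1
```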

MAC Matrix Generator
It has already been noted that the dynamics of the level process (X) within the MAC depend on the phase transitions of the external Markov chain (J). As such, the majority of quantities and results presented in this paper depend on the initial and final states of {J_n}_{n≥0} and thus are given in matrix form. With this in mind, let us define the expectation matrix operator E_x(· ; J_n), which denotes an N × N matrix with i,j-th element E_x(· 1_{(J_n = j)} | X_0 = x, J_0 = i), where 1_A denotes the indicator function of the event A, with corresponding probability matrix P_x(· , J_n). We can then define the probability generating matrix (p.g.m.) of the process {X_n}_{n≥0} with initial level X_0 = 0, for |z| ≤ 1 with z ≠ 0, by E(z^{−X_n}; J_n), which satisfies E(z^{−X_n}; J_n) = F(z)^n, where F(z) := E(z^{−X_1}; J_1) = Σ_{m=−∞}^{1} z^{−m} A_m, and for z = 1 we have F(1) = P. Remark 1. Note that since the matrices A_{−m} are probability transition matrices, such that A_{−m} ≥ 0 (non-negative), it follows that for z > 0 the matrix F(z) is also non-negative. Hence, by the Perron-Frobenius theorem, F(z) has a (simple) eigenvalue, denoted κ(z), which is greater than or equal in absolute value to all other eigenvalues, with corresponding left and right (column) eigenvectors, denoted v(z) and h(z) respectively, such that v(z)^⊤ F(z) = κ(z) v(z)^⊤ and F(z) h(z) = κ(z) h(z). Moreover, since F(1) = P is a stochastic matrix, standard facts from matrix analysis (see [7]) give κ(1) = 1, and it can be shown that κ′(1) determines the asymptotic drift of the level process {X_n}_{n≥0} (see Section 1.3 in [22] and [11]). Within the theory of continuous-time Lévy processes, it is often desirable to analyse the process prior to some independent exponential 'killing time', as this can emulate the role of Laplace transforms or generating functions within calculations (see [18]). For a MAP, this exponential killing time can alternatively be incorporated via an enlargement of the state space of the Markov chain with the addition of an 'absorbing' (killing) state, analysing the process prior to absorption (see [15] for details).
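The objects F(z) and κ(z), and the fact that κ(1) = 1, can be checked numerically. A minimal sketch with hypothetical blocks A_m (not the paper's example; finite downward support for simplicity), approximating κ′(1) by a finite difference:

```python
import numpy as np

# Hypothetical blocks A_m (m = 1, 0, -1) for a 2-phase upward skip-free MAC;
# they sum to a stochastic matrix P = A_1 + A_0 + A_{-1}.
A = {1: np.array([[0.35, 0.15], [0.20, 0.20]]),
     0: np.array([[0.20, 0.10], [0.15, 0.25]]),
    -1: np.array([[0.10, 0.10], [0.10, 0.10]])}

def F(z):
    """One-step generator F(z) = sum_m z^{-m} A_m (finite support here)."""
    return sum(z ** (-m) * Am for m, Am in A.items())

def kappa(z):
    """Perron-Frobenius eigenvalue of the non-negative matrix F(z), z > 0."""
    return max(np.linalg.eigvals(F(z)).real)

assert np.allclose(F(1.0).sum(axis=1), 1.0)   # F(1) = P is stochastic
assert abs(kappa(1.0) - 1.0) < 1e-12          # kappa(1) = 1

# kappa'(1) by central finite difference; its sign reflects the
# asymptotic drift of the level process.
h = 1e-6
d_kappa = (kappa(1 + h) - kappa(1 - h)) / (2 * h)
```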
In a similar way, let us enlarge the state space E to E ∪ {†}, where † denotes an absorbing state, often called the cemetery state, and set X = ∂ whenever J = †. Moreover, let us assume that the (one-step) 'absorption' probability is the same from all states, denoted 1 − v = P(J_1 = † | J_0 = i) for all i ∈ E, so that the corresponding 'non-absorption' (survival) probability is given by v ∈ (0, 1]. Now, due to the addition of this cemetery state, it is clear that the probability transition matrix for transitions between the 'transient' (when v < 1) states of E depends on v. Let us define this by P(v) ≡ vP, where P denotes the stochastic probability transition matrix defined in Section 2 in the absence of an absorbing state or 'killing' (v = 1). Hence, it follows that P(v)e = vPe = ve and thus, for v < 1, P(v) is sub-stochastic and its Perron-Frobenius eigenvalue is less than 1 (see [7]). Finally, it follows that the absorption or 'killing' time of the Markov chain, denoted g_v = inf{n > 0 : J_n = †}, is geometrically distributed with parameter v ∈ (0, 1], and the corresponding killed generator is F_v(z) = vF(z), with F(z) denoting the matrix generator of the MAC in the absence of killing, as defined in Eq. (2.2). The connection between the killed process and transforms/generating functions of the non-killed process is evident when we note that Eq. (2.3) is equivalent to E(v^n z^{−X_n}; J_n) for a 'non-killed' MAC. Further advantages of working with the killed process are discussed in more detail in later sections.
Throughout the remainder of this paper, we generally suppress the explicit notation that absorption has not yet occurred but point out that it is assumed implicitly.As such, the results derived in the following are, in fact, much more general than they appear, with only a handful of these generalisations being stated explicitly.

Occupation Times
It is well known that occupation times and their densities play an important role within the theory of Lévy processes and their fluctuations. In a continuous environment, the definition of the occupation density/time of a process at a given level has to be treated with some care and detail (see [5], [15]); however, in the fully discrete model considered in this paper, the mathematical definition is intuitive. Let us define by L(x, j, n) the occupation mass, denoting the number of periods the process {(X_n, J_n)}_{n≥0} spends in state (x, j) ∈ Z × E up to and including time n ≥ 0, such that L(x, j, n) = Σ_{k=0}^{n} 1_{(X_k = x, J_k = j)}. Then, for some measurable non-negative function f, we have the so-called discrete occupation mass formula Σ_{k=0}^{n} f(X_k, J_k) = Σ_{x∈Z} Σ_{j∈E} f(x, j) L(x, j, n). From the above definition, it is clear that L(x, j, n) is a non-decreasing (monotone) process in n ≥ 0, which is adapted to the natural filtration F_n. Let us further define the N-dimensional square occupation mass matrix, denoted L(x, n), with i,j-th element given by L(x, n)_{ij} = E(L(x, j, n) | J_0 = i). Then, by application of the strong Markov property, analogously to Proposition 8 in [15], we have the following proposition.
Proposition 1. Let τ_x := inf{n ≥ 0 : X_n = x} denote the first 'hitting' time of the level x ∈ Z. Then, for the occupation mass matrix, L(x, ∞) = P(τ_x < ∞, J_{τ_x}) L, where (P(τ_x < ∞, J_{τ_x}))_{ij} = P(τ_x < ∞, J_{τ_x} = j | J_0 = i) and L := L(0, ∞) is the occupation mass matrix at the level 0 over an infinite-time horizon, which has strictly positive entries.

Remark 2. Let us point out some of the advantages of working with the killed process at this point:
(i) If we include the implicit killing in the calculations explicitly, then for v ∈ (0, 1], the probability P(τ_x < ∞, J_{τ_x}) becomes E(v^{τ_x}; J_{τ_x}), where we have used the fact that P(v) = vP. That is, the probability matrix P(τ_x < ∞, J_{τ_x}) becomes the generating matrix E(v^{τ_x}; J_{τ_x}) if one imposes 'killing' explicitly. As mentioned above, throughout this work we will keep killing implicit, as this greatly simplifies the presentation, but we highlight that the above idea holds for all results.
(ii) Similarly, by superimposing killing in Proposition 1, we obtain the analogous identity with P(τ_x < ∞, J_{τ_x}) replaced by E(v^{τ_x}; J_{τ_x}). The main reason for introducing the theory of occupation times and their associated mass matrices is their relationship with the one-step p.g.m. F(z). This connection is highlighted in the following auxiliary theorem, which provides the foundation for many of the results in the following sections.
Theorem 1. For z ∈ (0, 1] such that (I − F(z))^{−1} exists, we have (I − F(z))^{−1} = Σ_{x∈Z} z^{−x} P(τ_x < ∞, J_{τ_x}) L, where τ_x is the first hitting time of the level x ∈ Z.
Proof. First note, by the occupation mass formula, that for any j ∈ E we have Σ_{k=0}^{n} z^{−X_k} 1_{(J_k = j)} = Σ_{x∈Z} z^{−x} L(x, j, n). Taking expectations in the above equation and considering the limit as n → ∞ yields Σ_{k=0}^{∞} E(z^{−X_k} 1_{(J_k = j)} | J_0 = i) = Σ_{x∈Z} z^{−x} L(x, ∞)_{ij}, where, since z^{−x} is non-negative for z > 0, we can apply the monotone convergence theorem. Equivalently, in matrix form the above expression can be written as Σ_{k=0}^{∞} F(z)^k = Σ_{x∈Z} z^{−x} L(x, ∞) = Σ_{x∈Z} z^{−x} P(τ_x < ∞, J_{τ_x}) L, where the last equality comes from the result of Proposition 1. Finally, we note that the geometric series on the l.h.s. converges to (I − F(z))^{−1} as long as κ(z) < 1, and the result follows using analytic continuation to extend the domain of convergence to all z ∈ (0, 1] such that (I − F(z))^{−1} exists.
Remark 3. Note that the result of Theorem 1 holds in the presence of killing (v < 1), since P(v) is sub-stochastic and thus κ_v(1) < 1, where κ_v(z) is the Perron-Frobenius eigenvalue of F_v(z). Hence, by continuity of κ_v(z), there exists a small interval around z = 1 for which κ_v(z) < 1. In addition, L must have finite entries, as under killing the Markov chain is transient and the expected number of visits to any state is finite.
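The geometric-series step in the proof of Theorem 1 is easy to verify numerically under killing, where κ_v(1) = v < 1 guarantees convergence; the blocks below are hypothetical and not taken from the paper:

```python
import numpy as np

# Hypothetical 2-phase blocks with killing v < 1, so that the killed
# generator F_v(z) = v * sum_m z^{-m} A_m has Perron eigenvalue < 1 at z = 1.
A = {1: np.array([[0.35, 0.15], [0.20, 0.20]]),
     0: np.array([[0.20, 0.10], [0.15, 0.25]]),
    -1: np.array([[0.10, 0.10], [0.10, 0.10]])}
v = 0.95

def Fv(z):
    return v * sum(z ** (-m) * Am for m, Am in A.items())

M = Fv(1.0)
assert max(abs(np.linalg.eigvals(M))) < 1     # kappa_v(1) = v < 1

# Neumann series sum_k F_v(z)^k converges to (I - F_v(z))^{-1}
series = np.zeros_like(M)
term = np.eye(2)
for _ in range(2000):
    series += term
    term = term @ M
assert np.allclose(series, np.linalg.inv(np.eye(2) - M))
```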

Upward Exit Problems
In this section we discuss and derive results on exit problems for upward skip-free MACs above and below a fixed level or strip. In the first instance, we will utilise the upward skip-free property of the level process {X_n}_{n≥0} to determine expressions for upward exit times (one- and two-sided), and then extend the theory to consider downward exit problems. These expressions are given in terms of the so-called fundamental and scale matrices associated with the MAC, where the existence of the latter was first discussed in [19], extending the notion of scale functions associated with Lévy processes (see [18] and [3] for more details).
All the results given in this section are stated for an initial level X_0 = 0 which, due to the invariance property, can be generalised to an arbitrary level, say x_0 ∈ Z, via an appropriate shift.
Let us denote by τ_x^± the first time the level process {X_n}_{n≥0} up(down)-crosses the level x ∈ Z, such that τ_x^+ = inf{n ≥ 0 : X_n ≥ x} and τ_x^− = inf{n ≥ 0 : X_n < x}. We note that in a 'spectrally negative' MAC, with upward movements of at most one per unit time, for x ≥ X_0 the random stopping times τ_x^+ (crossing time) and τ_x (hitting time) coincide. Moreover, we have that X_{τ_x^+} = X_{τ_x} = x.

One-Sided Exit Upward
The key observation for the first passage upwards is that the stationary and independent increments, together with the skip-free property, provide an embedded Markov structure. To see this, recall that X_{τ_1^+} = X_{τ_1} = 1, which, together with the strong Markov and Markov additive properties, implies that the process {J_{τ_n}}_{n≥0} is a (time-homogeneous) discrete-time Markov chain, given X_0 = 0, with some probability transition matrix G, such that for a ≥ 0, P(τ_a < ∞, J_{τ_a}) = G^a, with i,j-th element (G^a)_{ij} = P(τ_a < ∞, J_{τ_a} = j | J_0 = i). Remark 4. In the case of no killing, i.e., v = 1, and κ′(1) ≥ 0 (non-negative drift), the matrix G is a stochastic matrix, and it is sub-stochastic otherwise.
The transition probability matrix G is widely known as the fundamental matrix of the MAC and contains the probabilistic characteristics needed to determine upward passage times and the corresponding phase state at passage. That is, determining the matrix G provides the probability of hitting any upper level a ≥ 0 and the phase of {J_n}_{n≥0} at this hitting time.
The matrix G has a long history in the theory of structured stochastic matrices (see, e.g., Lemma 4.2 in [7]) and can be computed by conditioning on the first time period, i.e., G = Σ_{m=−∞}^{1} A_m G^{1−m}. Multiplying on the right by G^{−1}, assuming it exists (see Remark 7), and using the definition of F(z) given in Eq. (2.2), it follows that the fundamental matrix G is the right solution of F(·) = I, a well-known equation established in [21] and further studied in [7], [22], [11] and [12], among others.
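The first-step conditioning suggests a natural fixed-point iteration for G (essentially the iterative scheme of [21] mentioned in Remark 5 below). A sketch with hypothetical blocks of finite downward support, not the paper's example:

```python
import numpy as np

# Hypothetical blocks A_1, A_0, A_{-1} summing to a stochastic matrix,
# with positive upward drift (so G is stochastic here).
A1 = np.array([[0.35, 0.15], [0.20, 0.20]])
A0 = np.array([[0.20, 0.10], [0.15, 0.25]])
Am1 = np.array([[0.10, 0.10], [0.10, 0.10]])

# Iterate the first-step equation G <- A_1 + A_0 G + A_{-1} G^2:
# up one level directly, stay and retry, or drop one and climb twice.
G = np.zeros((2, 2))
for _ in range(2000):
    G = A1 + A0 @ G + Am1 @ G @ G

# G solves the first-step equation ...
assert np.allclose(G, A1 + A0 @ G + Am1 @ G @ G)
# ... and is stochastic in this positive-drift, no-killing example
assert np.allclose(G.sum(axis=1), 1.0, atol=1e-6)
```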

Remark 5. Let us discuss a few important observations about the fundamental matrix G and its significance within applied probability:
(i) For the continuous-time (scalar) spectrally negative Lévy process, the fundamental matrix G corresponds to the inverse of the Laplace exponent at zero, namely Φ(0), i.e., the solution to ψ(β) = 0, where ψ(β) denotes the Laplace exponent of the Lévy process (see [18]).
(ii) It follows by definition that E(G^{−X_n}; J_n) is a martingale. In fact, it is clear that in the matrix setting there exists another (left) solution to F(·) = I, say R, which would also result in the martingale E(R^{−X_n}; J_n). It turns out that the matrix R is the counterpart of G for the 'time-reversed' MAC and is considered another fundamental matrix. The time-reversed MAP and the corresponding matrix R are considered in [16] for the continuous-time (lattice) case, and we direct the reader to this paper for more details.
(iii) Superimposing killing in the above produces the transform of the first passage time, namely E(v^{τ_a}; J_{τ_a}).
(iv) As discussed in [16], the right solutions of the above equations cannot be determined analytically, except in some special cases. However, there exist a number of numerical algorithms which can be employed, e.g., the iterative algorithm of [21], logarithmic reduction [20] and cyclic reduction [6], to name a few. For further details on the variety of algorithms available for solving such equations, see [7] and references therein.

Two-Sided Exit
Within the literature on spectrally negative Lévy processes and their fully discrete counterparts [3], the common approach to solving two-sided exit problems relies on the introduction of a family of functions, W_q and Z_q, known as the q-scale functions (see [18] for details).
The extension of these auxiliary, one-dimensional scale functions to the multidimensional MAP setting was first proposed in [19], where the existence of the corresponding 'scale matrices' was shown; they were further investigated in [15], where their probabilistic interpretation within the continuous setting was derived.
For v ∈ (0, 1], the discrete W_v scale matrix is defined as the mapping W_v : N → R^{N×N}, with W_v(0) = 0 (the matrix of zeros), such that W_v(n) = (G_v^{−n} − E(v^{τ_{−n}}; J_{τ_{−n}})) L_v, where we write W_1(n) =: W(n) for the 1-scale matrix. The definition of the scale matrix above is unique only up to a multiplicative constant, and the presence of the infinite-time occupation matrix L_v is somewhat arbitrary here, but it is included in order to obtain the most concise form for the p.g.m. of W_v(·), which is derived in Theorem 2 (see also [15]).
In the two-sided exit problem, we are interested in the time of exiting a (fixed) 'strip' [−b, a], consisting of an upper and a lower level denoted by a and −b, respectively, such that a > 0 > −b. More formally, we are interested in the events {τ_a^+ < τ_{−b}^−} and {τ_a^+ > τ_{−b}^−}, which correspond to the upward and downward exits from the strip [−b, a], respectively. In this section, we are concerned with the former (upward exit); the latter (downward exit) will be discussed in a later section, as its derivation depends on alternative methods.
Let us denote by ρ(·) the spectral radius of a matrix; that is, if Λ(A) denotes the spectrum of a matrix A, then ρ(A) = max{|λ| : λ ∈ Λ(A)}.

Two-sided exit theory for non-singular A_1
Theorem 2. Assume that A_1 is non-singular. Then, there exists a matrix W : N → R^{N×N} with W(0) = 0, which is invertible and satisfies P(τ_a^+ < τ_{−b}^−, J_{τ_a^+}) = W(b) W(a + b)^{−1} (3.5), where W(n) = (G^{−n} − P(τ_{−n} < ∞, J_{τ_{−n}})) L (3.6), with p.g.m. Σ_{n≥0} z^n W(n) = (F(z) − I)^{−1} for z ∈ (0, 1] such that z ∉ Λ(G) (3.7), and representation W(n) = G^{−n} L^+(n) (3.8), where L^+(n) := E(L(0, τ_n)) denotes the expected number of times the process visits 0 before hitting level n ∈ N^+.
Proof. Following the same line of logic as in [15], we note that the events {τ_a^+ < τ_{−b}^−} and {τ_a < τ_{−b}} are equivalent, due to the upward skip-free property of {X_n}_{n≥0}; this follows from the fact that in order to drop below −b and then hit a, the process must visit −b on the way. Thus, conditioning on the possible events and employing the Markov additive property, we obtain G^a = P(τ_a < τ_{−b}, J_{τ_a}) + P(τ_{−b} < τ_a, J_{τ_{−b}}) G^{a+b} and P(τ_{−b} < ∞, J_{τ_{−b}}) = P(τ_{−b} < τ_a, J_{τ_{−b}}) + P(τ_a < τ_{−b}, J_{τ_a}) P(τ_{−(a+b)} < ∞, J_{τ_{−(a+b)}}). Now, by recalling that P(τ_a < ∞, J_{τ_a}) = G^a, solving the second equation w.r.t. P(τ_a > τ_{−b}, J_{τ_{−b}}) and substituting the resulting equation into the first yields P(τ_a < τ_{−b}, J_{τ_a}) (I − P(τ_{−(a+b)} < ∞, J_{τ_{−(a+b)}}) G^{a+b}) = G^a − P(τ_{−b} < ∞, J_{τ_{−b}}) G^{a+b}. (3.9) Finally, by multiplying through by G^{−(a+b)} on the right, we have P(τ_a < τ_{−b}, J_{τ_a}) (G^{−(a+b)} − P(τ_{−(a+b)} < ∞, J_{τ_{−(a+b)}})) = G^{−b} − P(τ_{−b} < ∞, J_{τ_{−b}}), that is, P(τ_a < τ_{−b}, J_{τ_a}) = W(b) W(a + b)^{−1}, given that W(·)^{−1} exists (see Remark 7). Note that the above result is derived in the absence of the occupation mass matrix L within the definition of W(n), reinforcing the point that the scale matrix is uniquely defined only up to a (matrix) multiplicative constant.
The choice for including L in the definition of W(n), which is only well defined as long as L has finite entries (see Remark 3 for conditions), will become apparent in the following.
To prove Eq. (3.7), let us take the transform of the scale matrix and recall the definition given in Eq. (3.6), to obtain Σ_{n≥0} z^n W(n) = Σ_{n≥0} z^n G^{−n} L − Σ_{n≥0} z^n P(τ_{−n} < ∞, J_{τ_{−n}}) L (3.10), where the first term on the r.h.s. satisfies Σ_{n≥0} z^n G^{−n} L = (I − zG^{−1})^{−1} L (3.11) for all z ∈ (0, γ), where γ := min{|λ_i| : λ_i ∈ Λ(G)}. For the second term of Eq. (3.10), under the conditions of Theorem 1 and by Proposition 1, we have Σ_{n≥0} z^n P(τ_{−n} < ∞, J_{τ_{−n}}) L = Σ_{n≥0} z^n L(−n, ∞) = (I − F(z))^{−1} − Σ_{n≥1} z^{−n} G^n L, which holds true as long as G is invertible, and this follows from the assumption that the matrix A_1 is non-singular (see also Remark 7). For z ∈ (ρ(G), 1], the geometric series on the r.h.s. converges and the above equation can be re-written as Σ_{n≥0} z^n P(τ_{−n} < ∞, J_{τ_{−n}}) L = (I − F(z))^{−1} − z^{−1}G (I − z^{−1}G)^{−1} L (3.12), once we prove a common domain of convergence, i.e., that (I − F(z))^{−1} exists for some z ∈ (ρ(G), 1]. In fact, for ρ(G) < 1 (see Lemma 4 in [8]), it can be shown that the zeros of det[I − F(z)] coincide with the eigenvalues of G for z ∈ (0, 1] and thus the above holds. Now, note that if we multiply Eq. (3.12) from the left by I − zG^{−1} and from the right by I − F(z), then both sides of the resulting equation are analytic for z ∈ (0, 1]. Hence, since the matrices I − zG^{−1} and I − F(z) are invertible as long as z ∉ Λ(G), and thus for z ∈ (0, γ), the aforementioned multiplication can be reversed and Eq. (3.12) holds for z ∈ (0, γ) by analytic continuation. The result follows by substituting the above equation, along with Eq. (3.11), into Eq. (3.10), noting that (I − zG^{−1})^{−1} = −z^{−1}G (I − z^{−1}G)^{−1} (so that the terms involving L cancel), and using analytic continuation to extend the domain from z ∈ (0, γ) to z ∈ (0, 1] such that z ∉ Λ(G).
To prove Eq. (3.8), we use similar arguments to those used for the result of Proposition 1 to show that, for n ≥ 0, L = L^+(n) + G^n P(τ_{−n} < ∞, J_{τ_{−n}}) L, where L^+(n) := E(L(0, τ_n)), from which it follows that G^{−n} L^+(n) = G^{−n} L − P(τ_{−n} < ∞, J_{τ_{−n}}) L. Multiplying this expression through by G^{−n} (on the left) and recalling the form of W(n) given in Eq. (3.6), the result follows immediately. So far we have assumed only that ρ(G) < 1, hence, by Remark 4, that either v < 1, or v = 1 and κ′(0) > 0. To handle the remaining (limiting) case of v = 1 and κ′(0) ≤ 0, we can follow the proof of Theorem 1 in [15]. Namely, we use the representation (3.8) of the scale function, take v → 1 and observe that the matrices G, L^+(n) and F(z) converge appropriately.
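The representation W(n) = G^{−n}L^+(n) and the two-sided exit identity of Theorem 2 (in the form W(b)W(a + b)^{−1}) can be checked numerically in a small example with jumps in {+1, 0, −1}: L^+(n) is obtained from the fundamental matrix of the absorbed chain (with a deep truncation level standing in for the unbounded downward half-line), and the result is compared against a direct linear-system computation of the exit probability. All parameters are hypothetical, not the paper's example:

```python
import numpy as np

# Hypothetical blocks (jumps in {+1, 0, -1}, positive drift, no killing).
A1 = np.array([[0.35, 0.15], [0.20, 0.20]])
A0 = np.array([[0.20, 0.10], [0.15, 0.25]])
Am1 = np.array([[0.10, 0.10], [0.10, 0.10]])
N = 2  # number of phases

# Fundamental matrix G via the first-step fixed point G = A_1 + A_0 G + A_{-1} G^2.
G = np.zeros((N, N))
for _ in range(2000):
    G = A1 + A0 @ G + Am1 @ G @ G

def interior_system(lo, hi):
    """Transient part Q and absorption matrix R into level `hi`, for the
    chain on levels lo < x < hi with lo and hi absorbing."""
    levels = list(range(lo + 1, hi))
    idx = {x: k for k, x in enumerate(levels)}
    n = len(levels) * N
    Q = np.zeros((n, n)); R = np.zeros((n, N))
    for x in levels:
        r = idx[x] * N
        for dx, Ablk in ((1, A1), (0, A0), (-1, Am1)):
            y = x + dx
            if y == hi:
                R[r:r + N, :] += Ablk
            elif y in idx:
                c = idx[y] * N
                Q[r:r + N, c:c + N] += Ablk
    return Q, R, idx

a, b, B = 3, 2, 80

# Direct two-sided exit probability P(tau_a^+ < tau_{-b}^-, J), started at level 0.
Q, R, idx = interior_system(-b, a)
U = np.linalg.solve(np.eye(Q.shape[0]) - Q, R)
direct = U[idx[0] * N: idx[0] * N + N, :]

def Lplus(n):
    """Expected visits to level 0 before hitting n (deep truncation at -B)."""
    Qn, _, idxn = interior_system(-B, n)
    Nmat = np.linalg.inv(np.eye(Qn.shape[0]) - Qn)
    r = idxn[0] * N
    return Nmat[r:r + N, r:r + N]

def W(n):
    return np.linalg.matrix_power(np.linalg.inv(G), n) @ Lplus(n)

# Theorem 2: P(tau_a^+ < tau_{-b}^-, J) = W(b) W(a+b)^{-1}
assert np.allclose(direct, W(b) @ np.linalg.inv(W(a + b)), atol=1e-6)
```

The deep truncation at −B is justified by the positive drift: the probability of reaching level −B before level n from 0 is negligible, so the finite absorbing chain approximates the unbounded one to machine precision.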
Remark 6. In [16], the authors derive an equivalent result to Theorem 2 for a continuous-time MAP in the lattice and non-lattice cases. Although their study focuses purely on the continuous-time case, they do point out the connection to the discrete-time model (Remark 6 in [16]) but do not provide any proof or further details.
Remark 7 (Invertibility of L^+(n), G and W(n)). Throughout the proof of the previous theorem, and in results earlier in this paper, we required invertibility of the fundamental matrix G and the scale matrix W(n). We now look at the conditions under which such invertibility holds: (i) Following similar arguments as in [16], since the level process starts at X_0 = 0, the expected number of visits to 0 before the process reaches level n ≥ 0 satisfies L^+(n) = Σ_{k≥0} Π_n^k = (I − Π_n)^{−1}, where Π_n is a probability matrix whose i,j-th element contains the probability of a second visit to level 0, in phase j, before reaching level n, conditioned on the starting point (0, i). Note that Π_n is clearly a sub-stochastic, non-negative matrix, which implies ρ(Π_n) < 1 and thus that I − Π_n is invertible. Hence, L^+(n) is also invertible, since from the above expression it follows that L^+(n)^{−1} = I − Π_n.
Although Theorem 2 provides a number of representations for W, in the discrete case the scale matrix also satisfies a recursive relation.The recursion below generalises the recursion for the scale function derived in [3] and has also been discussed in [16].
Corollary 1. For b ≥ 1, the scale matrix W(·) defined in Theorem 2 satisfies the recursive equation W(b + 1) = A_1^{−1} (W(b) − Σ_{k=0}^{b−1} A_{−k} W(b − k)), (3.14) with W(1) = G^{−1}(I − Π_1)^{−1}. Proof. To prove the recursive relation, consider the two-sided hitting probability P(τ_a^+ < τ_{−b}^−; J_{τ_a^+}) and condition on the first time step. Then, for a, b ≥ 1, we have P(τ_a^+ < τ_{−b}^−; J_{τ_a^+}) = Σ_{m=−(b−1)}^{1} A_m W(b + m) W(a + b)^{−1}, where the last equality follows from the Markov additive property (levels m ≤ −b contribute zero, as the strip has already been exited downwards). Further, using Theorem 2 and multiplying on the right by W(a + b), the above expression can be re-written as W(b) = Σ_{m=−(b−1)}^{1} A_m W(b + m), and the recursive expression given in Eq. (3.14) follows directly after some basic algebraic manipulations. For W(1), recall from Remark 7 that L^+(1) = (I − Π_1)^{−1} and also that, by Eq. (3.8), W(1) = G^{−1} L^+(1). Remark 8. Under the same line of logic as Remark 5, we recall that the above results are more general than explicitly stated. For example, by superimposing killing, Eq. (3.5) is equivalent to E(v^{τ_a^+} 1_{(τ_a^+ < τ_{−b}^−)}; J_{τ_a^+}) = W_v(b) W_v(a + b)^{−1} for v ∈ (0, 1], where W_v(·) is defined in Eq. (3.4), with the rest of the results amended accordingly.

Two-sided exit theory for arbitrary A_1
In Theorem 2, we rely on the fact that A 1 is non-singular, which in turn ensures G is non-singular by Remark 7.However, it turns out that a similar result can also be derived for arbitrary A 1 in terms of matrices closely related to the W scale matrix.
To see this, let us define the matrices L^−(n) and M(n) (see below) and recall that R is related to the 'time-reversed' counterpart of G (see Remark 5). Then, we have the following theorem. Theorem 3. Assume the matrix A_1 is singular. Then, there exists a matrix V : N → R^{N×N} with V(0) = I, which is invertible and satisfies the two-sided exit identity (3.15), the representation (3.17) and, for z ∈ (0, 1] such that z ∉ Λ(G), the transform (3.18). Proof. Assume now that the matrix G is singular (which, by Remark 7, is equivalent to the requirement that the matrix A_1 is singular). Then, from Eq. (3.9) we can obtain an alternative representation for the two-sided exit probability of the form P(τ_a^+ < τ_{−b}^−, J_{τ_a^+}) = V(b) V(a + b)^{−1}, as long as the matrix V(n), n ≥ 0, is invertible (see below). Moreover, a representation analogous to Eq. (3.13) follows by similar arguments. Now, although we do not discuss in much detail here the definition and probabilistic interpretation of the matrix R, [16] explain that the matrix R^n comprises i,j-th elements representing the expected number of visits to level n ≥ 0 in phase j before the first return to the level 0, given X_0 = 0 and J_0 = i. Hence, using this interpretation, straightforward calculations show that Eq. (3.16) holds for V(n) = L^−(n), as long as this matrix is invertible for all n ≥ 0. Note that this can easily be verified by employing the same argument as in (i) of Remark 7, considering Π_{−n} instead of Π_n.
To prove Eq. (3.17), we use similar arguments to [16] and employ the Markov property, with, in particular, L^−(1) = M(0), from which the result follows directly. Finally, to prove the transform in Eq. (3.18), we again follow the methodology of [16] and first note, by conditioning on the first time period, a relation for M(n), n ≥ 1, whilst for n = 0 a corresponding boundary expression holds. Taking transforms on both sides of Eq. (3.19) and noting the above expression for M(0), after some algebraic manipulations (see Appendix), we obtain an expression in which, in the last equality, we have used the probabilistic interpretation of R to note that R = A_1 L^−(1) = A_1 M(0). The result follows directly by solving the resulting expression for the transform, and holds as long as I − F(z) is invertible.
Although the result of Theorem 3 is clearly more general than that of Theorem 2, as it does not require invertibility of A_1, it deviates from the well-known form and methodology of scale matrices (functions) seen throughout the literature. As such, since the purpose of this paper is to demonstrate and derive the fully discrete analogue of the well-known 'scale theory' for MACs, we will assume the invertibility of A_1 throughout the rest of this paper, but point out that all of the following results could also be generalised to the arbitrary case (see [16] for more details of such results in the continuous-time setting). At this point, it is natural to consider the corresponding downward exit problems (one- and two-sided). However, in order to do so, we must first discuss some fluctuation identities for the associated 'reflected' MAC, which is the subject of the following section.

Exit Problems For Reflected MACs
In this section, we deviate from the basic MAC described above and consider the associated two-sided reflection of the process {X_n}_{n≥0} with respect to a strip [−d, 0], with d > 0. The choice of strip is purely for notational convenience and can easily be converted to the general strip [−b, a] by shifting the process appropriately. The main result of this section is given in Theorem 4, which is interesting in its own right but is also used to derive the aforementioned downward exit problems of the original (un-reflected) MAC.
Following the same line of logic as in [15], let us define the reflected process by H_n := X_n + R_n^− − R_n^+, where R_n^− and R_n^+ are known as the regulators of the reflected process at the barriers −d and 0, respectively, which ensure that the process {H_n}_{n≥0} remains within the strip [−d, 0] for all n ∈ N. Note that in continuous time and space, the reflected process {H_n}_{n≥0} corresponds to the solution of the so-called Skorokhod problem (see [23]). By the construction of {H_n}_{n≥0}, it is clear that {R_n^−}_{n≥0} and {R_n^+}_{n≥0} are both non-decreasing processes, with R_0^− = R_0^+ = 0 when X_0 ∈ [−d, 0], which increase only during periods when H_n = −d and H_n = 0, respectively. Moreover, since {X_n}_{n≥0} is 'spectrally negative', the upward regulator {R_n^+}_{n≥0} increases by at most one per unit time. Now, let us denote by ρ_k the right inverse of the regulator {R_n^+}_{n≥0}, defined by ρ_k := inf{n ≥ 0 : R_n^+ > k}, so that the level process {R_{ρ_k}^−}_{k≥0} has non-negative jumps. Thus, in a similar way as for the original MAC (X, J), we can define its p.g.m., given X_0 = 0, by E(z^{R_{ρ_0}^−}; J_{ρ_0}). Remark 9. In the continuous case, X_0 = 0 is a regular point of (0, ∞) and thus it follows that ρ_0 = 0 a.s.
Consequently, $\mathbb{E}\big(z^{R^-_{\rho_0}}; J_{\rho_0}\big) = I$ (see [15] for details). However, in the fully discrete set-up, we have already mentioned that $R^-_{\rho_0}$ is random for $X_0 = 0$, due to the possibility of the process experiencing a negative jump in the first time period, in which case $\rho_0 \neq 0$. Moreover, the process may drop below the lower level $-d$ (resulting in a jump in $\{R^-_n\}_{n \ge 0}$) before the stopping time $\rho_0$, which justifies the choice of the p.g.m. $\mathbb{E}\big(z^{R^-_{\rho_0}}; J_{\rho_0}\big)$ above, compared to $\mathbb{E}\big(z^{R^-_{\rho_1}}; J_{\rho_1}\big)$ in the continuous case (see [15]). On the other hand, we note that if $X_0 = 1$, then $\mathbb{E}_1\big(z^{R^-_{\rho_0}}; J_{\rho_0}\big) = I$, since $R^+_0 = 1$ and thus $\rho_0 = 0$. The latter observation will play a crucial role in analysing the distribution of $(R^-_{\rho_0}, J_{\rho_0})$, which is given in the following theorem in terms of the second $v$-scale matrix, denoted $Z_v$ and defined for $z \in (0, 1]$ and $v \in (0, 1]$ in Eq. (4.3), with $Z_v(z, 0) = I$ for all $z \in (0, 1]$ and $v \in (0, 1]$, and $Z_1(z, n) =: Z(z, n)$.
Proof. The proof of this theorem follows a similar line of logic to the proof of Theorem 1; however, due to the nature of the reflected process, the calculations require greater attention. First note that, since $H_{\rho_k} = 0$ for each $k \in \mathbb{N}$, we have
$$X_{\rho_k} = R^+_{\rho_k} - R^-_{\rho_k} = (k + 1) - R^-_{\rho_k},$$
so that $\{(X_{\rho_k}, J_{\rho_k})\}_{k \ge 0}$ is a MAC having unit (upward) drift and downward jumps. Moreover, its occupation mass in the bivariate state $(y, j) \in \mathbb{Z} \times E$ is defined by $L^*(y, j, \infty) = \sum_{k=0}^{\infty} \mathbb{1}_{(X_{\rho_k} = y,\, J_{\rho_k} = j)}$ and thus the occupation mass formula in Eq. (2.5) applies. Taking expectations on both sides of this expression, conditioned on the initial state $X_0 = x \in [-d, 1]$, and writing in matrix form yields Eq. (4.5), where $L^*_x(m, \infty)$ is the infinite-time occupation matrix with $(i, j)$-th element $\mathbb{E}\big(L^*(m, j, \infty) \,\big|\, X_0 = x, J_0 = i\big)$. Let us now treat the left-hand side and right-hand side of Eq. (4.5) separately. Firstly, using the fact that $\rho_k$ is a stopping time, along with the strong Markov and Markov additive properties of $\{R^-_{\rho_k}\}_{k \ge 0}$, the l.h.s. of Eq. (4.5) can be re-written in geometric-series form for all $z \in (0, 1]$ such that $z > \rho(F^*(z))$. We note that, since $\{(X_{\rho_k}, J_{\rho_k})\}_{k \ge 0}$ is a MAC, it holds that $\mathbb{E}\big(z^{-X_{\rho_k}}; J_{\rho_k}\big) = \big[\mathbb{E}\big(z^{-X_{\rho_0}}; J_{\rho_0}\big)\big]^{k+1}$. Now, let us define $\tau_1 = \inf\{\rho_k \ge 0 : X_{\rho_k} = 1\}$ and $G$ to be the probability transition matrix with elements $\mathbb{P}(\tau_1 < \infty, J_{\tau_1} = j)$, which is sub-stochastic (implying $\rho(G) < 1$) in the case of killing, or of no killing and negative drift. Then, based on similar arguments to those discussed in the proof of Theorem 2, since the eigenvalues of $G$ coincide with the zeros of $\det\big(I - \mathbb{E}(z^{-X_{\rho_0}}; J_{\rho_0})\big) = \det\big(I - z^{-1} F^*(z)\big)$, we conclude that $I - z^{-1} F^*(z)$ is invertible for $z \in (\rho(G), 1]$. In fact, since $\{X_n\}_{n \ge 0}$ is an upward skip-free process, it follows that $\tau_1 = \tilde{\tau}_1$ for $X_0 \in [-d, 1]$, which implies $G = \tilde{G}$, and thus $I - z^{-1} F^*(z)$ is invertible for $z \in (\rho(\tilde{G}), 1]$. Hence, by applying the same analytic continuation argument as in Theorem 2, the above expression holds for $z \in (\rho(\tilde{G}), 1)$. Now, for the r.h.s. of Eq.
(4.5), let us introduce the matrix quantity $C_{-y}$, whose individual $(i, j)$-th elements denote the probability of the process $\{X_n\}_{n \ge 0}$ first hitting some level $-y < 0$ from the initial states $X_0 = 0$ and $J_0 = i$, and then hitting the upper level $(d + 1) - y$ whilst $J_n = j$. Using this quantity, it is possible to express the expected occupation masses $\mathbb{E}\big(L^*_x(m, \infty)\big)$ for every level $m$. To see this, note that $L^*(m, j, \infty)$ corresponds to the (local) time points $\rho_k$ (increases in $\{R^+_n\}_{n \ge 0}$) such that $X_{\rho_k} = m$ and $J_{\rho_k} = j$ or, alternatively, time points $k \ge 0$ for which $\{R^+_n\}_{n \ge 0}$ is increasing and $X_k = m$ and $J_k = j$. Then, for $m > 0$, the first increase of $L^*(m, j, \infty)$ occurs at $\tau_m$; otherwise, for $m \le 0$, $\{X_n\}_{n \ge 0}$ has to first visit the state (level) $m - (d + 1)$ to ensure that, at the next time the process $\{X_n\}_{n \ge 0}$ visits the level $m$, the 'reflected process' $\{H_n\}_{n \ge 0}$ was at its upper boundary in the previous time period ($H_{n-1} = 0$), resulting in an increase of $\{R^+_n\}_{n \ge 0}$. Every subsequent increase of $L^*(m, j, \infty)$ is obtained in a similar way. Thus, the above equation follows by application of the strong Markov and Markov additive properties.
Taking transforms on both sides of the above equation yields Eq. (4.8) and thus, after some algebraic manipulations (see Appendix), Eq. (4.8) can be re-written as Eq. (4.9), where $Z(z, n)$ is defined in Eq. (4.3). Finally, by combining Eqs. (4.6) and (4.9), we obtain Eq. (4.10) for $z \in (\rho(\tilde{G}), 1)$. To complete the proof, it remains to determine the form of the matrix $F^*(z)$. To do this, set $x = 1$ in the above expression which, after using the fact that $\mathbb{E}_1\big(z^{R^-_{\rho_0}}; J_{\rho_0}\big) = I$ (since in this case $R^+_0 = 1$ and thus $\rho_0 = 0$) and taking inverses on both sides, gives the corresponding identity. Note that this expression shows that $Z(z, d + 1)$ is an invertible matrix as long as $W(d + 1)$ is invertible and, after solving w.r.t. $F^*(z)$, also gives the form of $F^*(z)$. The result follows by substituting the above expression for $F^*(z)$ back into Eq. (4.10), re-arranging, and employing analytic continuation in a similar way as previously.
Remark 10. We point out that setting $X_0 = x = 0$ in the result of Theorem 4 gives an equivalent representation of $F^*(z)$ in terms of the $Z$ scale matrix only. Moreover, we note that, based on its definition, it is also possible to use the recursive relation for $W(\cdot)$, given in Corollary 1, to obtain explicit values of $Z(z, \cdot)$.
Although the result of Theorem 4 is interesting in its own right, its main importance in this paper is as a stepping stone for proving a similar result for the associated one-sided reflected process (see Section 4.1 below) and, consequently, the two-sided and one-sided (as a limiting case) downward exit problems for the original (non-reflected) MAC.

One-Sided Reflection
As discussed in the previous section, the downward exit problems can be solved using an auxiliary result for the one-sided (lower) reflected process. As such, let us define the one-sided reflected process $H_n = X_n + R^-_n$, where $\{R^-_n\}_{n \ge 0}$ is the regulator at a lower reflecting barrier at the level $-b \le 0$. Note that this is equivalent to shifting the two-sided reflected process of the previous section and letting the upper reflecting barrier tend to infinity. Then, by direct application of Theorem 4, we get the following corollary.
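As a hedged illustration of this one-sided construction (the increments, barrier level and helper name are ours, not the paper's), the lower regulator admits the familiar running-infimum representation when $X_0 = 0$, which the following sketch implements.

```python
from itertools import accumulate

def reflect_lower(increments, b):
    """One-sided (lower) reflection at -b of a walk started at X_0 = 0.

    Uses the explicit regulator R^-_n = max(0, -b - min_{k<=n} X_k),
    so that H_n = X_n + R^-_n never falls below -b.
    """
    X = list(accumulate(increments, initial=0))
    running_min, out = 0, []
    for x in X:
        running_min = min(running_min, x)
        R = max(0, -b - running_min)
        out.append((x + R, R))        # (reflected value H_n, regulator R^-_n)
    return out

path = reflect_lower([-3, 1, -1, 1, 1, 1], b=1)
```

With no upper barrier, $H$ is unbounded above; letting the upper barrier of the previous section tend to infinity recovers exactly this construction.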

Downward Exit Problems
For the one- and two-sided downward exit problems, we are interested in the events $\{\tau^-_{-b} < \infty\}$ and $\{\tau^-_{-b} < \tau^+_a\}$, respectively. Unlike the upward exit, due to the possibility of downward jumps in the MAC, the stopping time $\tau^-_{-b}$ is not necessarily equivalent to the first hitting time of the level $-b < 0$, i.e., it is possible that $\tau^-_{-b} \neq \tau_{-b}$. It is for this reason that we cannot employ the Markov-type structure seen for the upward exit identities and, instead, rely on the results for the reflected processes of the previous section.
Although the one-sided downward exit problem would appear easier to derive in the first instance, it turns out that it can easily be obtained as a limiting case of the related two-sided problem and, as such, is considered in the following; once the two-sided identity is available, re-arranging the resulting expression and using the identities of Theorem 2 and Corollary 2, the result follows immediately.
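The distinction between down-crossing and hitting can be made concrete with a toy path (entirely made up for this sketch; the convention $\tau^-_{-b} = \min\{n : X_n \le -b\}$ is adopted here purely for illustration):

```python
def first_times(path, b):
    """First down-crossing time of level -b versus first exact hit of -b.

    For an upward skip-free walk the *upward* analogues always coincide;
    downward jumps, however, may overshoot -b without ever visiting it.
    """
    tau_minus = next((n for n, x in enumerate(path) if x <= -b), None)
    tau_hit = next((n for n, x in enumerate(path) if x == -b), None)
    return tau_minus, tau_hit

# a downward jump of size 3 carries the walk from -1 straight to -4,
# so level -2 is down-crossed at n = 4 but never actually hit
tm, th = first_times([0, 1, 2, -1, -4], b=2)
```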

One-Sided Exit Downward
For the one-sided exit problem, we are now interested in the event of down-crossing the level $-b < 0$ whilst the upward movement of the MAC is unrestricted, i.e., $\{\tau^-_{-b} < \infty\}$, which, as already mentioned, can be viewed as a limiting case of the corresponding two-sided problem as $a \to \infty$. In fact, this is the argument used to obtain the following one-sided downward exit identity.
Corollary 4. Assume we are not in the case of no killing and zero drift, i.e., it is not true that both $v = 1$ and $\kappa'(1) = 0$. Then $L$ is invertible and, for $z \in (0, 1]$ such that $z \notin \Lambda(\tilde{G})$ and $b > 0$, we have Eq. (5.1).
Proof. Firstly, the invertibility of $L$ follows from Remark 3, since it cannot hold that both $v = 1$ and $\kappa'(1) = 0$. On the other hand, Eq. (5.2) follows from taking the limit of the two-sided case (see Corollary 3) as the upper barrier tends to infinity, i.e., $a \to \infty$. In order to evaluate the limit of $W(b)\, W(a + b)^{-1} Z(z, a + b - 1)$ as $a \to \infty$, note that, by the definition of the scale matrix $Z(z, n)$, this limit can be expressed, for $z \in (0, \gamma)$, in terms of the matrix $L$, the infinite-time occupation mass matrix defined in Proposition 1. Finally, by analytic continuation, it can be shown that the above holds for all $z \in (0, 1]$ such that $z \notin \Lambda(\tilde{G})$ and thus, by taking the limit as $a \to \infty$ in Corollary 3, using the above expressions and re-arranging, we obtain the result.
Remark 11. We point out once again that, by explicitly imposing killing, Corollary 3 and consequently Corollary 4 equivalently yield the corresponding joint transforms for $v \in (0, 1]$, where $W_v(\cdot)$ and $Z_v(z, \cdot)$ are defined as in Eqs. (3.4) and (4.3), respectively.
Here, $\sum_{k=0}^{\infty} C^k_{-(d+1)} = \big(I - C_{-(d+1)}\big)^{-1}$ in the presence of killing, since $C_{-(d+1)}$ is a sub-stochastic matrix and thus its Perron-Frobenius eigenvalue is less than $1$. Now, the first term inside the brackets of the last expression is clearly equivalent to $-\big(I - z^{-1}\tilde{G}\big)^{-1} \tilde{G}^{-x}$ for all $z \in (\rho(\tilde{G}), 1]$, whilst, by the change of variable $k = m - (d + 1) - x$, the second term within the brackets becomes $z$

5.1 Two-Sided Exit Downward: $\{\tau^-_{-b} < \tau^+_a\}$
For the two-sided downward exit problem, we are interested in the time of exiting the fixed 'strip' $[-b, a]$ on the event $\{\tau^-_{-b} < \tau^+_a\}$. Using the result for the transform of the downward regulator of the one-sided reflected process, we obtain the following corollary.

Corollary 3. For $z \in [0, 1]$ such that $z \notin \Lambda(\tilde{G})$ and any $a, b > 0$, it holds that
$$\mathbb{E}\Big(z^{-X_{\tau^-_{-b}}};\ \tau^-_{-b} < \tau^+_a,\ J_{\tau^-_{-b}}\Big) = z^{b-1} Z(z, b - 1) - W(b)\, W(a + b)^{-1} Z(z, a + b - 1).$$

Proof. Consider the one-sided reflected process of Section 4.1. Then, by the strong Markov and Markov additive properties, it follows that, for $b > 0$, we have