Analysis of Infinite Dimensional Dynamical Systems by Set-Oriented Numerics

Von der Fakultät für Elektrotechnik, Informatik und Mathematik der Universität Paderborn zur Erlangung des akademischen Grades Doktor der Naturwissenschaften – Dr. rer. nat. – genehmigte Dissertation von Adrian Ziessler, Paderborn 2018

Gutachter: Prof. Dr. Michael Dellnitz, Prof. Dr. Oliver Junge
Tag der mündlichen Prüfung: 17.05.2018

Acknowledgements

I would like to thank all those people who made this thesis possible and an unforgettable experience for me. First of all, I would like to express my gratitude to my advisor Prof. Dr. Michael Dellnitz for all his guidance, enthusiasm and support. I am especially grateful for the patience he has shown while introducing me to the field of (infinite dimensional) dynamical systems and set-oriented numerics. I am grateful to Prof. Dr. Oliver Junge from the Technical University of Munich for all the fruitful discussions we had during his visits to Paderborn. His continuous work on GAIO made it easy for me to develop the set-oriented methods introduced in this thesis. I would also like to thank my past and present colleagues at Paderborn University: Dr. Sebastian Peitz, Katharina Bieker, Bennet Gebken, Raphael Gerlach, Dr. Mirko Hessel-von Molo, Marianne Kalle, Jasmin Krauß and Dr. Karin Mora. It was always a very pleasant and enjoyable work atmosphere. Pozdrawiam serdecznie moich rodziców Urszulę i Leszka oraz moją siostrzyczkę Stellę. Dziękuję wam bardzo za całe wsparcie, które otrzymałem w ostatnich latach. Wasz syn/brat kocha was bardzo. Many thanks to my friends from Stammtisch for all the fun we had together in the last years. I am looking forward to many more awesome years! Finally, I would like to thank Sarah Wiegand. Life has never been so exciting before I met you, and I thank you for every single day we spend together. I am very excited to start a new stage of life with you, Finn and Flocke.
Abstract

One central goal in the analysis of dynamical systems is the characterization of the long-term behavior of the system state. To this end, the so-called global attractor is of interest, that is, an invariant set that attracts all the trajectories of the underlying dynamical system. Over the last 20 years, so-called set-oriented numerical methods have been developed that allow one to compute approximations of invariant sets. The basic idea is to cover the objects of interest, for instance attractors or unstable manifolds, by outer approximations which are created via subdivision techniques. However, the applicability of those techniques is restricted to finite dimensional dynamical systems, i.e. ordinary differential equations or discrete dynamical systems. In the first part of this thesis, we will extend the set-oriented numerical methods which are available in the software package GAIO (Global Analysis of Invariant Objects) to the infinite dimensional context. With those extensions we will be able to compute finite dimensional invariant sets for infinite dimensional dynamical systems, e.g. for delay and partial differential equations. The idea is to utilize infinite dimensional embedding techniques in our numerical treatment. This will allow us to construct a finite dimensional dynamical system, the so-called core dynamical system (CDS), on an appropriately chosen observation space. Using the CDS, we can then approximate finite dimensional embedded attractors or embedded unstable manifolds. Furthermore, we will be able to compute approximations of the embedded invariant measure in the observation space, which gives a statistical description of the dynamical behavior of the infinite dimensional dynamical system. In the second part of this thesis we will first construct a numerical realization of the CDS for delay differential equations with (small) state-dependent time delay.
Using the set-oriented techniques introduced in the first part of this thesis, we will compute several embedded invariant sets and invariant measures, e.g. for the well-known Mackey-Glass equation, a model of blood production. Analogously, we will present a numerical realization of the CDS for partial differential equations, where we will show approximations of embedded unstable manifolds of the one-dimensional Kuramoto-Sivashinsky equation. In contrast to the finite dimensional setting, the numerical effort of the set-oriented techniques for the computation of embedded invariant sets depends not only on the dimension of the invariant set and the dimension of the underlying space, but also on the efficient numerical realization of the selection step in the subdivision algorithm or the continuation step in the continuation algorithm, respectively. To this end, in the third part of this thesis, we will present modifications of the subdivision and the continuation method. In particular, those modifications will allow us to compute invariant sets of infinite dimensional dynamical systems more efficiently.

Zusammenfassung

Ein zentrales Ziel in der Analyse dynamischer Systeme ist die Charakterisierung des Langzeitverhaltens der Systemzustände. Zu diesem Zwecke interessiert man sich für den sogenannten globalen Attraktor, welcher eine invariante Menge beschreibt, die alle Trajektorien des zugrundeliegenden dynamischen Systems anzieht. In den letzten 20 Jahren wurden mengenorientierte numerische Verfahren entwickelt, die es erlauben, invariante Mengen zu approximieren. Die Idee hierbei ist, die für uns interessanten Objekte, wie z. B. Attraktoren oder instabile Mannigfaltigkeiten, approximativ mit Boxen zu überdecken. Dies wird mit sogenannten Unterteilungstechniken realisiert.
Jedoch sind diese Verfahren bislang nur für endlich-dimensionale dynamische Systeme definiert, wie zum Beispiel gewöhnliche Differentialgleichungen oder diskrete dynamische Systeme. Im ersten Teil dieser Dissertation werden wir die klassischen Techniken, welche im Software-Paket GAIO (Global Analysis of Invariant Objects) implementiert sind, auf unendlich-dimensionale Systeme erweitern. Diese Erweiterung wird es uns erlauben, endlich-dimensionale invariante Mengen unendlich-dimensionaler dynamischer Systeme, zum Beispiel retardierter oder partieller Differentialgleichungen, zu berechnen. Grundlage unserer Erweiterung auf unendlich-dimensionale Systeme wird ein Einbettungsresultat sein, welches uns erlaubt, ein endlich-dimensionales dynamisches System, das sogenannte core dynamical system (CDS), im Einbettungsraum (im Weiteren Beobachtungsraum genannt) zu konstruieren. Das CDS wird es uns erlauben, endlich-dimensionale eingebettete Attraktoren oder eingebettete instabile Mannigfaltigkeiten zu approximieren. Des Weiteren werden wir auch in der Lage sein, eingebettete invariante Maße zu berechnen, welche uns eine statistische Beschreibung des dynamischen Verhaltens der zugrundeliegenden unendlich-dimensionalen dynamischen Systeme liefern. Im zweiten Teil dieser Arbeit werden wir zuerst eine numerische Realisierung des CDS für retardierte Differentialgleichungen mit (kleiner) zustandsabhängiger Totzeit konstruieren. Dies erlaubt uns, die Algorithmen aus dem ersten Teil dieser Dissertation auf unterschiedliche Beispiele anzuwenden, um eingebettete invariante Mengen oder invariante Maße zu berechnen. Als ein Beispiel betrachten wir dabei die Mackey-Glass-Gleichung, welche ein Modell für die Blutproduktion beschreibt. Analog dazu präsentieren wir eine numerische Realisierung des CDS für partielle Differentialgleichungen und zeigen Approximationen eingebetteter instabiler Mannigfaltigkeiten der eindimensionalen Kuramoto-Sivashinsky-Gleichung.
Im Unterschied zu endlich-dimensionalen dynamischen Systemen hängt der numerische Aufwand der mengenorientierten Techniken für die Berechnung eingebetteter invarianter Mengen nicht nur von der Dimension der invarianten Menge und der Dimension des zugrundeliegenden endlich-dimensionalen Raumes ab, sondern ebenfalls von der numerischen Realisierung des Auswahlschrittes im Unterteilungsalgorithmus bzw. des Fortsetzungsschrittes im Fortsetzungsalgorithmus. Zu diesem Zweck werden wir im dritten Teil dieser Arbeit verschiedene Modifikationen sowohl für das Unterteilungsverfahren als auch für das Fortsetzungsverfahren vorstellen. Insbesondere werden es uns diese Modifikationen erlauben, die invarianten Mengen effizienter zu berechnen.

Contents

1 Introduction
2 Classical set-oriented techniques
   2.1 Theoretical background
   2.2 The subdivision algorithm
   2.3 The continuation method
   2.4 Computation of invariant measures
      2.4.1 Stochastic transition functions and probability measures
      2.4.2 The transfer operator
      2.4.3 Numerical approximation of invariant measures
      2.4.4 Convergence result
3 From finite to infinite dimensional embeddings
   3.1 Takens' embedding theorem
   3.2 Extension to fractal sets
      3.2.1 Prevalence
      3.2.2 Box-counting dimension
   3.3 Infinite dimensional embedding theory
      3.3.1 The thickness exponent
      3.3.2 An infinite dimensional embedding result for linear maps
      3.3.3 An infinite dimensional delay embedding result
4 The core dynamical system
5 Set-oriented techniques for embedded invariant sets
   5.1 Extension to continuous dynamical systems
   5.2 Computation of embedded attractors via subdivision
   5.3 Continuation for embedded unstable manifolds
6 Applications
   6.1 Delay differential equations
      6.1.1 Numerical realization of the delay-coordinate map R
      6.1.2 Numerical realization of the map E
      6.1.3 Examples
   6.2 Partial differential equations
      6.2.1 Numerical realization of the observation map R
      6.2.2 Numerical realization of the map E
      6.2.3 Examples
7 Improving the numerical efficiency
   7.1 A modified selection step for the subdivision algorithm
   7.2 Development of a sequential procedure
      7.2.1 Computation of new observations
      7.2.2 Creating a new box covering
   7.3 Koopman operator based continuation step
      7.3.1 The Koopman operator
      7.3.2 The trust region
      7.3.3 Examples
8 Conclusion and outlook
   8.1 The core dynamical system
   8.2 Set-oriented techniques for embedded invariant sets
   8.3 Improving the numerical efficiency
   8.4 Future work
Bibliography

1 Introduction

Dynamical systems find a wide range of applications for everyday processes such as the flocking of birds, population dynamics, or weather forecasting. Moreover, they allow further insight into areas not only in mathematics but also, e.g., in biology (predator-prey models [May74]), economics [Tu12], or physics (climate models [JBC+07]). A special class are autonomous dynamical systems, where the evolution depends only on the initial state but not on the initial time (cf. [KH97, GH13] for a detailed introduction).
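Throughout the thesis, continuous-time systems are analyzed through the discrete dynamical systems they induce; a flow observed at a fixed time spacing T defines a map whose iteration reproduces the flow at those times. The following Python sketch illustrates this with the standard Lorenz system and a classical RK4 integrator; the system, parameters, and function names are illustrative choices, not code from GAIO or this thesis.

```python
import numpy as np

def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system (a standard textbook example)."""
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def time_T_map(x0, T=0.1, steps=100):
    """Approximate the time-T map of the flow with classical RK4 steps;
    iterating this map defines an autonomous discrete dynamical system."""
    h = T / steps
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        k1 = lorenz(x)
        k2 = lorenz(x + 0.5 * h * k1)
        k3 = lorenz(x + 0.5 * h * k2)
        k4 = lorenz(x + h * k3)
        x = x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return x

x = np.array([1.0, 1.0, 1.0])
for _ in range(50):                  # iterate the discrete system
    x = time_T_map(x)
```

Because the right-hand side does not depend on time, the same map is valid at every step; this is precisely what fails for nonautonomous systems.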
If it also depends on the initial time, then the dynamical system is called nonautonomous. In this thesis we will only consider autonomous dynamical systems. These systems are often modeled by differential equations with finite or infinite dimensional state space. Typical states, e.g. in mechanical systems, are the position coordinates and velocities of mechanical parts. One central goal in the analysis of dynamical systems is the characterization of the long-term behavior of the system state. To this end, the so-called global attractor, i.e. an invariant set that attracts every trajectory of the underlying dynamical system, is of interest. The global attractor also contains all unstable manifolds, which have a crucial influence on the complexity of the system's dynamics [GH13]. In general, the computation of (global) attractors or unstable manifolds by direct simulation is not sufficient (except in very special cases, e.g. a linear system with a stable fixed point). Thus, dedicated algorithms are required to approximate invariant sets. Geometrical approaches can be used to approximate (two-dimensional) unstable manifolds of vector fields (see [KOD+05] for an overview). The approximation by geodesic level sets, for instance, produces a regular mesh that consists of geodesic circles by solving appropriate boundary value problems. Recently, a variational approach has been developed, where an appropriate distance function between a suitably selected finite set of points and its image under the dynamics has to be minimized [JK17]. With this approach, invariant sets of arbitrary topology, dimension, and stability can be approximated. Furthermore, so-called set-oriented numerical methods have been developed over the last two decades (cf., e.g., [DH96, DH97, DJ99, FD03, FLS10]).
Here, the basic idea is to cover the objects of interest such as attractors, invariant manifolds or almost invariant sets by outer approximations which are created via subdivision techniques. The numerical effort depends essentially on the dimension of the global attractor. For instance, it is easier to compute a one-dimensional attractor in a ten-dimensional space than to compute a three-dimensional attractor in a four-dimensional space [DH97]. The set-oriented techniques have been used in several different application areas such as molecular dynamics ([DDJS99, SHD01, DGM+05]), astrodynamics ([DJL+05, DJK+05, DJ06]) or ocean dynamics ([FHR+12]). Recently, a set-oriented numerical methodology has been developed which allows one to perform uncertainty quantification for dynamical systems from a global point of view [DKZ17]. All set-oriented algorithms are implemented in the dynamical systems software package GAIO (Global Analysis of Invariant Objects), which is available for MATLAB (see https://github.com/gaioguy/GAIO ) [DFJ01]. The methodologies discussed above are restricted to dynamical systems modeled by differential equations with finite dimensional state space, i.e. ordinary differential equations or discrete dynamical systems. However, in the case of infinite dimensional dynamical systems, the approximation of (global) attractors is much more complicated than in the finite dimensional case. Numerically, each state in the infinite dimensional space can be discretized in a (possibly very) high-dimensional space, leading to a finite dimensional dynamical system. As a consequence, the set-oriented techniques are in general no longer computationally feasible. Instead, one can perform long-term simulations of arbitrary initial states in order to gain information about the dynamical behavior. However, if one is interested in the computation of finite dimensional invariant sets of infinite dimensional dynamical systems, i.e.
the analysis of the (global) long-term behavior, simple numerical integration of the flow is not sufficient. One can combine, for instance, topological tools like the Conley index with rigorous computations based on computational homology for the global analysis of such systems [DJM04]. Furthermore, for many infinite dimensional dynamical systems a finite dimensional so-called inertial manifold exists, to which trajectories are attracted exponentially fast [CFNT88, FJK+88, Tem97]. Roughly speaking, an inertial manifold determines how an infinite number of degrees of freedom are completely controlled by only a finite number of degrees of freedom. Hence, it suffices to study the dynamics on the inertial manifold, which can be described by an appropriate reduced order model (ROM). Such a ROM can, e.g., be constructed via a Galerkin expansion, which yields an ordinary differential equation in a finite (but possibly still high) dimensional state space. One typical application scenario in which finite dimensional invariant sets in infinite dimensional dynamical systems arise is the analysis of delay differential equations (DDEs) with small time delay ([Dri68, Chi03, CMRV05]). DDEs are also called time-delay systems; in contrast to ordinary differential equations, the time derivative of the unknown function depends not only on the current state but also on past states [Kua93]. Hence, in order to compute a solution of a DDE, an initial history over a time interval, and thus an initial function, has to be known. DDEs are of particular interest when more realistic models are required in which time-delayed aftereffects have to be considered, e.g. for applications in population dynamics, epidemiology and mechanics [Kua93, CR00, AHD07, KM13]. The Mackey-Glass equation [MG77], for instance, is a model of the concentration of circulating blood cells at a specific time, which also depends on the concentration at an earlier time, i.e.
when the request for more blood is made. Another typical application scenario includes the analysis of certain types of dissipative dynamical systems described by partial differential equations(PDEs), including the Kuramoto-Sivashinsky equation[FNST86, JKT90, Rob94], the GinzburgLandau equation[DGHN88], or a scalar reaction-diffusion equation with a cubic nonlinearity[Jol89]. In all these cases, a finite dimensional inertial manifold exists, see e.g.[CFNT88, FJK ` 88, Tem97]. The Kuramoto-Sivashinsky equation, for instance, has attracted a lot of interest as a model for complex spatio-temporal dynamics and has been derived in the context of several extended physical problems, e.g. phase dynamics in reaction-diffusion systems[KT76] or small thermal diffusive instabilities in laminar flame fronts[Siv77]. It is also a great paradigm for finite dimensional dynamics in a PDE and thus can be described by an appropriate finite dimensional dynamical system, e.g. obtained by a Galerkin expansion. Besides Galerkin expansions, another approach to construct a ROM of an infinite dimensional dynamical system is by means of the Koopman operator[Koo31]. This operator is a linear but infinite dimensional operator whose modes and eigenvalues, which are associated with a fixed oscillation frequency and growth/decay rate, capture the evolution of observables describing any, even nonlinear, dynamical system[RMB ` 09, TRL ` 14]. In the Koopman operator context the observables are real valued functions of the state. The spectral properties of the Koopman operator(see e.g.[Mez05, BMM12]) play a crucial role in analyzing the underlying infinite dimensional dynamical system and can, e.g. be used to analyze fluid flows[RMB ` 10, Mez13]. Furthermore, other application scenarios where Koopman operator theory can be applied are power system technologies[SMRH16] or optimal control of PDEs[BBPK16, PK17]. 
Since the Koopman operator is infinite dimensional, one first has to compute an appropriate approximation. To this end, data-driven methods that approximate the leading Koopman eigenfunctions, eigenvalues and modes from a data set of successive snapshots can be used. The extended dynamic mode decomposition (EDMD) [WKR15, KKS16, KNK+18] is one such algorithm and an extension of the dynamic mode decomposition (DMD) (cf. [Sch10, TRL+14, AM17]). The convergence of EDMD towards the Koopman operator has recently been proven in [KM18]. The Koopman operator approach is particularly well suited for sensor measurements or for the case where the underlying system dynamics are unknown. If one, however, is interested in the reconstruction of attractors or the qualitative dynamics from (experimental) data or measurements, the general idea of embedding theory can be used [BK86]. The first result on embeddings in the dynamical systems context is Takens' embedding theorem [Tak81], based on Whitney's embedding theorem [Whi36]. Whitney's embedding theorem states that a generic map from a d-dimensional manifold to a (2d+1)-dimensional Euclidean space is an embedding. In particular, this means that the map is injective. In the context of Whitney's embedding theorem, the 2d+1 independent measurements (observations) can be considered as a map. Takens has shown that a compact manifold of dimension d of a finite dimensional dynamical system can generically be embedded using a delay-coordinate map, which consists of observations of the dynamical behavior at an appropriate number (at least 2d+1) of consecutive snapshots in time. The main difference to Whitney's embedding theorem is that we only need time-delayed versions of one generic observation in order to embed the d-dimensional manifold. Takens' embedding theorem has, e.g., been applied to predict chaotic time series, also in combination with neural networks [FS87, Cas89, PRK92].
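At its core, EDMD reduces to a linear least-squares problem: given snapshot pairs and a dictionary of observables, one fits the matrix that best propagates dictionary evaluations forward in time. The following Python sketch illustrates this idea on a toy linear system with a monomial dictionary; the data, dictionary, and function names are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def edmd(X, Y, dictionary):
    """Minimal EDMD sketch: X, Y hold snapshot pairs with Y[i] = F(X[i]);
    `dictionary` maps a state to a feature vector psi(x).  The returned
    matrix K satisfies psi(F(x)) ~ K @ psi(x) in the least-squares sense."""
    PsiX = np.array([dictionary(x) for x in X])      # shape (n, m)
    PsiY = np.array([dictionary(y) for y in Y])      # shape (n, m)
    C, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)  # PsiX @ C ~ PsiY
    return C.T

# toy data: a linear map, observed through monomials of degree <= 1
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
Y = X @ A.T                                          # Y[i] = A @ X[i]
dictionary = lambda x: np.array([1.0, x[0], x[1]])
K = edmd(X, Y, dictionary)
# the eigenvalues of K approximate Koopman eigenvalues; here they are
# exactly 1.0 (constant observable) and the spectrum of A, 0.9 and 0.8
```

For nonlinear systems, richer dictionaries (higher-order monomials, radial basis functions) are used, and the eigenvalues of K only approximate part of the Koopman spectrum.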
For the latter, in order to model the dynamics of the system that produced the signal, the first step is to reconstruct the attractor of the system by using Takens' embedding theorem and then to train an artificial neural network to predict time series over long time periods [PRK92]. The notion of a generic property in Takens' theorem means, roughly speaking, that the set of embeddings is an open and dense set of smooth maps. The first statement means that an embedding remains an embedding under arbitrarily small perturbations, whereas the second statement means that every smooth map, whether it is an embedding or not, is arbitrarily near an embedding [SYC91]. From an experimentalist's point of view, however, it is desirable to know whether the particular map that results from analyzing the experimental data is an embedding with probability one. The problem with such a statement is that the space of all smooth maps is infinite dimensional. The notion of probability one on infinite dimensional spaces does not have an obvious generalization from finite dimensional spaces: there is no measure on a Banach space that corresponds to the Lebesgue measure on finite dimensional subspaces [HSY92]. In [HSY92] the authors propose a measure-theoretic condition for a property to hold "almost everywhere" on an infinite dimensional vector space, the so-called prevalence. This notion has been used by Sauer et al. in [SYC91], where Takens' theorem has been extended to the context of compact invariant sets of box-counting dimension d, i.e. invariant sets which may have a fractal dimension. There it has been shown that the same observation map can be used for the reconstruction of the invariant set as long as more than 2d consecutive snapshots in time are used. Moreover, the notion of genericity has been replaced by prevalence. Finally, Robinson extended the results obtained in [SYC91] to dynamical systems on infinite dimensional Banach spaces [Rob05].
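The delay-coordinate map at the heart of these results is easy to state concretely: a scalar time series of one observable is mapped to vectors of k lagged values, and for k large enough this reconstruction is one-to-one on the invariant set. A minimal Python sketch (the sine signal and the lag are illustrative choices, not from the cited works):

```python
import numpy as np

def delay_embedding(series, k, lag=1):
    """Delay-coordinate map: send a scalar time series to vectors of k
    lagged observations (s_t, s_{t+lag}, ..., s_{t+(k-1)*lag}) in R^k."""
    n = len(series) - (k - 1) * lag
    return np.array([series[t : t + k * lag : lag] for t in range(n)])

# toy observable of some underlying dynamical system
s = np.sin(0.1 * np.arange(200))
Z = delay_embedding(s, k=3, lag=5)   # 190 reconstructed points in R^3
```

Plotting the rows of Z against each other reveals the reconstructed geometry; for a chaotic signal, the familiar attractor shape emerges in the delay coordinates.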
Robinson's main result is based on the work by Hunt & Kaloshin [HK99], where an infinite dimensional embedding result for linear maps has been proven. It turns out that in Robinson's embedding theorem, the same observation map as defined in Takens' theorem can be used to reconstruct invariant sets of finite box-counting dimension d. However, in addition to the dimension of the set, it is necessary to know how well the invariant set can be approximated by finite dimensional subspaces of the underlying Banach space. This information is encoded in the so-called thickness exponent σ. Therefore, the lower bound on the number of snapshots, 2d, has to be replaced by 2(1+σ)d. Friz & Robinson have shown that in some sense the thickness exponent is inversely proportional to smoothness [FR99]. More precisely, they show that global attractors which are uniformly bounded in the Sobolev spaces H^s for all s > 0 have thickness exponent zero. As a consequence, in certain settings the attractor of the Navier-Stokes equations has thickness exponent zero. In addition, Ott, Hunt & Kaloshin conjectured that many of the attractors arising in dynamical systems defined by the evolution equations of mathematical physics have thickness exponent zero [OHK06]. In summary, even for infinite dimensional dynamical systems, we can generically embed finite dimensional invariant sets by using an appropriate number of single measurements of the state. There are several further extensions of Takens' theorem. For instance, in [Sta99] forced systems are considered, and in [MB04] one can find a stochastic version of this result. In [Rob08, CLR13] embedding results for infinite dimensional nonautonomous dynamical systems have been introduced. The main goal of this thesis is to present set-oriented numerical methods that allow us to compute approximations of finite dimensional invariant sets of infinite dimensional dynamical systems.
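In practice, the bound above fixes the smallest admissible dimension of the observation space: one needs an integer k with k > 2(1+σ)d. A trivial helper makes the arithmetic explicit (this snippet is purely illustrative and not part of the thesis's software):

```python
import math

def min_embedding_dimension(d, sigma):
    """Smallest integer k with k > 2 * (1 + sigma) * d."""
    return math.floor(2.0 * (1.0 + sigma) * d) + 1

# a set with box-counting dimension d = 2 and thickness exponent sigma = 0
# requires k = 5, matching the finite dimensional bound of Sauer et al.
```

Since d may be fractal (non-integer) and σ is often only estimated, k is in practice chosen from (possibly conservative) upper bounds for both quantities.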
Our results in this thesis are based on Hunt & Kaloshin's and Robinson's embedding theorems. In particular, they allow us to embed invariant sets, in what follows called embedded invariant sets, into finite dimensional spaces. Assuming that bounds on the box-counting dimension and the thickness exponent of the invariant set are known, we use the observation map and its inverse to define a finite dimensional dynamical system ϕ, the core dynamical system (CDS), in the observation space of dimension k > 2(1+σ)d. If d and σ are sufficiently small such that k ≤ 7, we can apply the set-oriented techniques developed in this thesis in order to compute the embedded invariant set for ϕ. Otherwise, the computation becomes too expensive and we can only compute projections of the embedded invariant set, which cannot be guaranteed to be one-to-one (cf. [SYC91] for a discussion of this topic). The first method developed in this thesis is based on [DH97], and is a set-oriented subdivision method for the approximation of parts of the embedded (global) attractor, called the relative global attractor. Starting with a box covering of the region in the observation space in which we want to analyze the observations of the dynamical behavior of the infinite dimensional dynamical system, we successively subdivide all boxes and delete those boxes which do not contain any part of the relative global attractor [DHZ16]. We repeat these steps until the desired accuracy of the approximation is obtained. In principle, we use a combination of cell mapping techniques [Hsu87] with a subdivision procedure. The second method developed in this thesis is a set-oriented continuation method based on [DH96], which allows us to approximate embedded unstable manifolds that are also part of the relative global attractor.
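The subdivision scheme just described alternates two steps: a subdivision step that bisects every box in the current collection, and a selection step that discards boxes not hit by the image of the collection. The following Python sketch illustrates the mechanics on a toy contracting map; the box representation, test-point counts, and the map itself are illustrative assumptions, and GAIO's actual implementation differs.

```python
import numpy as np

def subdivision_step(boxes, axis):
    """Bisect every box, given as a (center, radii) pair, along one axis."""
    new = []
    for c, r in boxes:
        r2 = np.array(r, float)
        r2[axis] /= 2.0
        for s in (-1.0, 1.0):
            c2 = np.array(c, float)
            c2[axis] += s * r2[axis]
            new.append((c2, r2))
    return new

def selection_step(boxes, f, n_test=100, seed=0):
    """Keep a box iff it is hit by the image of some box of the collection:
    map random test points from every box under f and mark the boxes hit."""
    rng = np.random.default_rng(seed)
    hit = [False] * len(boxes)
    for c, r in boxes:
        pts = c + (2.0 * rng.random((n_test, len(c))) - 1.0) * r
        for y in map(f, pts):
            for i, (ci, ri) in enumerate(boxes):
                if np.all(np.abs(y - ci) <= ri):
                    hit[i] = True
                    break
    return [b for b, h in zip(boxes, hit) if h]

# toy contracting map: f(x) = x / 2, so the relative global attractor of
# the box Q = [-1, 1]^2 is the origin
f = lambda x: 0.5 * x
boxes = [(np.zeros(2), np.ones(2))]
for step in range(8):
    boxes = subdivision_step(boxes, axis=step % 2)
    boxes = selection_step(boxes, f)
# the surviving boxes form a small covering around the origin
```

The cost structure is already visible in this sketch: every selection step requires one map evaluation per test point per box, which is exactly where an expensive CDS evaluation hurts and which motivates the modifications of Chapter 7.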
This method operates locally in the sense that we start the computation at an unstable fixed point in observation space and compute parts of the embedded unstable manifold by a continuation procedure [ZDG18]. As a final step, we can use the coverings obtained by the set-oriented methods in order to determine a statistical description of the dynamical behavior encoded in the underlying invariant measure. To this end, we can use the results obtained in [DJ99] in a straightforward manner and compute an approximation of the transfer operator on the box covering obtained by our set-oriented techniques. The transfer operator, also known as the Perron-Frobenius operator, is a classical mathematical tool for the numerical analysis of complicated dynamical behavior, see, e.g., [DJ99, SHD01, FP09, Kol10]; recently it has also been used for the analysis of dynamical systems with uncertain parameters [DKZ17] and in the context of stochastic differential equations [FK17]. By definition, the invariant measure is a fixed point of the transfer operator. The numerical approximation of the transfer operator yields a stochastic matrix P_N, and its eigenvector corresponding to the eigenvalue one approximates the embedded invariant measure. Observe that in contrast to classical Galerkin schemes, where the approximation quality is controlled by the number of modes and hence the dimension of the Galerkin space, we can always perform the numerical approximation of the invariant sets via the set-oriented techniques within a finite dimensional space of fixed dimension k. While this is in principle also possible for infinite dimensional dynamical systems possessing a finite dimensional inertial manifold, our method works without any a priori identification of determining modes for the inertial manifold. However, the drawback of our method is that, to guarantee that the observation map is one-to-one, we need to choose k sufficiently large, i.e.
preferably sharp estimates of the upper box-counting dimension d and the thickness exponent σ have to be known a priori. This thesis is organized as follows: in Chapters 2 and 3 the mathematical concepts used throughout this thesis will be introduced. In Chapter 2 the focus lies on set-oriented techniques, where we first briefly introduce some of the basic notions on invariant sets of dynamical systems. Furthermore, we review the contents of [DH97, DH96], i.e. the subdivision and continuation methods for the approximation of relative global attractors and unstable manifolds, respectively. The chapter concludes with an introduction to transfer operators and the numerical approximation of invariant measures [DJ99]. Chapter 3 gives a detailed overview of finite dimensional as well as infinite dimensional embedding results. The aim of this chapter is to present Robinson's infinite dimensional embedding result, which allows us to embed a finite dimensional invariant set of an infinite dimensional dynamical system into a finite dimensional space [Rob05]. In Chapter 4 we will employ the main result of Robinson introduced in Chapter 3 to construct a numerical approach for the computation of compact finite dimensional invariant sets of infinite dimensional dynamical systems. We will construct the core dynamical system on the finite dimensional observation space using a generalization of Tietze's extension theorem [DS88] which is due to Dugundji [Dug51]. By this construction, the dynamics of the CDS on the embedded invariant set is topologically conjugate to that of the underlying infinite dimensional dynamical system on its invariant set [DHZ16] (cf. Figure 1.1 for an illustration of our approach).

Figure 1.1: Illustration of the approach in this thesis: assuming that there exists a finite dimensional invariant set in function space (red), where, e.g., one state corresponds to one snapshot of a three-dimensional flow field described by the Navier-Stokes equations (from Klus et al. [KGPS16]), we use infinite dimensional embedding theorems to embed this invariant set in a finite dimensional space (blue), i.e. the observation space. A state in function space mapped under the observation map corresponds to a point in observation space. Then we use the core dynamical system in the observation space in order to approximate embedded invariant sets and embedded invariant measures via set-oriented techniques.

In Chapter 5 set-oriented techniques for the approximation of embedded invariant sets are developed. First, we will extend the subdivision technique introduced in Section 2.2 to continuous but not necessarily homeomorphic dynamical systems. Then we will show how to approximate embedded attractors via the subdivision method [DHZ16]. Finally, we present an extension of the continuation method introduced in Section 2.3 that allows us to approximate embedded unstable manifolds [ZDG18]. The numerical approach introduced in Chapter 4 and Chapter 5 is applicable to arbitrary infinite dimensional dynamical systems described by a Lipschitz continuous operator on a Banach space. In Chapter 6, however, we will restrict our attention to DDEs with (small) state-dependent time delay as well as PDEs. In the case of DDEs, we will propose one specific numerical realization of the CDS, where we choose the observation map to be the delay-coordinate map according to [Rob05]. Then we show how to numerically construct the inverse of this map. The inverse map takes a point from the observation space and constructs an initial function in the underlying function space [DHZ16]. In fact, the particular realization of this inverse map strongly depends on the observations we use. As a consequence, if the numerical realization is difficult to achieve, then this is an indicator that we have to choose other observations.
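For concreteness, once an initial function is prescribed, a DDE such as the Mackey-Glass equation mentioned above can be integrated by a simple fixed-step scheme that keeps the recent history in a buffer. The following Python sketch uses forward Euler, a constant initial history, and the standard constant-delay parameters from the literature; all of this is an illustrative assumption, since the thesis treats (small) state-dependent delays and uses its own CDS realization.

```python
import numpy as np

def mackey_glass(T=100.0, h=0.01, tau=17.0, beta=0.2, gamma=0.1, n=10,
                 history=0.5):
    """Forward-Euler integration of the constant-delay Mackey-Glass equation
        x'(t) = beta * x(t - tau) / (1 + x(t - tau)**n) - gamma * x(t)
    with the constant initial history x(t) = history for t <= 0."""
    lag = int(round(tau / h))        # delay measured in steps
    steps = int(round(T / h))
    x = np.empty(steps + 1)
    x[0] = history
    buf = np.full(lag, history)      # ring buffer of the last `lag` values
    for i in range(steps):
        xd = buf[i % lag]            # delayed value x(t - tau)
        x[i + 1] = x[i] + h * (beta * xd / (1.0 + xd ** n) - gamma * x[i])
        buf[i % lag] = x[i]          # store x(t) for reuse at time t + tau
    return x

traj = mackey_glass()
```

The state of the DDE at time t is the whole function segment on [t - tau, t] (here, the buffer contents), which is exactly why the underlying dynamical system is infinite dimensional and why the inverse of the observation map must produce a function rather than a point.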
The numerical construction of the CDS will allow us to use the set-oriented methods developed in Chapter 5 to approximate invariant sets and invariant measures for several DDEs. In the second part of Chapter 6 we show a numerical realization of the CDS for PDEs [ZDG18]. Following [HK99], we use a linear observation map which projects a function from function space onto an orthonormal basis computed with the Proper Orthogonal Decomposition (POD) [Sir87]. Here, we assume that each state variable can be represented by a projection onto a set of POD basis functions. The POD coefficients are then our observables; multiplying each coefficient by its corresponding basis function and summing the resulting linear combination reproduces an approximation of the original state. We conclude this chapter by presenting several unstable manifolds of the one-dimensional Kuramoto-Sivashinsky equation in different regimes.

The CDS allows us to use set-oriented techniques in a finite dimensional space even when the dynamical system is infinite dimensional. However, due to the construction of the CDS, each evaluation also includes evaluations of the underlying infinite dimensional dynamical system, e.g. the time-T-map of a PDE. The crucial steps in the subdivision and continuation algorithms are the selection and continuation step, respectively. In both steps we have to check whether the image of a box under the CDS has a non-empty intersection with another box in the box collection. Numerically, this is realized as follows: within each box we choose a possibly large number of test points, map each test point under the CDS and check if another box is hit. Thus, the evaluation of one selection or continuation step may be very expensive. Therefore, in comparison to the classical setting, i.e. the case of a finite dimensional dynamical system, the numerical effort not only depends on the dimension of the underlying space and the dimension of the attractor, but also on the efficient evaluation of the selection and continuation step, respectively. To this end, modifications of both steps are presented in Chapter 7. In the first part of Chapter 7 we will develop a modified selection step in which the number of function evaluations of the CDS is decreased by a factor of approximately two. By storing information from previous selection steps of the subdivision algorithm and using a slightly modified selection step, this strategy decreases the overall computational time by a factor of approximately four. In the second part of Chapter 7 we develop a sequential procedure for the subdivision method which adaptively increases the embedding dimension if it has been chosen too low initially, without restarting the algorithm from scratch. This procedure is particularly useful for dynamical systems for which a priori estimates of the upper box-counting dimension and the thickness exponent are not known. Finally, we present a Koopman operator based continuation method in which the evaluation of the CDS is partially replaced by local reduced order models (ROMs) based on the Koopman operator [ZPD18]. We start by introducing the Koopman operator and its numerical approximation via EDMD. Then we will show how to apply the Koopman operator to the continuation step in order to compute approximations of embedded unstable manifolds more efficiently. We conclude each part of Chapter 7 with at least one example.

Parts of this thesis grew out of publications to which the author has made substantial contributions. They are referenced at the beginning of the respective chapters.

2 Classical set-oriented techniques

The purpose of this chapter is to introduce and review the mathematical concepts of set-oriented numerical methods.
Over the last two decades they have been developed in the context of the numerical treatment of dynamical systems (e.g. [DH97, DJ99, FD03, FLS10]). The basic idea is to cover the objects of interest, such as invariant sets or invariant measures, by outer approximations which are created via subdivision techniques. The set-oriented techniques have been used in several different application areas such as molecular dynamics ([DDJS99, SHD01, DGM+05]), astrodynamics ([DJL+05, DJK+05, DJ06]) or ocean dynamics ([FHR+12]).

In Section 2.1 we first start with some of the basic notions on invariant sets of dynamical systems (e.g. [ER85]). The set-oriented techniques developed in [DH97] and [DH96] will be reviewed in Sections 2.2 and 2.3, respectively. Finally, in Section 2.4 we will briefly summarize the results of [DJ99]. The aim of that section is to determine a statistical description of the dynamical behavior. This information is encoded in the underlying invariant measure, and we will use a transfer operator approach in order to approximate invariant measures on box coverings obtained by set-oriented methods.

2.1 Theoretical background

One central aspect in the theory of dynamical systems is to study the long term behavior of the system's states. To this end, the so-called global attractor is of interest, that is, an invariant set that attracts all the trajectories of the underlying dynamical system. Following the contents of [DH97], in this section we will present some basic notions on invariant sets of dynamical systems. In what follows, we consider discrete autonomous dynamical systems of the form

    x_{j+1} = φ(x_j),  j = 0, 1, ...,    (2.1)

where φ: R^n → R^n is a homeomorphism. Such systems arise, for instance, if one considers the time-T-map of an ordinary differential equation.
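Such a time-T-map can be realized numerically by integrating the ODE over one period of length T. The following sketch uses a fixed-step classical Runge-Kutta scheme; the step count and the example system are our illustrative choices, not the integrator used in GAIO.

```python
import numpy as np

def time_T_map(f, T, n_steps=1000):
    """Return the time-T map x -> phi(x) of the ODE x' = f(x),
    computed with the classical fixed-step Runge-Kutta (RK4) scheme."""
    h = T / n_steps
    def phi(x):
        x = np.asarray(x, dtype=float)
        for _ in range(n_steps):
            k1 = f(x)
            k2 = f(x + 0.5 * h * k1)
            k3 = f(x + 0.5 * h * k2)
            k4 = f(x + h * k3)
            x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        return x
    return phi

# Example: harmonic oscillator x'' = -x; its time-(2*pi) map is the identity,
# so phi is a (very simple) homeomorphism of R^2
f = lambda x: np.array([x[1], -x[0]])
phi = time_T_map(f, T=2 * np.pi)
x1 = phi(np.array([1.0, 0.0]))
```

In practice any sufficiently accurate integrator may be substituted for the RK4 loop; only the map x ↦ φ(x) itself enters the set-oriented algorithms below.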
A subset A ⊂ R^n is called φ-invariant if

    φ(A) = A.    (2.2)

We call an invariant set A an attracting set with fundamental neighborhood U if for every open set V ⊃ A there exists an N ∈ ℕ such that φ^j(U) ⊂ V for all j ≥ N. Observe that if A is invariant then the closure of A is invariant, too (e.g. [Tes12]). Thus, we restrict our attention to closed invariant sets A, i.e.

    A = ⋂_{j≥0} φ^j(U).

By definition all the points in the fundamental neighborhood U are attracted by A. Therefore, we call the open set ⋃_{j≥0} φ^{-j}(U) the basin of attraction of A. If the basin of attraction of A is equal to the whole of R^n, then A is called the global attractor.

Remark 2.1.1.
1. Observe that, in general, the global attractor may not be compact. Nonetheless, in applications it can frequently be observed that all the orbits of the underlying dynamical system eventually lie inside a bounded domain in R^n, and thus the compactness of A immediately follows.
2. The global attractor contains all the invariant sets of the dynamical system. Moreover, if the global attractor is compact, then it also contains all invariant compact sets.

In applications of the algorithms developed in [DH97, DH96] we approximate just a part of the global attractor within a specified compact set Q ⊂ R^n. To this end, for Q ⊂ R^n we define the global attractor relative to Q as follows:

Definition 2.1.2. Let Q ⊂ R^n be a compact set and assume that the dynamical system (2.1) possesses an invariant set A, i.e. φ(A) = A. Then the global attractor relative to Q is defined as

    A_Q = ⋂_{j≥0} φ^j(Q).    (2.3)

Remark 2.1.3.
1. The definition of A_Q in (2.3) implies that A_Q ⊂ Q. Moreover,

    φ^{-1}(A_Q) = ⋂_{j≥0} φ^{j−1}(Q) = φ^{-1}(Q) ∩ (⋂_{j≥0} φ^j(Q)) = φ^{-1}(Q) ∩ A_Q ⊂ A_Q,

but not necessarily φ(A_Q) ⊂ A_Q. In particular, A_Q may not be invariant. Furthermore, since Q is compact and φ is a homeomorphism, A_Q is compact as well.
2. Let A be the global attractor of φ. Then the global attractor relative to Q is a subset of A. Moreover, observe that in general A_Q ≠ A ∩ Q. Consider, for instance, a heteroclinic connection between two hyperbolic fixed points p and q, and suppose that the unstable manifold of p is the stable manifold of q. Next, consider a compact set Q which contains q but not the entire heteroclinic connection to p. On the one hand, by definition the global attractor A contains the heteroclinic connection between p and q; on the other hand, A_Q does not contain the part of the heteroclinic connection between p and q lying in Q. Hence, A_Q ≠ A ∩ Q.

2.2 The subdivision algorithm

In this section we describe the subdivision algorithm and how it can be used to approximate the relative global attractor A_Q [DH97]. The idea of the subdivision algorithm is as follows: we start with a finite family of (large) compact subsets of R^n which cover the domain in which we want to analyze the dynamical behavior. Numerically, we will realize this finite family of compact subsets by n-dimensional cubes, throughout this thesis termed boxes, defined by

    B(c, r) = { y ∈ R^n : |y_i − c_i| ≤ r_i for i = 1, ..., n },

where c, r ∈ R^n, r_i > 0 for i = 1, ..., n, are the center and the radii, respectively. Then we subdivide each of these boxes into smaller ones, e.g. by bisection with respect to the j-th coordinate, where j is varied cyclically, and keep only those boxes that contain parts of the relative global attractor. Continuing this process with the new collection of (smaller) sets generates successively better approximations of the relative global attractor. This process terminates when a predefined lower bound on the box radius is reached.

Let us be more precise. By using the subdivision algorithm we obtain a sequence B_0, B_1, ...
of finite collections of compact subsets of R^n such that the diameter

    diam(B_k) = max_{B ∈ B_k} diam(B)

converges to zero for k → ∞. Given an initial collection B_0, we inductively obtain B_k from B_{k−1} for k = 1, 2, ... in two steps.

1. Subdivision step: construct a new collection B̂_k such that

    ⋃_{B ∈ B̂_k} B = ⋃_{B ∈ B_{k−1}} B    (2.4)

and

    diam(B̂_k) = θ_k diam(B_{k−1}),    (2.5)

where 0 < θ_min ≤ θ_k ≤ θ_max < 1, e.g. θ_min = θ_max = 1/2.

2. Selection step: define the new collection B_k by

    B_k = { B ∈ B̂_k : there exists B̂ ∈ B̂_k such that φ^{-1}(B) ∩ B̂ ≠ ∅ }.    (2.6)

The subdivision step guarantees that the collections B_k consist of successively finer sets for increasing k. In fact, by construction,

    diam(B_k) ≤ θ_max^k diam(B_0) → 0 for k → ∞.

By applying the selection step we remove each subset whose preimage intersects neither itself nor any other subset in B̂_k. Therefore, this step is responsible for the fact that the unions ⋃_{B ∈ B_k} B approach the relative global attractor.

Remark 2.2.1.
1. Note that in the selection step (2.6) we have to decide whether or not the preimage of a given set B_i ∈ B̂_k has an empty intersection with a set B_j ∈ B̂_k, i.e. whether

    φ^{-1}(B_i) ∩ B_j = ∅.    (2.7)

Numerically, this is realized as follows: we discretize each box B_j by a finite set of test points x ∈ B_j and replace condition (2.7) by

    φ(x) ∉ B_i for all test points x ∈ B_j.    (2.8)

A rigorous discretization of each box B_j that reduces the numerical effort for the evaluation of (2.8) has been introduced in [Jun00]. However, in that work local Lipschitz constants for the map φ have to be known.

2. In practice the test points x ∈ B_j can be chosen according to several different strategies: in low-dimensional problems one can choose them from a regular grid within each box B_j. Alternatively, one can select the test points from the boundaries of the boxes or at random with respect to a uniform distribution.
Throughout this thesis we use the Halton sequence, a quasi-random number sequence [Hal64].

In what follows, we will show that this algorithm always converges to the relative global attractor as k goes to infinity. To this end, let us denote by Q_k the union of the compact subsets obtained after k subdivision steps, that is,

    Q_k = ⋃_{B ∈ B_k} B.

Moreover, we denote the initial covering of the set Q by Q_0, i.e. Q_0 = ⋃_{B ∈ B_0} B = Q, where B_0 is a finite collection of closed subsets. We begin by noticing that the relative global attractor is always covered by the sets Q_k.

Lemma 2.2.2. Let A_Q be the global attractor relative to the compact set Q, and let B_0 be a finite collection of closed subsets whose union is Q, i.e. Q_0 = ⋃_{B ∈ B_0} B = Q. Then the sets Q_k obtained by the subdivision algorithm contain the relative global attractor, i.e.

    A_Q ⊂ Q_k for all k ∈ ℕ.

Proof. By Definition 2.1.2 we know that A_Q ⊂ Q = Q_0. Suppose there exists an x ∈ A_Q ⊂ Q_{k−1} such that x ∉ Q_k for some k > 0. Then there is a box B ∈ B̂_k containing x which is removed in the selection step, i.e. φ^{-1}(B) ∩ Q_{k−1} = ∅. Hence φ^{-1}(x) ∉ Q_{k−1}. But this contradicts the fact that φ^{-1}(A_Q) ⊂ A_Q ⊂ Q_{k−1} (cf. item 1 of Remark 2.1.3).

In the next step, we show that every backward invariant subset of Q must be contained in the relative global attractor.

Lemma 2.2.3. Let B ⊂ Q be a subset such that

    φ^{-1}(B) ⊂ B.    (2.9)

Then B is contained in the relative global attractor A_Q.

Proof. First we show that B is contained in the global attractor A. By (2.9) we know that B ⊂ φ^j(B) for all j ≥ 0. Therefore,

    B ⊂ ⋂_{j≥0} φ^j(B) ⊂ ⋂_{j≥0} φ^j(U) = A,

where A is the global attractor and U its fundamental neighborhood. Furthermore, from (2.9) it follows that φ^j(B) ⊂ φ^{j+1}(B) for all j ≥ 0. Hence

    B = ⋂_{j≥0} φ^j(B) ⊂ ⋂_{j≥0} φ^j(Q) = A_Q.
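The subdivision and selection steps reviewed above can be sketched in a few lines. The following fragment is our own minimal illustration, not the GAIO implementation: boxes B(c, r) are stored as center-radius pairs, and the selection step is realized with uniformly random test points instead of Halton points.

```python
import numpy as np

def subdivide(boxes, axis):
    """Subdivision step: bisect every box (c, r) along one coordinate axis."""
    out = []
    for c, r in boxes:
        r2 = np.array(r, float); r2[axis] /= 2.0
        for s in (-1.0, 1.0):
            c2 = np.array(c, float); c2[axis] += s * r2[axis]
            out.append((c2, r2))
    return out

def select(boxes, phi, n_test=100, rng=None):
    """Selection step (2.8): keep those boxes that are hit by the
    phi-image of at least one test point sampled in the collection."""
    rng = rng or np.random.default_rng(0)
    hit = set()
    for c, r in boxes:
        pts = c + (2 * rng.random((n_test, len(c))) - 1) * r
        for y in map(phi, pts):
            for i, (ci, ri) in enumerate(boxes):
                if np.all(np.abs(y - ci) <= ri):
                    hit.add(i)
                    break
    return [boxes[i] for i in sorted(hit)]

# Example: phi(x) = x/2 has the global attractor {0}; starting from
# Q = [-1, 1]^2, only boxes near the origin survive the iteration
phi = lambda x: 0.5 * x
boxes = [(np.zeros(2), np.ones(2))]
for k in range(6):
    boxes = subdivide(boxes, axis=k % 2)
    boxes = select(boxes, phi)
```

After six steps all surviving boxes are contained in a small neighborhood of the origin, in accordance with Proposition 2.2.5 below.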
Due to the construction of the subdivision method, the collections of compact subsets denoted by Q_k define a nested sequence of compact sets, that is, Q_{k+1} ⊂ Q_k. Therefore, for each m,

    Q_m = ⋂_{k=1}^{m} Q_k,    (2.10)

and we may view

    Q_∞ = ⋂_{k=1}^{∞} Q_k    (2.11)

as the limit of the Q_k's. We will show that the set Q_∞ is backward invariant and hence must be contained in A_Q by Lemma 2.2.3.

Lemma 2.2.4. The set Q_∞ is a nonempty backward invariant set, i.e. φ^{-1}(Q_∞) ⊂ Q_∞.

Proof. By Lemma 2.2.2 the relative global attractor A_Q must be contained in the limit set Q_∞. This yields, in particular, that Q_∞ is nonempty. Hence, it remains to show that Q_∞ is backward invariant. To this end, we suppose that there exists a point y ∈ Q_∞ such that φ^{-1}(y) ∉ Q_∞. Since Q_∞ is compact, this implies that

    d(φ^{-1}(y), Q_∞) > δ > 0,

where d denotes the distance between φ^{-1}(y) and Q_∞. Hence there is an integer N > 0 such that

    d(φ^{-1}(y), Q_j) > δ/2 for all j > N.

For each j we choose a set B_j(y) ∈ B_j containing y. Then by continuity there exists a k > N such that

    φ^{-1}(B_k(y)) ∩ Q_k = ∅.

However, this is impossible by the construction of the algorithm (cf. (2.6)), and we have obtained a contradiction. Thus, Q_∞ is backward invariant.

Now we are in a position to prove the following convergence result.

Proposition 2.2.5. Let A_Q be the global attractor relative to the compact set Q, and let B_0 be a finite collection of closed subsets with Q_0 = ⋃_{B ∈ B_0} B = Q. Then

    A_Q = Q_∞.

Proof. On the one hand, by Lemma 2.2.2, A_Q is contained in each Q_k, and consequently it is also contained in Q_∞. On the other hand, by Lemma 2.2.4, Q_∞ is backward invariant, and thus Lemma 2.2.3 implies that it must be contained in the relative global attractor A_Q. Thus Q_∞ ⊂ A_Q ⊂ Q_∞, which yields the desired result.

Remark 2.2.6.
Observe that we can reformulate the main convergence result of [DH97] as

    lim_{k→∞} h(A_Q, Q_k) = 0,    (2.12)

where h(B, C) denotes the Hausdorff distance between two compact subsets B, C ⊂ R^n.

We summarize the subdivision method in Algorithm 2.1 and conclude this section with an example.

Example 2.2.7. Let us consider a simple chaotic flow [Spr94]

    ẋ_1 = −x_3,
    ẋ_2 = x_1 − x_2,    (2.13)
    ẋ_3 = A x_1 + x_2^2 + B x_3,

where A = 3.1 and B = 0.5 are given parameters. For this parameter regime the system possesses the (hyperbolic) equilibria O_1 = (0, 0, 0) and O_2 = (−3.1, −3.1, 0), each of them having a one-dimensional stable manifold and a two-dimensional unstable manifold. The map φ: R^3 → R^3 is defined by the time-T-map of (2.13) with T = 15. In Figure 2.1 (a)-(d) we show successively finer box coverings of A_Q for Q = [−8, 8] × [−8, 8] × [−6, 10].

Algorithm 2.1 Subdivision algorithm
Initialization: Choose an initial box Q ⊂ R^n and start the subdivision algorithm with B_0 = {Q}.
1. Subdivision step: construct a new collection B̂_k such that
    ⋃_{B ∈ B̂_k} B = ⋃_{B ∈ B_{k−1}} B
and diam(B̂_k) = θ_k diam(B_{k−1}), where 0 < θ_min ≤ θ_k ≤ θ_max < 1.
2. Selection step: define the new collection B_k by
    B_k = { B ∈ B̂_k : there exists B̂ ∈ B̂_k such that φ^{-1}(B) ∩ B̂ ≠ ∅ }.
3. Repeat steps (1) and (2) until a prescribed size ε of the diameter relative to the initial box Q is reached, that is, stop when diam(B_k) < ε diam(Q).

Figure 2.1: (a)-(c) Successively finer coverings of the relative global attractor A_Q of (2.13) for k = 9, 15, 21. (d) Transparent boxes depicting the internal structure of A_Q after k = 27 subdivision steps.

The numerical effort of the subdivision algorithm depends on a combination of both the dimension of the underlying space and the dimension of the global attractor. Since the number of boxes increases exponentially, we are restricted to dynamical
systems that possess low-dimensional attractors. If we are only interested in parts of the attractor, e.g. the unstable manifold of a given hyperbolic fixed point, it is more efficient, in terms of computational time, to compute this unstable manifold via a (set-oriented) continuation method. To this end, we review the classical continuation method introduced in [DH96] in the following section.

2.3 The continuation method

Based on [DH96], we will describe how to combine the subdivision algorithm for the computation of relative global attractors with a set-oriented continuation technique. This will allow us to approximate unstable manifolds of the discrete dynamical system (2.1). But first, we note that the subdivision algorithm described in Section 2.2 can also be used to approximate global unstable manifolds. This claim is verified by showing that unstable manifolds are always contained in the global attractor A if it is compact. To this end, we begin with the following definition (cf. e.g. [ER85, PDM12]).

Definition 2.3.1. Let φ: R^n → R^n be a diffeomorphism and p a hyperbolic fixed point of φ. Then the unstable manifold of p is defined by

    W^u(p) = { x ∈ R^n : φ^{-j}(x) → p for j → ∞ }.    (2.14)

Moreover, the local part of W^u(p), which contains the fixed point p, is called the local unstable manifold of p. It is defined as follows:

    W^u_loc(p) = { x ∈ R^n : ‖φ^{-j}(x) − p‖ < ε for all j ∈ ℕ }.    (2.15)

The following theorem holds (cf. [ER85, DH97]):

Theorem 2.3.2. Let A be a compact attracting set of φ and p ∈ A a hyperbolic fixed point. Then W^u(p) ⊂ A, i.e., the unstable manifold of p is contained in A.

Proof. Let p ∈ A be a hyperbolic fixed point and suppose that y ∈ W^u(p). Let V be an open neighborhood of A and U the fundamental neighborhood of A. Since A is a compact attracting set, there exists a k ∈ ℕ such that φ^{-j}(y) ∈ U for all j ≥ k.
On the other hand, there is an l ∈ ℕ such that φ^j(U) ⊂ V for all j ≥ l. Hence, by taking j ≥ max(k, l), we obtain y = φ^j(φ^{-j}(y)) ∈ V. Since V was arbitrary, this completes the proof.

As a consequence, by applying the subdivision algorithm to a (small) neighborhood of a hyperbolic fixed point we can compute a covering of the corresponding local unstable manifold up to a given accuracy. This will be the initialization step of the continuation method discussed in this section, which was introduced in [DH96]. Then we proceed with the continuation step, where we extend the covering of the local unstable manifold successively to coverings of larger parts of the (global) unstable manifold.

More precisely, we define a partition P of Q to be a finite family of subsets of Q such that

    ⋃_{B ∈ P} B = Q and int B ∩ int B' = ∅ for all B, B' ∈ P with B ≠ B'.

Moreover, we denote by P(x) ∈ P the element of P containing x ∈ Q. We consider a nested sequence P_s, s ∈ ℕ, of successively finer partitions of Q, requiring that for all B ∈ P_s there exist B_1, ..., B_m ∈ P_{s+1} such that B = ⋃_{i=1}^m B_i and diam(B_i) ≤ θ diam(B) for some 0 < θ < 1. A set B ∈ P_s is said to be of level s.

Now assume that C_B = P_s(p) is a neighborhood of the fixed point p such that the attractor relative to C_B satisfies A_{C_B} = W^u_loc(p) ∩ C_B for some ε > 0 (cf. (2.15)). Applying the subdivision algorithm with k subdivision steps to B_0 = {C_B}, we obtain a covering B_k ⊂ P_{s+k} of the local unstable manifold W^u_loc(p), that is,

    A_{C_B} = W^u_loc(p) ∩ C_B ⊂ ⋃_{B ∈ B_k} B.    (2.16)

Here we assume that the subdivision algorithm applied to B_0 = {C_B} constructs box coverings that are elements of the partitions P_n, n > s. By Proposition 2.2.5 this box covering converges to W^u_loc(p) ∩ C_B for k → ∞.

Now the continuation algorithm can be described as follows. For fixed k ∈ ℕ we define a sequence C_0^{(k)}, C_1^{(k)}, ... of subsets C_j^{(k)} ⊂ P_{s+k} by

1.
Initialization step: set C_0^{(k)} = B_k, where C_0^{(k)} is a box covering of the local unstable manifold A_{C_B} = W^u_loc(p) ∩ C_B obtained by the subdivision method (cf. Algorithm 2.1).

2. Continuation step: for j = 0, 1, 2, ... define

    C_{j+1}^{(k)} = { B ∈ P_{s+k} : there exists B' ∈ C_j^{(k)} such that B ∩ φ(B') ≠ ∅ }.    (2.17)

Remark 2.3.3.
1. Observe that the unions

    C_j^{(k)} := ⋃_{B ∈ C_j^{(k)}} B

form a nested sequence in k, i.e. C_j^{(0)} ⊃ C_j^{(1)} ⊃ ... ⊃ C_j^{(k)} ⊃ .... In fact, they also form a nested sequence in j, i.e. C_0^{(k)} ⊂ C_1^{(k)} ⊂ ... ⊂ C_j^{(k)} ⊂ ....

2. In the initialization step we will often choose k = 0. In most applications this is sufficient in order to get a good approximation of the unstable manifold.

3. Similar to the numerical realization of the selection step (cf. item 1 of Remark 2.2.1), we replace the condition B ∩ φ(B') ≠ ∅ in the continuation step (2.17) by checking whether φ(x) ∈ B for at least one test point x ∈ B'.

It is easy to see that the algorithm, as constructed, generates an approximation of the unstable manifold W^u(p). In particular, we expect that the larger s and k are chosen, the better the approximation of W^u(p) will be. In Figure 2.2 we show some steps of the continuation method described above. However, since we restrict our attention to a compact subset Q ⊂ R^n, it can only be guaranteed that the algorithm generates an approximation of a certain part of W^u(p). In what follows, we will define the subsets of W^u(p) which are indeed approximated by the continuation method. To this end, we set

    W_0 = W^u_loc(p) ∩ C_B

and define inductively for j = 0, 1, 2, ...

    W_{j+1} = φ(W_j) ∩ Q.

With this notation we obtain the following convergence result [DH96].

Proposition 2.3.4.
1. The sets C_j^{(k)} are coverings of W_j for all j, k = 0, 1, ....
2. For fixed j, C_j^{(k)} converges to W_j in Hausdorff distance as the number k of subdivision steps in the initialization step goes to infinity.

Proof.
The first statement follows directly from (2.16), i.e. the set B_k obtained by the subdivision algorithm is always a covering of W_0 = W^u_loc(p) ∩ C_B. In order to prove the second statement, we first observe that by Proposition 2.2.5, C_0^{(k)} converges to the relative global attractor A_{C_B} = W^u_loc(p) ∩ C_B = W_0 for k → ∞. Furthermore, since j is fixed, a continuity argument shows that the sets C_j^{(k)} converge to W_j for k → ∞, i.e.

    C_j^{(∞)} = ⋂_{k≥0} C_j^{(k)} = W_j.

Figure 2.2: Illustration of the continuation method for k = 0. (a) Initial box Q ⊃ A_Q. (b) Let C_B ∈ P_s be the box containing p. (c) Within C_B choose a finite number of test points x ∈ R^n and mark those boxes that are hit by φ(x). (d)-(f) Repeat step (c) with those boxes that were marked in the last step until no additional boxes are marked.

Remark 2.3.5.
1. Observe that in general the continuation method will not lead to an approximation of the entire set W^u(p) ∩ Q. If Q is not sufficiently big, the unstable manifold of the hyperbolic fixed point p may 'leave' Q but may as well 'wind back' into it. In this case we will not cover all of W^u(p) ∩ Q (cf. Figure 2.3). To get an approximation of the entire set, Q has to be chosen sufficiently large, i.e. W^u(p) ⊂ Q.
2. Due to the realization of our set-oriented continuation method it may happen that we also obtain a covering of the unstable manifold of another hyperbolic fixed point q ∈ Q. If this is the case, the continuation method will proceed along these unstable directions of the fixed point q as well.

Figure 2.3: Illustration of the problems discussed in item 1 and item 2 of Remark 2.3.5.
(a) The dashed line will not be covered by the continuation method, and thus we will not approximate the entire set W^u(p) ∩ Q. (b) Schematic box covering obtained by the continuation method, where in this particular case we also obtain a covering of W^u(q) ∩ Q.

We summarize the continuation method discussed in this section in the following algorithm.

Algorithm 2.2 The continuation method
Initialization: Choose an initial box Q ⊂ R^n and a partition P_s of Q. Mark the box C_B ∈ P_s with p ∈ C_B.
1. Apply the subdivision algorithm (cf. Algorithm 2.1) with k subdivision steps to B_0 = {C_B} to obtain a covering B_k ⊂ P_{s+k} of the local unstable manifold.
2. Set C_0^{(k)} = B_k.
3. Continuation step: for j = 0, 1, 2, ... define
    C_{j+1}^{(k)} = { B ∈ P_{s+k} : there exists B' ∈ C_j^{(k)} such that B ∩ φ(B') ≠ ∅ }.

2.4 Computation of invariant measures

In this section we will review the contents of [DJ99]. We are interested in the long term behavior of the dynamical system

    x_{i+1} = φ(x_i),  i = 0, 1, ...,

where φ: R^n → R^n is a continuous map (and not a diffeomorphism as in [DJ99]), in particular when this system exhibits complicated dynamics. To this end, we will determine a statistical description of the dynamical behavior. This information is encoded in the underlying invariant measure, and we will use a transfer operator approach in order to approximate invariant measures on the relative global attractor A_Q. The transfer operator, also called the Perron-Frobenius operator, is a classical mathematical tool for the numerical analysis of complicated dynamical behavior (e.g. [DJ99, SHD01, FP09, Kol10]); it was recently also used for the analysis of dynamical systems with uncertain parameters [DKZ17] and in the context of stochastic differential equations [FK17].

In Section 2.4.1 we will first introduce the reader to the notion of stochastic transition functions, which will allow us to define invariant measures in the stochastic context.
Then, in Sections 2.4.2 and 2.4.3, we will introduce the transfer operator and show how to numerically approximate invariant measures. Finally, in Section 2.4.4 we state a convergence result and conclude with an example in which we show the invariant measure of the Lorenz system.

2.4.1 Stochastic transition functions and probability measures

In this section we will follow closely the related contents in [DJ99, Kol10]. Let Q ⊂ R^n be compact, and denote by B the Borel σ-algebra on Q and by m the Lebesgue measure on B.

Definition 2.4.1. Let M be the space of probability measures on B. A measure μ ∈ M is called invariant w.r.t. φ if μ(B) = μ(φ^{-1}(B)) for all B ∈ B. We call a probability measure μ ergodic (w.r.t. φ) if for all invariant sets A (cf. (2.2)) either μ(A) = 0 or μ(A) = 1 is satisfied.

Ergodic measures play an important role for the long term behavior of the system:

Theorem 2.4.2 ([Bir31]). Let φ: Q → Q be a measurable map on the measure space (Q, B, μ) and μ an ergodic measure. Then, for any observable f ∈ L^1(Q, B, μ), the average of f along an orbit of φ is equal almost everywhere to the average of f w.r.t. μ, i.e.

    lim_{n→∞} (1/n) Σ_{k=0}^{n−1} f(φ^k(x)) = ∫_Q f dμ    μ-a.e.

Example 2.4.3. Let x ∈ Q and B ∈ B. We would like to obtain the relative frequency with which an orbit {φ^k(x)}_{k≥0} visits B. To this end, note that

    φ^k(x) ∈ B  ⟺  χ_B(φ^k(x)) = 1,

where χ_B denotes the characteristic function of B. Thus, we obtain the relative frequency of points of the orbit {x, φ(x), ..., φ^{N−1}(x)} that visit B as

    (1/N) Σ_{k=0}^{N−1} χ_B(φ^k(x)).

For an ergodic measure μ we finally get

    lim_{N→∞} (1/N) Σ_{k=0}^{N−1} χ_B(φ^k(x)) = ∫_Q χ_B dμ = μ(B)    μ-a.e.,

i.e. the asymptotic relative frequency is given by μ(B).

In the remainder of this section we turn our attention to the more general stochastic framework.

Definition 2.4.4. A function p: Q × B → [0, 1] is a stochastic transition function if

1.
p(x, ·) is a probability measure for every x ∈ Q,

2. p(·, A) is Lebesgue-measurable for every A ∈ B.

Intuitively this means that if we are in a state x, the probability of being in the set A at the next instance is given by the stochastic transition function p(x, A). Moreover, setting p^{(1)}(x, A) = p(x, A), the i-step stochastic transition function for i = 1, 2, ... is defined by

    p^{(i+1)}(x, A) = ∫_Q p^{(i)}(y, A) p(x, dy),  i = 1, 2, ....    (2.18)

The definition of a stochastic transition function allows us to define invariant measures in the stochastic setting.

Definition 2.4.5. Let p be a stochastic transition function. If μ ∈ M satisfies

    μ(A) = ∫_Q p(x, A) dμ(x)

for all A ∈ B, then μ is an invariant measure of p.

Remark 2.4.6.
1. If μ is an invariant measure of p, then it follows that

    μ(A) = ∫_Q p^{(i)}(x, A) dμ(x)

for all i = 1, 2, ....

2. Let us denote by δ_y the Dirac measure supported on the point y ∈ Q. Then p(x, A) = δ_{h(x)}(A) is a stochastic transition function for every m-measurable function h. In particular, by choosing h = φ we recover the deterministic situation within this stochastic setup. More precisely, suppose that p(x, ·) = δ_{φ(x)} and let μ be an invariant measure of p. Then, for A ∈ B, we get

    μ(A) = ∫ p(x, A) dμ(x) = ∫ δ_{φ(x)}(A) dμ(x) = ∫ χ_A(φ(x)) dμ(x) = μ(φ^{-1}(A)),

where we denote by χ_A the characteristic function of A. Hence, μ is an invariant measure for the map φ in the classical deterministic sense (cf. [Pol93]).

In what follows we assume that for every x ∈ Q the probability measure p(x, ·) is absolutely continuous with respect to the Lebesgue measure m. Thus, we may write p(x, ·) as

    p(x, A) = ∫_A k(x, y) dm(y) for all A ∈ B,

with an appropriate transition density function k: Q × Q → ℝ.
Obviously,

    k(x, ·) ∈ L^1(Q, m) and k(x, y) ≥ 0.    (2.19)

In this case we also call the stochastic transition function p absolutely continuous.

Remark 2.4.7. Observe that

    ∫_Q k(x, y) dm(y) = p(x, Q) = 1 for all x ∈ Q.    (2.20)

Similarly to (2.18), we set k^{(1)}(x, y) = k(x, y) and define the i-step transition density function as

    k^{(i+1)}(x, y) = ∫_Q k(x, ξ) k^{(i)}(ξ, y) dm(ξ),  i = 1, 2, ....    (2.21)

For A ∈ B this yields

    p^{(i)}(x, A) = ∫_A k^{(i)}(x, y) dm(y),

i.e. the i-step transition density function k^{(i)} is the transition density function of p^{(i)} (cf. (2.18)).

The following ergodic theorem for transition densities can be found in [Doo60].

Theorem 2.4.8. Let p: Q × B → [0, 1] be an absolutely continuous stochastic transition function with density function k: Q × Q → ℝ. Suppose that k(x, y) ≤ M for some M > 0 and all x, y ∈ Q. Then Q can be decomposed into finitely many disjoint invariant sets B_1, B_2, ..., B_l, also called the ergodic sets of p, and a transient set F = Q \ ⋃_{j=1}^l B_j, such that for each B_j there is a unique probability measure μ_j ∈ M with μ_j(B_j) = 1 and

    lim_{N→∞} (1/N) Σ_{i=1}^N p^{(i)}(x, A) = μ_j(A) for all A ∈ B and all x ∈ B_j.    (2.22)

Furthermore, the limit q(x, A) on the left hand side of (2.22) exists uniformly in x and defines, for every fixed x ∈ Q, an invariant measure. Finally, every invariant measure of p is a convex combination of the μ_j's.

2.4.2 The transfer operator

Having introduced the stochastic setting in the previous section, we are now in a position to describe the transfer operator in the stochastic context and, in particular, how to approximate it for numerical use. To this end, we start with an introduction of this operator and review certain spectral properties. In what follows, the related contents can also be found in [DJ99].

Definition 2.4.9. Let p be a stochastic transition function.
Then the transfer operator $P\colon \mathcal{M}_{\mathbb{C}} \to \mathcal{M}_{\mathbb{C}}$ is defined by
$$P\mu(A) = \int p(x,A)\, d\mu(x),$$
where $\mathcal{M}_{\mathbb{C}}$ is the space of bounded complex-valued measures on $\mathcal{B}$. Moreover, if $p$ is absolutely continuous with density function $k$ we can define $P$ on $L^1$, i.e.
$$Pg(y) = \int k(x,y)\, g(x)\, dm(x) \quad \text{for all } g \in L^1. \tag{2.23}$$

Remark 2.4.10. Note that in the case where $p$ is absolutely continuous we have $P\colon L^1 \to L^1$ since for each $g \in L^1$
$$\int Pg(y)\, dm(y) = \iint k(x,y)\, g(x)\, dm(x)\, dm(y) = \int g(x) \int k(x,y)\, dm(y)\, dm(x) = \int g(x)\, dm(x) < \infty$$
(cf. Remark 2.4.7).

A probability measure $\mu$ which does not change under the dynamics of the transfer operator $P$ is called an invariant measure. Thus, $\mu$ is a fixed point of $P$, i.e.
$$P\mu = \mu. \tag{2.24}$$
In other words, invariant measures correspond to eigenmeasures of $P$ for the eigenvalue one.

Remark 2.4.11. Observe that in the deterministic situation where $p(x,\cdot) = \delta_{\varphi(x)}$ we obtain
$$P\mu(A) = \int p(x,A)\, d\mu(x) = \mu(\varphi^{-1}(A))$$
(cf. item 2 of Remark 2.4.6 or [LM13]). In the remainder of this section we review a numerical method for the approximation of such measures [DJ99].

2.4.3 Numerical approximation of invariant measures

The subdivision and the continuation method yield a box covering of the invariant set of interest. In this section we explain how to approximate an invariant measure on this invariant set numerically. The invariant measure $\mu$ is a fixed point of the transfer operator $P\colon \mathcal{M}_{\mathbb{C}} \to \mathcal{M}_{\mathbb{C}}$ (cf. (2.24)). Thus, we first compute a discretized transfer operator $P_N$ and solve the corresponding eigenvalue problem in order to obtain an approximation of the invariant measure. Throughout this section, we closely follow the contents of [DHJR97, DJ99, Kol10]. Let us denote by $m$ the Lebesgue measure and by $\mathcal{B} = \{B_1,\dots,B_N\}$ a disjoint partition of $Q$.
The probability measures on the sets $B_j \in \mathcal{B}$ are defined as follows:
$$\mu_{B_j}(A) = \frac{1}{m(B_j)} \int_A \chi_{B_j}\, dm = \frac{m(A \cap B_j)}{m(B_j)}, \quad j = 1,\dots,N. \tag{2.25}$$

Remark 2.4.12. Although this approach works with a covering of the whole set $Q \subset \mathbb{R}^n$, we will restrict our attention to global attractors relative to the set $Q$ or to unstable manifolds which have been generated by Algorithm 2.1 or Algorithm 2.2, respectively. Therefore, in what follows $\mathcal{B}$ is the finite collection of compact subsets defined by the selection step of the subdivision algorithm (2.6), or the union of all boxes obtained by the continuation steps (2.17). Obviously, the number of boxes $N$ in our covering $\mathcal{B}$ of the attractor is in general much smaller than that of a covering of the whole set $Q$ of the same fineness. Hence, this results in a much smaller eigenvalue problem.

The transfer operator $P$ acts on the probability measures (2.25) as follows:
$$(P\mu_{B_j})(A) = \int p(x,A)\, d\mu_{B_j} = \frac{1}{m(B_j)} \int p(x,A)\, \chi_{B_j}\, dm = \frac{1}{m(B_j)} \int_{B_j} p(x,A)\, dm.$$
This allows us to approximate $P$ by the stochastic matrix $P_N = (p_{ij})$ on the box covering $\mathcal{B} = \{B_1,\dots,B_N\}$, where
$$p_{ij} = \frac{1}{m(B_j)} \int_{B_j} p(x,B_i)\, dm, \quad i,j = 1,\dots,N. \tag{2.26}$$
In the next step we approximate the invariant measure corresponding to the stochastic transition function $p$ by the stationary distribution of the Markov chain given by $P_N$. Concretely, we approximate probability measures $\nu \in \mathcal{M}_{\mathbb{C}}$ by
$$\nu \approx \sum_{l=1}^{N} \alpha_l\, \mu_{B_l},$$
where $\mu_{B_l}$ is defined according to (2.25). Let us denote by $\mu$ the invariant measure which we want to approximate. Since the invariant measure fulfills (2.24), we also require that
$$P\Big(\sum_{j=1}^{N} \alpha_j\, \mu_{B_j}\Big)(B_i) = \sum_{j=1}^{N} \alpha_j\, \mu_{B_j}(B_i) = \alpha_i, \quad i = 1,2,\dots,N.$$
Here, the construction of the box covering yields $\mu_{B_j}(B_i) = \delta_{ij}$, where we denote by $\delta_{ij}$ the Kronecker delta.
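The discretization (2.26) can be sketched in the deterministic case $p(x,\cdot) = \delta_{\varphi(x)}$ by estimating each column integral with uniformly sampled test points. The logistic map $\varphi(x) = 4x(1-x)$ on $[0,1]$ is a hypothetical illustrative example, not taken from the thesis; its invariant density is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ulam-type discretization (2.26) for the deterministic map
# phi(x) = 4 x (1 - x) on Q = [0, 1] (hypothetical example; its invariant
# density is 1 / (pi * sqrt(x (1 - x)))).
phi = lambda x: 4.0 * x * (1.0 - x)

N, M = 64, 2000                       # number of boxes, test points per box
edges = np.linspace(0.0, 1.0, N + 1)  # uniform partition {B_1, ..., B_N}

P = np.zeros((N, N))                  # P[i, j] approximates p_ij
for j in range(N):
    x = rng.uniform(edges[j], edges[j + 1], size=M)  # test points in B_j
    # box index of each image phi(x_k); clip guards the boundary cases
    hits = np.clip(np.searchsorted(edges, phi(x)) - 1, 0, N - 1)
    for i in hits:
        P[i, j] += 1.0 / M

# invariant measure: eigenvector of P_N for eigenvalue one, here obtained
# by power iteration on the Markov chain
alpha = np.full(N, 1.0 / N)
for _ in range(2000):
    alpha = P @ alpha
    alpha /= alpha.sum()

assert np.allclose(P.sum(axis=0), 1.0)           # columns are probabilities
assert np.linalg.norm(P @ alpha - alpha) < 1e-8  # fixed point P_N alpha = alpha
```

The resulting vector `alpha` assigns to each box the weight of the approximate invariant measure, exactly as required by the eigenvalue problem derived above.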
That is, for an approximation of an invariant measure $\mu$ we have to compute the eigenvector $\alpha^N \in \mathbb{R}^N_{\ge 0}$ for the eigenvalue one of the matrix $P_N$, i.e. $P_N \alpha^N = \alpha^N$ (see also (2.26)).

Remark 2.4.13.
1. In the deterministic case, that is $p(x,\cdot) = \delta_{\varphi(x)}$, the transition probabilities are given by
$$p_{ij} = \frac{m(\varphi^{-1}(B_i) \cap B_j)}{m(B_j)}.$$
2. Numerically, for the computation of (2.26) we use a Monte Carlo approach as described in [Hun93]. More precisely, for each $1 \le j \le N$, we select a finite set of test points $x_1,\dots,x_M \in B_j$ at random according to a uniform distribution. This yields
$$p_{ij} = \frac{1}{m(B_j)} \int_{B_j} p(x,B_i)\, dm \approx \frac{1}{M} \sum_{k=1}^{M} \chi_{B_i}(\varphi(x_k)).$$
Thus, we only have to check whether or not the points $\varphi(x_k)$, $k = 1,\dots,M$, are contained in $B_i$.
3. The box covering $\{B_1,\dots,B_N\}$ obtained with the subdivision scheme and the dynamics induced by the stochastic transition function $p$ yield a directed graph as illustrated in Figure 2.4. The dynamics on this graph with the transition probabilities in (2.26) can be viewed as an approximation of the transfer operator $P$.

We summarize the approximation of an invariant measure corresponding to the stochastic transition function $p$ supported on $A_Q$ in Algorithm 2.3.

Algorithm 2.3 Computation of invariant measures
1. Approximate the relative global attractor $A_Q$ by Algorithm 2.1 or the unstable manifold by Algorithm 2.2 to obtain a box covering $\{B_1,\dots,B_N\}$.
2. Use $\{B_1,\dots,B_N\}$ to compute the discretized transfer operator $P_N$ by (2.26).
3. Compute the eigenvector $\alpha^N \in \mathbb{R}^N$ corresponding to the eigenvalue 1 of $P_N$ to obtain an approximation of an invariant measure $\mu$ on $A_Q$ (cf. (2.24)).

Figure 2.4: Schematic illustration of the graph induced by the transition function $p$ on the box covering. Left: box covering $\{B_1,\dots,B_8\}$ and mapping of points from box $B_i$ to box $B_j$. Right: resulting directed graph with vertices $\{v_1,\dots,v_8\}$ and edges $(v_i,v_j)$.

2.4.4 Convergence result

Finally, in this section we review the theoretical framework from [DJ99] to show a convergence result for the numerical approach discussed above. In order to apply classical convergence theory for compact operators, we have to consider small random perturbations (cf. [Kif86]) of $\varphi(x)$ so that the transfer operator becomes compact as an operator on $L^2$. Recall that the purpose of this section is to approximate the transfer operator $P$ of a deterministic dynamical system represented by a continuous map $\varphi$. Hence, the stochastic system that we consider should be a small perturbation of the underlying deterministic system. To this end, let $B = B_0(1)$ be the open unit ball in $\mathbb{R}^n$. For $\varepsilon > 0$ we define the perturbed transition density function
$$k_\varepsilon(x,y) = \frac{1}{\varepsilon^n m(B)}\, \chi_B\Big(\frac{y-x}{\varepsilon}\Big), \quad x,y \in \mathbb{R}^n. \tag{2.27}$$
Now we can define the stochastic transition function $p_\varepsilon$ in this context by
$$p_\varepsilon(x,A) = \int_A k_\varepsilon(\varphi(x),y)\, dm(y).$$

Remark 2.4.14. Observe that
$$p_\varepsilon(x,A) = \int_A k_\varepsilon(\varphi(x),y)\, dm(y) \to \delta_{\varphi(x)}(A)$$
for $\varepsilon \to 0$ uniformly in $x$ in a weak$^*$-sense. Hence, we recover the deterministic situation within this stochastic setup. Moreover, the Markov process defined by any initial probability measure $\mu$ and the stochastic transition function $p_\varepsilon$ is a small random perturbation of the deterministic system $\varphi$ in the sense of [Kif86].

Due to the small random perturbation, the measure $p_\varepsilon(x,\cdot)$ is absolutely continuous for $\varepsilon > 0$, and the corresponding transfer operator $P_\varepsilon\colon L^1 \to L^1$ can be written as
$$(P_\varepsilon g)(y) = \int k_\varepsilon(\varphi(x),y)\, g(x)\, dm(x) \quad \text{for all } g \in L^1. \tag{2.28}$$
In order to apply classical convergence theory for compact operators, we review the following proposition (cf. [Yos80]):

Proposition 2.4.15. Let $k(x,y)$ be a real- or complex-valued $\mathcal{B}$-measurable function on a measure space $(Q,\mathcal{B},m)$ such that
$$\iint |k(x,y)|^2\, dm(x)\, dm(y) < \infty.$$
Then the integral operator $P\colon L^2 \to L^2$ defined by the kernel $k(x,y)$, i.e.
$$(Pg)(y) = \int k(x,y)\, g(x)\, dm(x), \quad g \in L^2 = L^2(Q,\mathcal{B},m),$$
is compact.

For the proof, the interested reader is referred to [Yos80]. To show that the transfer operator is compact, we have to verify that for $\varepsilon > 0$
$$\iint |k_\varepsilon(\varphi(x),y)|^2\, dm(x)\, dm(y) < \infty. \tag{2.29}$$
By (2.27), it follows directly that
$$\iint |k_\varepsilon(\varphi(x),y)|^2\, dm(x)\, dm(y) \le m(Q)^2 \Big(\frac{1}{\varepsilon^n m(B)}\Big)^2 < \infty.$$
Therefore, by Proposition 2.4.15 the transfer operator in (2.28), viewed as an operator on $L^2$, i.e. $P_\varepsilon\colon L^2 \to L^2$, is compact. Throughout the rest of this section we suppose that (2.29) is fulfilled.

Let $V_N$, $N \ge 1$, be a sequence of $N$-dimensional subspaces of $L^2$ and denote by $Q_N\colon L^2 \to V_N$ the corresponding projection onto the subspace $V_N$, such that
$$\lim_{N\to\infty} \|Q_N \psi - \psi\| = 0 \quad \text{for all } \psi \in L^2.$$
Then
$$\|P_N - P\| \to 0 \quad \text{for } N \to \infty,$$
where $P_N$ denotes the projection of $P$ onto the $N$-dimensional subspaces of $L^2$, i.e. $P_N = Q_N P$.

Denote by $\sigma(P)$ and $\rho(P)$ the spectrum and resolvent set of $P$, respectively, and by $R_z = (zI - P)^{-1}$, $z \in \rho(P)$, the resolvent operator. Now let $\beta \ne 0$, $\beta \in \sigma(P)$, be an eigenvalue of $P$ and let $E$ be a projection onto the corresponding generalized eigenspace. More precisely, let $\Gamma \subset \mathbb{C}$ be a circle in $\rho(P)$ with center $\beta$ such that no other point of $\sigma(P)$ lies inside $\Gamma$. Then $E$ is defined by
$$E = E(\beta) = \frac{1}{2\pi i} \int_\Gamma R_z(P)\, dz.$$
The following convergence result yields an approximation result for invariant measures in the randomized situation (see Theorem 3.5 in [DJ99] and also [Osb75]).

Theorem 2.4.16. Let $\beta_N$ be an eigenvalue of $P_N$ such that $\beta_N \to \beta$ for $N \to \infty$, and let $\gamma_N$ be a corresponding eigenvector of unit length. Then there is a vector $h_N \in R(E)$ and a constant $C > 0$ such that
$$(\beta I - P) h_N = 0 \quad \text{and} \quad \|h_N - \gamma_N\|_2 \le C\, \|(P - P_N)|_{R(E)}\|_2.$$
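The perturbed transition function defined via (2.27) has a direct sampling interpretation: one step of the perturbed chain is $y = \varphi(x) + \varepsilon\,\xi$ with $\xi$ uniform in the unit ball $B$. The following sketch checks this convergence behavior for a hypothetical one-dimensional map (in one dimension $B = (-1,1)$); the map itself is only a stand-in for the deterministic system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sampling form of the small random perturbation (2.27): one step of the
# perturbed Markov chain is y = phi(x) + eps * xi, xi uniform in the unit
# ball. The map phi below is a hypothetical 1d stand-in.
phi = lambda x: 0.5 * x + 0.1

def step(x, eps):
    """One step of the perturbed chain with noise radius eps."""
    return phi(x) + eps * rng.uniform(-1.0, 1.0, size=np.shape(x))

x = np.zeros(10000)
for eps in (0.1, 0.01, 0.001):
    y = step(x, eps)
    # every sample lies within eps of phi(x): p_eps(x, .) -> delta_phi(x)
    assert np.max(np.abs(y - phi(x))) <= eps + 1e-12
```

As $\varepsilon \to 0$ the sampled transition measure concentrates on $\varphi(x)$, illustrating the weak$^*$ convergence stated in Remark 2.4.14.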
We conclude this section with an example.

Example 2.4.17 (The Lorenz system). We consider the famous Lorenz system from 1963 [Lor63] defined by
$$\dot{x}_1 = \sigma(x_2 - x_1), \quad \dot{x}_2 = x_1(\rho - x_3) - x_2, \quad \dot{x}_3 = x_1 x_2 - \beta x_3, \tag{2.30}$$
where we have slightly changed the parameters to $\sigma = 10$, $\rho = 28$ and $\beta = 0.4$. This system has been derived by truncating a Fourier series expansion of a convection fluid model. We first compute the unstable manifold $W^u(0)$ of the origin by the continuation method discussed in Section 2.3, whose closure is the Lorenz attractor. This is the first step of Algorithm 2.3, and we denote the box collection by $Q_{Lor}$. Note that we could also use the subdivision method discussed in Section 2.2 in order to obtain a box covering $Q_{Lor}$. Let us denote by $N \in \mathbb{N}$ the number of boxes in our final box covering. Next, we use (2.26) (see also item 2 of Remark 2.4.13) in order to approximate the transfer operator by $P_N \in \mathbb{R}^{N\times N}$ on $Q_{Lor}$. Finally, we compute the eigenvector corresponding to the eigenvalue one of $P_N$ to obtain the invariant measure $\mu$ for the Lorenz attractor. The invariant measure is shown in Figure 2.5, where the density ranges from blue (low density), over green and yellow, to red (high density).

The applicability of the subdivision and the continuation method discussed in Section 2.2 and Section 2.3, respectively, is restricted to finite dimensional dynamical systems, e.g. ordinary differential equations. In the following chapters, we extend these methods to the infinite dimensional context. More precisely, we develop a set-oriented numerical methodology which allows us to compute finite dimensional invariant sets for infinite dimensional dynamical systems.
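The examples in this and the following chapter all work with a time-$T$-map $\varphi$ of the Lorenz system (2.30). A minimal sketch of such a map is given below; the classical fourth-order Runge-Kutta scheme and the fixed step size are assumptions (the thesis does not prescribe an integrator), and $\beta$ defaults here to the standard value $8/3$ used later, whereas Example 2.4.17 uses the modified value $\beta = 0.4$.

```python
import numpy as np

# Time-T-map of the Lorenz system (2.30). RK4 with fixed step h is an
# assumption; beta = 8/3 is the standard value (Example 2.4.17 uses 0.4).
def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x1, x2, x3 = v
    return np.array([sigma * (x2 - x1),
                     x1 * (rho - x3) - x2,
                     x1 * x2 - beta * x3])

def time_T_map(v, T=0.2, h=1e-3):
    """phi(v): integrate (2.30) from v over time T with RK4 steps."""
    for _ in range(int(round(T / h))):
        k1 = lorenz(v)
        k2 = lorenz(v + 0.5 * h * k1)
        k3 = lorenz(v + 0.5 * h * k2)
        k4 = lorenz(v + h * k3)
        v = v + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return v

# the equilibrium at the origin is a fixed point of every time-T-map
assert np.allclose(time_T_map(np.zeros(3)), np.zeros(3))
```

Feeding such a map into the Monte Carlo discretization of Remark 2.4.13 produces the matrix $P_N$ on a box covering of the attractor.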
In order to analyze these systems, rather than using a straightforward approach based on an appropriate combination of Galerkin expansions and subdivision steps, we follow a completely different path and utilize infinite dimensional embedding results in our numerical treatment. In the next chapter we give a detailed overview of finite dimensional as well as infinite dimensional embedding results. In particular, the infinite dimensional embedding result by Robinson [Rob05] will allow us to compute embedded invariant sets in a finite dimensional space which we call the observation space. To this end, we construct a continuous and finite dimensional dynamical system, the so-called core dynamical system (CDS), which will be the main part of Chapter 4. Since the CDS is continuous, and therefore measurable, we will also be able to compute invariant measures on the embedded invariant sets, called embedded invariant measures.

Figure 2.5: Invariant measure for the Lorenz attractor obtained by Algorithm 2.3. The density ranges from blue (low density) over green and yellow to red (high density).

3 From finite to infinite dimensional embeddings

In this chapter we give an overview of finite dimensional as well as infinite dimensional embedding results introduced in recent years. The general idea of embeddings is to reconstruct attractors or the qualitative dynamics from (experimental) data, e.g. obtained by sensor measurements [BK86]. In particular, embedding techniques are well suited if a mathematical description of the system is not known. The aim of this chapter is to present an embedding result by Robinson [Rob05] that allows us to obtain a one-to-one image of an invariant set $A$ of an infinite dimensional dynamical system in a finite dimensional space, in what follows called the observation space. This will be achieved by using the so-called observation map.
In Chapter 4 we will use this particular map to construct a finite dimensional dynamical system that allows us to approximate finite dimensional invariant sets of infinite dimensional dynamical systems.

3.1 Takens' embedding theorem

The first result on embeddings in the dynamical systems context is Takens' embedding theorem [Tak81], based on Whitney's embedding theorem [Whi36]. Whitney's embedding theorem states that a generic map from a $d$-dimensional manifold into $(2d+1)$-dimensional Euclidean space is an embedding. This means, in particular, that no two points in the $d$-dimensional manifold map to the same point in the $(2d+1)$-dimensional space. In the context of Whitney's embedding theorem, the $2d+1$ independent measurements (observations) can be considered as a map. Takens has shown that a compact manifold of dimension $d$ of a finite dimensional dynamical system can generically be embedded using a delay-coordinate map, which consists of observations of the dynamical behavior at an appropriate number (at least $2d+1$) of consecutive snapshots in time. The main difference to Whitney's embedding theorem is that we only need time-delayed versions of one generic observation in order to embed the $d$-dimensional manifold. Takens' embedding theorem has, for example, been applied to predict chaotic time series as well as chaotic time series within neural networks [FS87, Cas89, PRK92]. For the latter, in order to model the dynamics of the system that produced the signal, the first step is to reconstruct the attractor of the system by using Takens' embedding theorem and then to train an artificial neural network to predict time series over long time periods [PRK92]. Another area of application is, for instance, the estimation of entropy for the detection of epilepsy in EEG data [KCAS05]. Before we state the main theorem of [Tak81], we need the following definitions (see e.g. [Wig03]):

Definition 3.1.1 (Residual set).
Let $X$ be a topological space, and let $U \subset X$. $U$ is called a residual set if it contains the intersection of a countable number of sets, each of which is open and dense in $X$.

Definition 3.1.2 ($C^k$-topology). Let $C^r(\mathbb{R}^d,\mathbb{R}^d)$ denote the space of $C^r$ maps of $\mathbb{R}^d$ into $\mathbb{R}^d$. Two elements $f,g \in C^r(\mathbb{R}^d,\mathbb{R}^d)$ are said to be $C^k$ $\varepsilon$-close ($k \le r$), or just $C^k$-close, if
$$\|D^i f - D^i g\| < \varepsilon \quad \text{for all } 0 \le i \le k,$$
where $\|\cdot\|$ denotes some norm on $C^r$. Assume that $M$ is a compact, boundaryless differentiable manifold of dimension $d$. Then the topology induced on $C^r(M,M)$ by this measure of distance between two elements of $C^r(M,M)$ is called the $C^k$-topology. For a more thorough discussion of the $C^k$-topology, the reader is referred to [Hir12, PDM12].

Definition 3.1.3 (Generic property). A property of a map (resp. vector field) is said to be $C^k$-generic if the set of maps (resp. vector fields) possessing that property contains a residual subset in the $C^k$-topology.

We are now in a position to state the main theorem of [Tak81]:

Theorem 3.1.4. Let $M$ be a compact manifold of dimension $d$ and $\Phi\colon M \to M$ a smooth diffeomorphism. Then for $f\colon M \to \mathbb{R}$ a smooth function (at least $C^2$) and $k = 2d+1$, it is a generic property that the observation map $D_k[f,\Phi]\colon M \to \mathbb{R}^k$, defined by
$$D_k[f,\Phi](x) = \big(f(x), f(\Phi(x)), \dots, f(\Phi^{k-1}(x))\big)^\top, \tag{3.1}$$
is an embedding.

Consequently, for a given time series, Theorem 3.1.4 guarantees that the observation map (3.1), also called the delay-coordinate map, provides a reconstruction of the hidden state space and that it is a one-to-one embedding of the system's attractor.

Remark 3.1.5. Takens' embedding theorem ensures that distinct points in the observation space correspond to distinct points in the $d$-dimensional manifold. In particular, it only preserves the attractor topology.
Recently, in [EYWR18] a stable embedding result has been presented which also preserves the attractor's geometry by ensuring that distances between points in the state space are approximately preserved.

3.2 Extension to fractal sets

In Theorem 3.1.4 the observation map (3.1) is constructed from time series of a single observed quantity from, e.g., an experiment. However, it is most unlikely that the attracting set which we want to reconstruct from time series via the observation map is a manifold with an integer dimension $d$. Ten years later, in 1991, Takens' theorem was generalized in [SYC91] as follows: first, by replacing "generic" with "probability-one" (in a prescribed sense), and second, by replacing the manifold $M$ by a possibly fractal set. In this section, we review the main results of [SYC91], which will later allow us to prove an embedding result obtained in [Rob05] for invariant sets of infinite dimensional dynamical systems.

3.2.1 Prevalence

Takens' theorem states, roughly speaking, that the set of embeddings is an open and dense set of smooth maps. The first statement means that each embedding with an arbitrarily small perturbation is still an embedding, whereas the second statement means that every smooth map, whether it is an embedding or not, is arbitrarily near an embedding [SYC91]. From an experimentalist's point of view, we would like to know whether the particular map that results from analyzing the experimental data is an embedding with probability one. The problem with such a statement is that the space of all smooth maps is infinite dimensional. The notion of probability one on infinite dimensional spaces does not have an obvious generalization from finite dimensional spaces: there is no measure on a Banach space that corresponds to Lebesgue measure on finite dimensional subspaces [HSY92]. Nonetheless, we would like to make sense of "almost every" map having some property, such as being an embedding [SYC91].
In [HSY92], the authors propose a measure-theoretic condition for a property to hold "almost everywhere" on an infinite dimensional vector space, the so-called prevalence.

Definition 3.2.1 (Prevalence). A Borel subset $S$ of a normed linear space $V$ is prevalent if there is a finite dimensional subspace $E$ of $V$ (the 'probe space') such that for each $v \in V$, $v + e$ belongs to $S$ for (Lebesgue) almost every $e \in E$.

That a set $S$ is prevalent means that if we start at any point in the space $V$ and explore along the finite dimensional space of directions specified by the probe space $E$, then almost every point encountered will lie in $S$. Following [SYC91] we note that any space containing a probe space for $S$ is itself a probe space for $S$. Hence, for $E \subset E'$, where $E'$ is any finite dimensional space, perturbations of any element of $V$ by elements of $E'$ will be in $S$ with probability one. Thus, a prevalent subset of a finite dimensional vector space is simply a set whose complement has zero measure. Moreover, the union or intersection of a finite number of prevalent sets is again prevalent. It follows from the definition that prevalence implies denseness in the $C^k$-topology (cf. Definition 3.1.2) for any $k$; more generally, prevalence implies denseness in any normed linear space.

As already mentioned, it is most unlikely that the attracting set that we want to reconstruct has an integer dimension $d$. Hence, it remains to replace the manifold $M$ by a fractal set $A$. In order to choose a sufficiently large embedding dimension, we need to know the dimension of $A$. To this end, we will use the so-called box-counting dimension, which gives one possibility to define and approximate a fractal dimension.

3.2.2 Box-counting dimension

We start with the following definition of the (lower or upper) box-counting dimension, where we divide $\mathbb{R}^n$ into $\varepsilon$-cubes, e.g.
at points whose coordinates are $\varepsilon$-multiples of integers (cf. [SYC91]):

Definition 3.2.2 (Box-counting dimension). Let $A \subset Y$, where $Y \subset \mathbb{R}^n$, be a compact set. For $\varepsilon > 0$ denote by $N_Y(A,\varepsilon)$ the minimal number of balls of radius $\varepsilon$ necessary to cover the set $A$. Then
$$d_{box}(A;Y) := \lim_{\varepsilon\to 0} \frac{\log N_Y(A,\varepsilon)}{-\log\varepsilon} \tag{3.2}$$
denotes the box-counting dimension of the set $A$. By taking the limes inferior or limes superior in (3.2), we obtain the lower and upper box-counting dimensions $d_{LB}$ and $d_B$, respectively.

Note that when the box-counting dimension exists, the approximate scaling law $N_Y(A,\varepsilon) \approx \varepsilon^{-d}$ holds, where $d = d_{box}(A;Y)$ and $\varepsilon > 0$ is sufficiently small. We give some examples of the box-counting dimension:

Example 3.2.3.
1. Consider the closed unit ball in $\mathbb{R}^n$: Let $Y = \mathbb{R}^n$ and $A = \{y \in \mathbb{R}^n \mid \|y\| \le 1\}$. Then $N_Y(A,\varepsilon) \approx \varepsilon^{-n}$ for small $\varepsilon > 0$ and therefore $d_{box}(A;Y) = n$.
2. The middle third Cantor set (cf. [Fal13]): Let $Y = [0,1]$ and $C$ the Cantor set defined by
$$C = \bigcap_{j\ge 1} C_j, \quad \text{where } C_j = \frac{C_{j-1}}{3} \cup \Big(\frac{2}{3} + \frac{C_{j-1}}{3}\Big) \text{ for } j \ge 1, \text{ and } C_0 = [0,1].$$
On the one hand, if $3^{-j} < \varepsilon \le 3^{-j+1}$, then the $2^j$ level-$j$ intervals of $C_j$ of length $3^{-j}$ provide an $\varepsilon$-cover of $C$, such that $N_Y(C,\varepsilon) \le 2^j$, where $N_Y(C,\varepsilon)$ is the least number of boxes that cover $C$. This yields
$$d_B(C;Y) \le \limsup_{j\to\infty} \frac{\log 2^j}{-\log 3^{-j+1}} = \frac{\log 2}{\log 3}.$$
On the other hand, any interval of length $\varepsilon$ with $3^{-j-1} \le \varepsilon < 3^{-j}$ intersects at most one of the level-$j$ intervals of length $3^{-j}$ used in the construction of $C$. There are $2^j$ such intervals, all containing points of $C$, such that at least $2^j$ intervals of length $\varepsilon$ are required to cover the set $C$. Hence, $N_Y(C,\varepsilon) \ge 2^j$ and
$$d_{LB}(C;Y) \ge \liminf_{j\to\infty} \frac{\log 2^j}{-\log 3^{-j-1}} = \frac{\log 2}{\log 3}.$$
Thus, $d_{LB}(C;Y) = d_B(C;Y) = d_{box}(C;Y) = \frac{\log 2}{\log 3}$.

Observe that by Definition 3.2.2 we have to cover the compact set $A$ with a finite number of boxes of radius $\varepsilon$. Since our set-oriented algorithms compute box coverings of the invariant sets, they also allow us to compute approximations of the box-counting dimension. To this end, we consider the following example.

Example 3.2.4 (The Lorenz system). Again, let us consider the Lorenz system from 1963 [Lor63] defined by (2.30), where $\sigma = 10$, $\rho = 28$ and $\beta = 8/3$. For this parameter regime the system possesses a chaotic attractor [Tuc02], which we denote by $A_{Lor}$. Its correlation dimension (cf. [The90]) is estimated to be $\approx 2.05$ [GP83], and its fractal dimension is estimated to be $\approx 2.06$ [Lor84]. We consider the discrete dynamical system $x_{j+1} = \varphi(x_j)$, where $\varphi\colon Y \to Y$, $Y \subset \mathbb{R}^3$, denotes the time-$T$-map of (2.30) with $T = 0.2$. In order to approximate the box-counting dimension of the Lorenz attractor, we first compute a covering $Q_\ell$ of the attracting set $A_{Lor}$ via the subdivision algorithm (cf. Section 2.2) and then use Definition 3.2.2 for the approximation of the box-counting dimension.

Figure 3.1: Box-counting dimension of the box covering $Q_\ell$ of the Lorenz attractor $A_{Lor}$ obtained by the subdivision algorithm (cf. Section 2.2) for $Q = [-30,30] \times [-30,30] \times [-13,67]$ and $\ell = 1,\dots,24$.

Given the box covering of the Lorenz attractor (cf. Figure 3.1), we can compute the box-counting dimension via
$$d_{box}(Q_\ell;Y) = \frac{\log N_Y(Q_\ell, r_\ell)}{-\log r_\ell},$$
where $r_\ell$ is the box radius after $\ell$ subdivision steps, i.e. $r_\ell = 2^{-\ell/d}$ (since each box is subdivided via bisection with respect to each coordinate) and $d = 3$ denotes the dimension of the Lorenz system. For $\ell = 24$, we obtain
$$d_{box}(Q_{24};Y) = \frac{\log N_Y(Q_{24}, r_{24})}{-\log r_{24}} = \frac{\log 97256}{8 \cdot \log 2} = 2.0712.$$
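The box-counting computation of Example 3.2.3 can be reproduced numerically. The following sketch represents the Cantor set by the left endpoints of its level-$j$ construction intervals and counts occupied $\varepsilon$-boxes on a grid, as in Definition 3.2.2; the finite construction level is an assumption of the illustration.

```python
import numpy as np

# Numerical box counting for the middle third Cantor set (Example 3.2.3).
# We approximate C by the left endpoints of the level-12 construction
# intervals and count occupied eps-boxes on a uniform grid.
def cantor_points(j):
    """Left endpoints of the 2^j level-j intervals of the Cantor set."""
    pts = np.array([0.0])
    for _ in range(j):
        pts = np.concatenate((pts / 3.0, 2.0 / 3.0 + pts / 3.0))
    return pts

level = 12
pts = cantor_points(level)
target = np.log(2) / np.log(3)        # d_box(C) = log 2 / log 3 = 0.6309...
for j in (4, 6, 8):
    eps = 3.0 ** (-j)
    # box index per point; the tiny offset guards against floating-point
    # round-off at box boundaries
    n_boxes = len(np.unique(np.floor(pts / eps + 1e-9).astype(np.int64)))
    est = np.log(n_boxes) / (-np.log(eps))
    assert abs(est - target) < 0.05   # estimate close to log 2 / log 3
```

At scale $\varepsilon = 3^{-j}$ exactly $2^j$ boxes are occupied, so the quotient $\log N / (-\log \varepsilon)$ reproduces $\log 2/\log 3$, mirroring the two-sided estimate in the example.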
Observe that we only have a covering of the Lorenz attractor, i.e. $A_{Lor} \subset Q_\ell$ for all $\ell = 1,\dots,24$. Thus, we obtain a slightly higher box-counting dimension than $\approx 2.06$.

Given the definitions of prevalence and the box-counting dimension, the extension of Takens' Theorem 3.1.4 to fractal sets is as follows:

Theorem 3.2.5 (Fractal delay embedding prevalence theorem). Let $\Phi$ be a diffeomorphism on an open subset $Y \subset \mathbb{R}^n$, and let $A \subset Y$ be compact with box-counting dimension $d_{box}(A;Y) = d$. Let $k > 2d$ be an integer and assume that for every positive integer $p \le k$, the set $A_p$ of $p$-periodic points satisfies $d_{box}(A_p;Y) < p/2$, and that the linearization $D\Phi^p$ for each of these orbits has distinct eigenvalues. Then for almost every smooth function $f\colon Y \to \mathbb{R}$, the observation map $D_k[f,\Phi]\colon Y \to \mathbb{R}^k$ defined by
$$D_k[f,\Phi](x) = \big(f(x), f(\Phi(x)), \dots, f(\Phi^{k-1}(x))\big)^\top$$
is:
1. One-to-one on $A$.
2. An immersion on each compact subset $C$ of a smooth manifold contained in $A$.

Remark 3.2.6. Note that Theorem 3.2.5 needs extra assumptions on the dimension of the set of $p$-periodic points. To motivate this, we consider the case where $A$ is a periodic orbit of a continuous dynamical system with period equal to the sampling time $T$ of the time-$T$-map $\Phi$. Topologically, $A$ is a circle, and in this case $D_k[f,\Phi]$ would map $A$, for any observation function $f$, onto a diagonal line. This can be prevented by choosing a sufficiently small sampling time $T$. Furthermore, if we assume that the vector field on $A$ satisfies a Lipschitz condition with Lipschitz constant $L$, then for $T < \pi/L$ there will be no periodic orbits of period $T$ or $2T$. For more details we refer the reader to [SYC91].

The next theorem gives a version of Theorem 3.2.5 for maps $g$ that are only Hölder continuous [Rob05].

Theorem 3.2.7 (Finite dimensional delay embedding theorem for Hölder maps).
Let $A \subset Y \subset \mathbb{R}^N$ be a compact subset of $Y$ with upper box-counting dimension $d_B(A;Y) = d$, and let $g\colon Y \to Y$ be a map such that $g^r$ is an $\alpha$-Hölder function for every $r \in \mathbb{N}$. Let $k > 2d/\alpha$ and assume that the set $A_p$ of $p$-periodic points of $g$ satisfies $d_B(A_p;Y) < \alpha p/2$ for all $p = 1,\dots,k$. Moreover, let $h_1,\dots,h_m$ be a basis for the polynomials in $N$ variables of degree at most $2k$. Given any $\alpha$-Hölder function $h_0\colon \mathbb{R}^N \to \mathbb{R}$, define
$$h_\theta = h_0 + \sum_{j=1}^{m} \theta_j h_j.$$
Then the observation map $F_k\colon Y \to \mathbb{R}^k$ defined by
$$F_k[h_\theta,g](x) = \big(h_\theta(x), h_\theta(g(x)), \dots, h_\theta(g^{k-1}(x))\big)^\top \tag{3.3}$$
is one-to-one on $A$ for almost every $\theta \in \mathbb{R}^m$.

Observe that in Theorem 3.2.7 not only $g$ but all its iterates have the same Hölder exponent. Although this is the case for any Lipschitz map $g$ (with $\alpha = 1$, in which case the condition on $k$ in the theorem reduces to $k > 2d$), it is only true for a subset of the $\alpha$-Hölder functions $g$ (cf. [Rob05]).

We conclude this section with an example, where we reconstruct the Lorenz attractor by the observation map defined in Theorem 3.2.5.

Example 3.2.8. We use the Lorenz system defined by (2.30), where analogously to Example 3.2.4 we choose $\sigma = 10$, $\rho = 28$ and $\beta = 8/3$. In Figure 3.2(a) we see the Lorenz attractor obtained via one long-time simulation of (2.30). The corresponding trajectories are shown in Figure 3.2(b).

Figure 3.2: (a) Lorenz attractor obtained via one long-time simulation of (2.30). (b) Corresponding trajectory of the long-time simulation.

We will use Theorem 3.2.5 to compute a one-to-one image of the Lorenz attractor in an appropriately high-dimensional space. To this end, we denote by $\Phi$ the time-$T$-map of the Lorenz system, where we set $T = 0.04$. Since $d = d_{box}(A_{Lor};Y) \approx 2.06$ (cf. Example 3.2.4), we choose $k = 5 > 2d$. Last, we choose our observable $f\colon Y \to \mathbb{R}$ to be $f(x) = x_1$.
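With these choices (time-$T$-map $\Phi$, $T = 0.04$, observable $f(x) = x_1$, embedding dimension $k = 5$) the delay reconstruction can be sketched as follows; the RK4 integrator, its step size, and the trajectory length are assumptions of the illustration.

```python
import numpy as np

# Delay reconstruction of Example 3.2.8: sample x1 along a Lorenz
# trajectory every T = 0.04 and stack k = 5 consecutive samples into
# delay vectors. RK4 with fixed step h is an assumed integrator.
def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x1, x2, x3 = v
    return np.array([sigma * (x2 - x1), x1 * (rho - x3) - x2,
                     x1 * x2 - beta * x3])

def rk4_step(v, h):
    k1 = lorenz(v)
    k2 = lorenz(v + 0.5 * h * k1)
    k3 = lorenz(v + 0.5 * h * k2)
    k4 = lorenz(v + h * k3)
    return v + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

T, h, k = 0.04, 0.004, 5
v = np.array([1.0, 1.0, 1.0])
x1_series = []
for _ in range(2000):                       # x1 sampled every T
    for _ in range(int(round(T / h))):
        v = rk4_step(v, h)
    x1_series.append(v[0])

# rows are the delay vectors (x1(t), x1(t+T), ..., x1(t+4T))
delays = np.array([x1_series[i:i + k] for i in range(len(x1_series) - k + 1)])
assert delays.shape == (2000 - k + 1, k)
```

Each row of `delays` is one point of the embedded attractor in $\mathbb{R}^5$; plotting any three of its coordinates gives a projection of the kind shown in Figure 3.3.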
With these choices we obtain the observation map
$$D_5[f,\Phi](x) = \big(f(x), f(\Phi(x)), f(\Phi^2(x)), f(\Phi^3(x)), f(\Phi^4(x))\big)^\top, \tag{3.4}$$
which takes consecutive time snapshots of the $x_1(t)$-trajectory of (2.30), say at times $t, t+T, \dots, t+4T$. In Figure 3.3 we show a three-dimensional projection of the embedded attractor corresponding to the long-time simulation shown in Figure 3.2(b). Although we only observe the $x_1(t)$-trajectory of the Lorenz system, the embedding yields a reconstruction of the attractor which is topologically equivalent to Figure 3.2(a).

Figure 3.3: Three-dimensional projection of the embedded Lorenz attractor.

3.3 Infinite dimensional embedding theory

Note that in the previous sections the manifold $M$ as well as the invariant set $A$ were defined only for finite dimensional dynamical systems. In this section we review the main results of [HK99, Rob05] which extend Theorem 3.2.5 to the infinite dimensional context. To this end, let us denote by $A$ the invariant set of an infinite dimensional dynamical system defined on a Banach space $Y$. One natural question that arises is whether there is still a possibility to obtain a one-to-one image of $A$ in an appropriate finite dimensional space. Based on the work by Hunt & Kaloshin [HK99], where an infinite dimensional embedding result for linear maps has been proven, Robinson extended the results obtained in [SYC91] to dynamical systems on infinite dimensional Banach spaces [Rob05]. It turns out that the same observation map can be used to reconstruct invariant sets of finite upper box-counting dimension $d$. However, in addition to the dimension of the set, another quantity comes into play, namely the thickness exponent $\sigma$.

3.3.1 The thickness exponent

In order to formulate the main results of [HK99, Rob05] we first need the definition of the so-called thickness exponent.

Definition 3.3.1 (Thickness exponent).
Let $Y$ be a Banach space, and let $X \subset Y$ be compact. For $\varepsilon > 0$, denote by $d_Y(X,\varepsilon)$ the minimal dimension of all finite dimensional subspaces $V \subset Y$ such that every point of $X$ lies within distance $\varepsilon$ of $V$; if no such $V$ exists, then $d_Y(X,\varepsilon) = \infty$. Then
$$\sigma(X,Y) := \limsup_{\varepsilon\to 0} \frac{\log d_Y(X,\varepsilon)}{-\log\varepsilon}$$
is called the thickness exponent of $X$ in $Y$.

Roughly speaking, the thickness exponent $\sigma(X,Y)$ captures how well $X$ can be approximated from within finite dimensional subspaces of $Y$. Denoting the minimum distance between $X$ and any $k$-dimensional linear subspace of $Y$ by $\varepsilon_Y(X,k)$, it was proven in [KR04] that
$$\sigma(X,Y) = \limsup_{k\to\infty} \frac{\log k}{-\log \varepsilon_Y(X,k)},$$
i.e. approximately $\varepsilon_Y(X,k) \approx k^{-1/\sigma(X,Y)}$. Moreover, it was observed in [HK99] that the thickness exponent is always bounded from above by the upper box-counting dimension. Before we show this result, we first have to extend the definition of the upper box-counting dimension to the infinite dimensional context.

Definition 3.3.2 (Upper box-counting dimension). Let $Y$ be a Banach space, and let $X \subset Y$ be compact. For $\varepsilon > 0$, denote by $N_Y(X,\varepsilon)$ the minimal number of balls of radius $\varepsilon$ (in the norm of $Y$) necessary to cover the set $X$. Then
$$d_B(X;Y) := \limsup_{\varepsilon\to 0} \frac{\log N_Y(X,\varepsilon)}{-\log\varepsilon}$$
denotes the upper box-counting dimension of $X$.

Lemma 3.3.3. Let $X \subset Y$ be a compact set with upper box-counting dimension $d_B(X;Y)$. Then
$$\sigma(X,Y) \le d_B(X;Y).$$

Proof. For $\varepsilon > 0$, we cover $X$ with $N_Y(X,\varepsilon)$ balls of radius $\varepsilon$. Then every point of $X$ lies within $\varepsilon$ of the linear subspace $V$ that is spanned by the centers of these balls. Since the dimension of $V$ is not greater than $N_Y(X,\varepsilon)$, this implies that $d_Y(X,\varepsilon) \le N_Y(X,\varepsilon)$, and the lemma follows directly.

Ott, Hunt & Kaloshin suspected that many of the attractors arising in dynamical systems defined by the evolution equations of mathematical physics have thickness exponent zero [OHK06].
In addition, Friz & Robinson have shown that in some sense the thickness exponent is inversely proportional to smoothness [FR99]. Their result does not rely on the dynamics associated with the set $X$ or the form of the underlying equations, but only makes assumptions on the smoothness of functions on $X$ [Rob05].

Lemma 3.3.4 ([Rob05]). Let $\Omega \subset \mathbb{R}^k$ be bounded. Suppose that $X$ is a subset of $L^2(\Omega)$ that is uniformly bounded in $H^s(\Omega)$. Then
$$\sigma(X, L^2(\Omega)) \le \frac{k}{s}.$$

In particular, if $X$ consists of 'smooth functions', i.e. is uniformly bounded in $H^s(\Omega)$ for every $s \in \mathbb{N}$, then $\sigma(X, L^2(\Omega)) = 0$. Consequently, if an attractor is bounded in $H^s(\Omega)$ for every $s$, then its thickness exponent is zero.

Example 3.3.5. Let us consider the simplified two-dimensional Navier-Stokes equations
$$\frac{\partial u}{\partial t} - \Delta u + (u \cdot \nabla) u + \nabla p = f(x), \qquad \nabla \cdot u = 0,$$
where $u(x, t)$ denotes the two-component velocity, $p(x, t)$ the scalar pressure and $f(x)$ represents a body forcing that maintains the motion. Lemma 3.3.4 can be translated into an assumption on the smoothness of the forcing term $f$. More precisely, the attractor of the two-dimensional Navier-Stokes equations has zero thickness exponent if $f \in C^\infty(\Omega)$ [Rob10].

3.3.2 An infinite dimensional embedding result for linear maps

We now have everything necessary to formulate the main result of [HK99].

Theorem 3.3.6 ([HK99]). Let $Y$ be a Banach space and $A \subset Y$ compact, with upper box-counting dimension $d_B(A; Y) = d$ and thickness exponent $\sigma(A, Y) = \sigma$. Let $N > 2d$ be an integer, and let $\alpha \in \mathbb{R}$ with
$$0 < \alpha < \frac{N - 2d}{N(1 + \sigma)}. \qquad (3.5)$$
Then for a prevalent set of bounded linear maps $L \colon Y \to \mathbb{R}^N$ there exists a $C > 0$ such that
$$C \, \|L(x - y)\|^\alpha \ge \|x - y\| \quad \text{for all } x, y \in A.$$

Note that, since $L$ is bounded and linear, this means that $L$ is Lipschitz continuous and one-to-one on $A$. Moreover, the next lemma states that the image of the set $A$ under a Lipschitz continuous map, i.e. $L(A)$, has dimension no more than that of the original set $A$.

Lemma 3.3.7 ([Rob10]). Let $X, Y$ be Banach spaces and $A \subset X$. Furthermore, let $L \colon X \to Y$ be a Lipschitz continuous map. Then
$$d_B(L(A); Y) \le d_B(A; X).$$

Proof. Given $d > d_B(A; X)$, choose $\varepsilon_0$ sufficiently small such that $N_X(A, \varepsilon) \le \varepsilon^{-d}$ for all $0 < \varepsilon < \varepsilon_0$. Cover $A$ with no more than $\varepsilon^{-d}$ closed balls of radius $\varepsilon$. Since $L$ is Lipschitz continuous with Lipschitz constant $C > 0$, the image of this cover under $L$ provides a covering of $L(A)$ by sets (not necessarily closed) of diameter no larger than $2C\varepsilon$. These sets are certainly contained in closed balls of radius $4C\varepsilon$. Therefore
$$N_Y(L(A), 4C\varepsilon) \le \varepsilon^{-d},$$
and by choosing $\delta = 4C\varepsilon$ we get the equivalent formulation
$$N_Y(L(A), \delta) \le \left(\frac{\delta}{4C}\right)^{-d} = (4C)^d \, \delta^{-d}.$$
Hence, for $d \to d_B(A; X)$ we get $d_B(L(A); Y) \le d_B(A; X)$.

3.3.3 An infinite dimensional delay embedding result

By combining Theorem 3.2.7 and Theorem 3.3.6, Robinson was able to prove the fundamental infinite dimensional embedding result that allows one to map invariant sets of infinite dimensional dynamical systems into a finite dimensional space (of sufficiently large dimension $k$) via a time-delay observation map. We present the result found in [Rob05]:

Theorem 3.3.8 ([Rob05]). Let $A$ be a compact subset of a Banach space $Y$ and suppose that the upper box-counting dimension of $A$ is $d_B(A; Y) = d$, and that $A$ has the thickness exponent $\sigma(A, Y) = \sigma$. Choose an integer $k > 2(1 + \sigma)d$ and suppose further that $A$ is an invariant set for a Lipschitz map $\Phi \colon Y \to Y$, such that the set $A_p$ of $p$-periodic points of $\Phi$ satisfies $d_B(A_p; Y) < p/(2 + 2\sigma)$ for $p = 1, \ldots, k$. Then for a prevalent set of Lipschitz maps $f \colon Y \to \mathbb{R}$ the observation map $D_k[f, \Phi] \colon Y \to \mathbb{R}^k$ defined by
$$D_k[f, \Phi](u) = \big(f(u), f(\Phi(u)), \ldots, f(\Phi^{k-1}(u))\big)^\top \qquad (3.6)$$
is one-to-one on the invariant set $A$.

Proof.
Given $k > 2(1 + \sigma)d$, choose $N$ sufficiently large such that
$$k > \frac{N(2 + 2\sigma)d}{N - 2d}.$$
Moreover, choose $\alpha < \frac{N - 2d}{N(1 + \sigma)}$ (cf. (3.5)) such that $k > 2d/\alpha$. By Theorem 3.3.6 there exists a bounded linear function $L \colon Y \to \mathbb{R}^N$ that is one-to-one on $A$ and which satisfies
$$C \, \|L(x - y)\|^\alpha \ge \|x - y\| \quad \text{for all } x, y \in A.$$
Consider the set $X = LA \subset \mathbb{R}^N$ and define the induced mapping $g \colon X \to X$ by $g(\xi) = L\Phi(L^{-1}\xi)$. Since $A$ is an invariant set for $\Phi$, the set $X$ is also an invariant set for $g$, that is,
$$g(X) = L\Phi(L^{-1}X) = L\Phi(L^{-1}LA) = L\Phi(A) = LA = X.$$
Moreover, $g^n(\xi) = L\Phi^n(L^{-1}\xi)$ and thus
$$\|g^n(\xi) - g^n(\eta)\| = \|L\Phi^n(L^{-1}\xi) - L\Phi^n(L^{-1}\eta)\| \le \|L\| \, \|\Phi^n(L^{-1}\xi) - \Phi^n(L^{-1}\eta)\| \le L_\Phi^n \, \|L\| \, \|L^{-1}\xi - L^{-1}\eta\| \le C L_\Phi^n \, \|L\| \, \|\xi - \eta\|^\alpha,$$
where $\|L\|$ is the operator norm of $L \colon Y \to \mathbb{R}^N$ and $L_\Phi$ is the Lipschitz constant of $\Phi$.

Observe that if $x$ is a fixed point of $\Phi^j$ then $\xi = Lx$ is a fixed point of $g^j$, and vice versa. Thus it follows that $X_p$, the set of all points of $X$ that are $p$-periodic for $g$, is simply given by $X_p = LA_p$. Since $L$ is Lipschitz, Lemma 3.3.7 yields that the box-counting dimension does not increase for $X_p$, i.e. $d_B(X_p; \mathbb{R}^N) \le d_B(A_p; Y)$.

Given a Lipschitz map $f_0 \colon Y \to \mathbb{R}$, we define the $\alpha$-Hölder map $h_0 \colon X \to \mathbb{R}$ by $h_0(\xi) = f_0(L^{-1}\xi)$ for all $\xi \in X$. With $\{h_j\}_{j=1}^m$ a basis for the polynomials in $N$ variables of degree at most $2k$, all the conditions of Theorem 3.2.7 are satisfied, and thus for almost every $\theta \in \mathbb{R}^m$ the observation map on $\mathbb{R}^N$ given by
$$F_k[h_\theta, g](\xi) = \big(h_\theta(\xi), h_\theta(g(\xi)), \ldots, h_\theta(g^{k-1}(\xi))\big)^\top, \quad \text{where } h_\theta = h_0 + \sum_{j=1}^m \theta_j h_j,$$
is one-to-one on $X$. Now consider the observation map on $A$ given by
$$F_k[h_\theta, g](Lx) = \big(h_\theta(Lx), h_\theta(L\Phi(x)), \ldots, h_\theta(L\Phi^{k-1}(x))\big)^\top.$$
Since $L$ is one-to-one between $A$ and $X$, and $F_k[h_\theta, g]$ is one-to-one between $X$ and its image, it follows that $F_k \circ L$ is one-to-one between $A$ and its image. If we define $f_j(x) = h_j(Lx)$, then each $f_j$ is a Lipschitz map from $A$ into $\mathbb{R}$, and we can write
$$F_k[h_\theta, g](Lx) = D_k[f_\theta, \Phi](x) = \big(f_\theta(x), f_\theta(\Phi(x)), \ldots, f_\theta(\Phi^{k-1}(x))\big)^\top,$$
where
$$f_\theta = f_0 + \sum_{j=1}^m \theta_j f_j.$$
Here, $\{f_j\}_{j=1}^m$ forms a basis for the linear space of polynomials on $LY$ of degree at most $2k$. It follows that a prevalent set of Lipschitz maps $f$ makes the observation map $D_k[f_\theta, \Phi]$ one-to-one on $A$.

We note that the number of time-delay coordinates required increases with the thickness of the set $A$, i.e. $k > 2(1 + \sigma)d$. In the case when $A$ has zero thickness, e.g. when Lemma 3.3.4 applies, this reduces to the condition $k > 2d$ familiar from Theorem 3.2.5.

Remark 3.3.9. Following an observation already made in [SYC91], we note that this result may be generalized to the case where several distinct observables are evaluated. More precisely, for a prevalent set of Lipschitz maps $f_i \colon Y \to \mathbb{R}$, $i = 1, \ldots, q$, the observation map $D_k[f_1, \ldots, f_q, \Phi] \colon Y \to \mathbb{R}^k$,
$$u \mapsto \big(f_1(u), \ldots, f_1(\Phi^{k_1 - 1}(u)), \ldots, f_q(u), \ldots, f_q(\Phi^{k_q - 1}(u))\big)^\top, \qquad (3.7)$$
is also one-to-one on $A$, provided that
$$k = \sum_{i=1}^q k_i > 2(1 + \sigma)\, d \quad \text{and} \quad d_B(A_p) < p/(2 + 2\sigma) \quad \forall\, p \le \max(k_1, \ldots, k_q).$$

4 The core dynamical system

In this chapter we employ Theorem 3.3.8 in order to construct a finite dimensional dynamical system, the so-called core dynamical system (CDS), that will allow us to approximate finite dimensional attractors of infinite dimensional dynamical systems via set-oriented techniques (cf. Chapter 2). However, note that we can also use the main result from [HK99], i.e. Theorem 3.3.6, or Remark 3.3.9, respectively.
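The delay observation map (3.6) that underlies this construction is easy to state concretely in a finite dimensional toy setting. The following sketch evaluates $D_k[f, \Phi](u)$ with the classical Hénon map standing in for the (in general infinite dimensional) time-$T$-map $\Phi$ and the scalar observable $f(u) = u_1$; the function names and parameters are our own illustrative choices, not part of the thesis:

```python
import numpy as np

def henon(u, a=1.4, b=0.3):
    # Classical Henon map, used here only as a stand-in for the map Phi.
    x, y = u
    return np.array([1.0 - a * x**2 + y, b * x])

def observation_map(f, Phi, u, k):
    """D_k[f, Phi](u) = (f(u), f(Phi(u)), ..., f(Phi^{k-1}(u)))^T, cf. (3.6)."""
    out = np.empty(k)
    for i in range(k):
        out[i] = f(u)   # observe the current state ...
        u = Phi(u)      # ... then advance it by one step of Phi
    return out

f = lambda u: u[0]              # observe only the first coordinate
u0 = np.array([0.1, 0.1])
x = observation_map(f, henon, u0, k=5)
print(x.shape)  # (5,)
```

For an infinite dimensional $\Phi$ only the evaluation of $f$ and of the time stepper changes; the delay-coordinate structure of the map is identical.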
Large parts of this chapter are also contained in [DHZ16], to which the author has made substantial contributions.

Throughout the remainder of this thesis we are interested in dynamical systems of the form
$$u_{j+1} = \Phi(u_j), \quad j = 0, 1, \ldots, \qquad (4.1)$$
where $\Phi \colon Y \to Y$ is a Lipschitz continuous operator on a Banach space $Y$. Unless stated otherwise, $\Phi$ will be a time-$T$-map of an infinite dimensional dynamical system (e.g. of a delay or a partial differential equation). In addition, we assume that $\Phi$ has a compact invariant set $A$, i.e. $\Phi(A) = A$, of (finite) upper box-counting dimension $d_B(A; Y) = d$ and thickness exponent $\sigma(A, Y) = \sigma$. This assumption is justified by several classical results, where it has been shown that (non-trivial) global compact attractors for many dissipative systems in Banach spaces have finite capacity or Hausdorff dimension (see [MP76, Mn81, CFT85, CL88, Hal10]). In Section 5.2 we will additionally assume that $A$ is a global attractor in the sense that it attracts all bounded sets within $Y$ as $t \to \infty$.

The main goal of this thesis is to approximate the invariant set $A$ via set-oriented numerical techniques. To this end, we will make use of the main theorems of Sections 3.3.2 and 3.3.3 in order to embed $A$ in a finite dimensional space. Therefore, let us denote by $A_k$ the image of $A \subset Y$ under the observation map $D_k[f, \Phi]$, that is,
$$A_k = D_k[f, \Phi](A), \qquad (4.2)$$
where $D_k[f, \Phi]$ is the map defined in Theorem 3.3.8 and $f$ is chosen such that $D_k[f, \Phi]$ is one-to-one on $A$. Note that $A_k \subset \mathbb{R}^k$, where $k > 2(1 + \sigma)d$. We will call this particular finite dimensional space $\mathbb{R}^k$ the observation space. We will now construct the CDS
$$x_{j+1} = \varphi(x_j), \quad j = 0, 1, 2, \ldots,$$
with $\varphi \colon \mathbb{R}^k \to \mathbb{R}^k$ as follows: we set
$$\varphi = R \circ \Phi \circ E, \qquad (4.3)$$
where $E \colon \mathbb{R}^k \to Y$ and $R \colon Y \to \mathbb{R}^k$ are continuous maps satisfying
$$(E \circ R)(u) = u \quad \forall\, u \in A \quad \text{and} \quad (R \circ E)(x) = x \quad \forall\, x \in A_k, \qquad (4.4)$$
and $\Phi \colon Y \to Y$ is the right hand side of (4.1). Let us be more precise.
The CDS $\varphi$ is realized as follows: for the map $R$ we set $R = D_k[f, \Phi]$, where $D_k[f, \Phi]$ is defined according to Theorem 3.3.8. Next we observe that the requirement
$$(R \circ \tilde{E})(x) = x \quad \text{for all } x \in A_k \qquad (4.5)$$
uniquely defines a map $\tilde{E} \colon A_k \to A$, since $R$ is one-to-one on $A$. Thus, it remains to extend this map $\tilde{E}$ to a continuous map $E \colon \mathbb{R}^k \to Y$ (see Figure 4.1).

Figure 4.1: Definition of the map $\varphi$.

For this we employ a generalization of Tietze's extension theorem [DS88, I.5.3] found by Dugundji [Dug51, Theorem 4.1]:

Theorem 4.0.1. Let $X$ be a metric space and $A \subset X$ closed. In addition, let $V$ be a locally convex linear space and $p \colon A \to V$ continuous. Then there is a continuous map $P \colon X \to V$ with $P|_A = p$ such that $P(X)$ is contained in the convex hull of $p(A)$.

We also use a result from elementary topology [Wil04] that guarantees the existence of the inverse map of $R$ on $A_k$.

Proposition 4.0.2 ([Wil04]). Let $X$ be a compact space and $Y$ Hausdorff. Then every injective continuous map $f \colon X \to Y$ is a homeomorphism onto its image.

Proof. Let $E \subset X$ be a closed set. Since $X$ is compact, $E$ is compact as well. As $f$ is continuous, $f(E)$ is compact. Since $Y$ is Hausdorff, $f(E)$ is also closed in $Y$. Thus $f$ is a closed map, and hence a homeomorphism onto its image.

Using Theorem 4.0.1 and Proposition 4.0.2, we obtain the following result:

Proposition 4.0.3. There is a continuous map $\varphi \colon \mathbb{R}^k \to \mathbb{R}^k$ satisfying
$$\varphi(R(u)) = R(\Phi(u)) \quad \text{for all } u \in A. \qquad (4.6)$$

Proof. By construction, the map $R = D_k[f, \Phi] \colon Y \to \mathbb{R}^k$ given by (3.6) is continuous (even Lipschitz) and one-to-one on $A$. Thus, restricting $R$ to $A$, we obtain a bijective map $\tilde{R} \colon A \to A_k$. As $A$ is assumed to be compact and $A_k \subset \mathbb{R}^k$ is Hausdorff, $\tilde{R}$ is a homeomorphism by Proposition 4.0.2. Thus we obtain a continuous map $\tilde{E} \colon A_k \to A$ as $\tilde{E} = \tilde{R}^{-1}$. As $Y$ is a normed linear space, it is locally convex.
Thus we can apply Theorem 4.0.1 with $X = \mathbb{R}^k$, $A = A_k$, $p = \tilde{E}$ and $V = Y$ to see that there is a continuous map $E \colon \mathbb{R}^k \to Y$ with $E|_{A_k} = \tilde{E}$. Finally, by (4.3), $\varphi$ is continuous as a composition of continuous maps.

Note that by Proposition 4.0.3 the CDS is well defined on $\mathbb{R}^k$. Therefore, we will be able to use the set-oriented techniques introduced in Chapter 2 for the CDS. Furthermore, observe that due to (4.4), $A_k$ is an invariant set for $\varphi$, i.e.
$$\varphi(A_k) = (R \circ \Phi \circ E)(A_k) = (R \circ \Phi)(A) = R(A) = A_k,$$
and that the dynamics of $\varphi$ on $A_k$ is topologically conjugate to that of $\Phi$ on $A$. Here we have used the fact that the map $E$ is the inverse of the map $R$ on $A_k$ and that $A$ is an invariant set of $\Phi$.

Remark 4.0.4.
1. Note that the arguments used in the proof of Proposition 4.0.3 only guarantee the existence of the continuous map $E$ and provide no guideline on how to design or approximate it. In fact, the particular realization of the map $E$ will depend on the actual application. In Chapter 6 we will show possible realizations for delay differential equations with state dependent time delay as well as for partial differential equations.
2. As already mentioned, we can also use Theorem 3.3.6 or Remark 3.3.9 in order to define the observation map $R$, that is, we are not restricted to delay-coordinates of one observable $f$. In fact, in Section 6.2.3 we will use a linear map as our observation map, where we choose $k$ different observables.

In this chapter we have constructed a finite dimensional dynamical system whose dynamics on the embedded invariant set $A_k$ is topologically conjugate to that of $\Phi$ on $A$, i.e. to the dynamics of our infinite dimensional dynamical system (4.1). In the next step, we want to use the CDS in order to approximate the embedded invariant set $A_k$ by the subdivision method introduced in Section 2.2. Moreover, we also want to approximate unstable manifolds $W^u_\Phi(u^*) \subset A$, where $u^* \in A$ denotes an unstable steady state of $\Phi$.
In particular, we focus on unstable manifolds for semiflows on Banach spaces (cf. [Hal71, Hen06, Car12, CH12]). To this end, we denote by
$$W^u(p) = R(W^u_\Phi(u^*)) \subset A_k \qquad (4.7)$$
the image of the unstable manifold $W^u_\Phi(u^*) \subset A$ under the observation map, the so-called embedded unstable manifold, where $p = R(u^*)$. Note that, by construction of the CDS, $W^u(p)$ is also an invariant set of $\varphi$. Analogously to finite dimensional dynamical systems, we will use the continuation method introduced in Section 2.3 to approximate the embedded unstable manifold $W^u(p)$. Before we come to this, we first need an extension of the set-oriented techniques discussed in Chapter 2 to the case where the underlying finite dimensional dynamical system is just continuous (and not a homeomorphism).

5 Set-oriented techniques for embedded invariant sets

The main goal of this thesis is to approximate invariant sets $A \subset Y$ of the infinite dimensional dynamical system
$$u_{j+1} = \Phi(u_j), \quad j = 0, 1, \ldots, \qquad (4.1)$$
where $\Phi \colon Y \to Y$ is a Lipschitz continuous operator on a Banach space $Y$. Through the construction of the core dynamical system (CDS)
$$x_{j+1} = \varphi(x_j), \quad j = 0, 1, \ldots, \qquad (5.1)$$
we are now in the position to approximate the embedded attractor $A_k$ (cf. (4.2)) or the embedded unstable manifold $W^u(p)$ (cf. (4.7)). In Section 5.1 we start by extending the classical subdivision method to continuous discrete dynamical systems, where we also want to approximate the global attractor relative to a compact set $Q$. Then we will assume in Section 5.2 that the invariant set $A$ is attracting and show how to approximate it by the subdivision method. Finally, in Section 5.3 we will show how to use the continuation method introduced in Section 2.3 to approximate unstable manifolds of the underlying infinite dimensional dynamical system. The results presented in this chapter are also contained in [DHZ16, ZDG18], to which the author has made substantial contributions.
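Before extending the subdivision method, it may help to see the composition $\varphi = R \circ \Phi \circ E$ of (4.3) in a toy setting where $Y$ is replaced by a high- but finite dimensional space, $R$ by a bounded linear observation map (cf. Remark 4.0.4) and $E$ by its least-squares pseudoinverse. All names and parameter choices below are illustrative assumptions, not the realization used later in Chapter 6:

```python
import numpy as np

n, k = 200, 3                      # "high"-dimensional state, low-dim observation
rng = np.random.default_rng(0)

def Phi(u):
    # Toy stand-in for the time-T-map: a contraction towards a fixed profile.
    target = np.sin(np.linspace(0, np.pi, n))
    return target + 0.5 * (u - target)

L = rng.standard_normal((k, n)) / np.sqrt(n)   # bounded linear observation map R
E = np.linalg.pinv(L)                          # one (least-squares) choice of E

def cds(x):
    # phi = R o Phi o E, cf. (4.3); note L @ E = I_k, so (R o E) = id on R^k here
    return L @ Phi(E @ x)

x = L @ rng.standard_normal(n)
for _ in range(50):
    x = cds(x)
# The observations settle on the image of the fixed point of Phi.
print(np.allclose(x, L @ np.sin(np.linspace(0, np.pi, n)), atol=1e-6))  # True
```

Since $L$ has full row rank, $L E = I_k$ holds exactly, which is the analogue of the requirement $(R \circ E)(x) = x$ in (4.4); the harder condition $(E \circ R)(u) = u$ on $A$ is what the application-specific constructions of Chapter 6 have to approximate.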
5.1 Extension to continuous dynamical systems

We briefly review the contents of Section 2.2 that will be necessary for the proof of convergence of the subdivision algorithm in the case when the dynamical system is just continuous. The global attractor relative to a compact set $Q$ is defined by
$$A_Q = \bigcap_{j \ge 0} \varphi^j(Q)$$
(see Definition 2.1.2). Furthermore, we denote by $Q_\ell$ the union of the compact subsets obtained after $\ell$ subdivision steps (cf. (2.4) and (2.6)), that is,
$$Q_\ell = \bigcup_{B \in \mathcal{B}_\ell} B. \qquad (5.2)$$
Moreover, let $\mathcal{B}_0$ be a finite collection of closed subsets with $Q_0 = \bigcup_{B \in \mathcal{B}_0} B = Q$. Then the main convergence result of [DH97] (cf. Proposition 2.2.5) states that
$$\lim_{\ell \to \infty} h(A_Q, Q_\ell) = 0,$$
where $h(B, C)$ denotes the Hausdorff distance between two compact subsets $B, C \subset \mathbb{R}^n$. However, in that work the authors assume that $\varphi$ is a homeomorphism and not just continuous, as in the situation here. In the following we present a proof of convergence for continuous $\varphi$.

In order to prove convergence, we will essentially follow the structure of the proof of the main result in Section 2.2. However, there are some technical differences, and we will need one additional assumption on $A_Q$. More precisely, we assume that
$$\varphi^{-1}(A_Q) \subset A_Q.$$
This is automatically satisfied in the case where $\varphi$ is a homeomorphism. Moreover, if $A_k$ is attracting and $A_k = A_Q$, then $A_Q$ is backward invariant. These observations justify this assumption. Before we show the convergence result we start with the following lemma.

Lemma 5.1.1. Suppose that $B \subset Q$ satisfies $B \subset \varphi(B)$. Then $B \subset A_Q$.

Proof. From $B \subset \varphi(B)$ it follows that $\varphi^j(B) \subset \varphi^{j+1}(B)$ for all $j \ge 0$. Hence
$$B = \bigcap_{j \ge 0} \varphi^j(B) \subset \bigcap_{j \ge 0} \varphi^j(Q) = A_Q.$$

Next we will show that the box covering $Q_\ell$ obtained by the subdivision algorithm approaches the relative global attractor as $\ell$ tends to infinity.

Proposition 5.1.2. Let $\varphi \colon \mathbb{R}^k \to \mathbb{R}^k$ be continuous and suppose that $A_Q$ satisfies $\varphi^{-1}(A_Q) \subset A_Q$.
Then
$$A_Q = Q_\infty, \quad \text{where} \quad Q_\infty = \bigcap_{\ell=1}^{\infty} Q_\ell. \qquad (5.3)$$

Proof. We first show that $Q_\infty \subset A_Q$. To this end, let $y \in Q_\infty$. Then for every $\ell \ge 0$ there exists a unique $B_\ell(y) \in \mathcal{B}_\ell$ with $y \in B_\ell(y)$. By the selection step of the subdivision scheme (see (2.6)), there is a $z_\ell \in Q_\ell$ with $\varphi(z_\ell) \in B_\ell(y)$. Choosing a convergent subsequence of $(z_\ell)_\ell$, if necessary, we may assume that $z = \lim_{\ell \to \infty} z_\ell$. By construction, $z \in Q_\infty$, and since $\lim_{\ell \to \infty} \operatorname{diam}(B_\ell(y)) = 0$, we conclude that $\lim_{\ell \to \infty} \varphi(z_\ell) = y$. Finally, $\varphi$ is continuous, and therefore $y = \varphi(z)$. Hence $y \in \varphi(Q_\infty)$. Since $y \in Q_\infty$ was chosen arbitrarily, this yields $Q_\infty \subset \varphi(Q_\infty)$. By construction, $Q_\infty \subset Q$, and by Lemma 5.1.1 we obtain $Q_\infty \subset A_Q$. Since $\varphi^{-1}(A_Q) \subset A_Q$ by assumption, the inclusion $A_Q \subset Q_\infty$ follows directly from the proof of Lemma 2.2.2 in Section 2.2, and we have proven convergence.

In the next section we will show how to approximate the set $A_k$ if the compact invariant set $A$ is a global attractor.

5.2 Computation of embedded attractors via subdivision

In this section we additionally assume that $A \subset Y$ is an attracting set. Analogously to Section 2.1, we call $A$ an attracting set with fundamental neighborhood $U$ if for every open set $V \supset A$ there exists an $N \in \mathbb{N}$ such that
$$\Phi^j(U) \subset V \quad \forall\, j \ge N.$$
Moreover, we assume that the set $Q$ is chosen in such a way that
$$A_k \subset Q \quad \text{and} \quad E(Q) \subset U. \qquad (5.4)$$
Hence, for every $x \in Q$, the iterates $\Phi^j(E(x))$ will eventually approach the attracting set $A$ for $j \to \infty$. The main goal of this section is to present a convergence result for the approximation of attracting sets $A$. We begin with the following observation.

Proposition 5.2.1. Let $A_Q$ be the global attractor relative to the compact set $Q$, and suppose that $Q$ is chosen such that (5.4) is satisfied. Then
$$A_k \subset A_Q. \qquad (5.5)$$

Proof. By construction of the CDS (see (4.3)–(4.5)), we have $\varphi(A_k) = A_k$.
Thus, Lemma 5.1.1 implies that $A_k \subset A_Q$.

Remark 5.2.2. Observe that we can in general not expect that $A_k = A_Q$. In fact, by construction $A_Q$ may contain several invariant sets and related heteroclinic connections. In this sense $A_k$ will be embedded in $A_Q$.

Note that by Proposition 5.2.1 the set $A_Q$ defined in Section 2.2 contains the one-to-one image $A_k$ of the invariant set $A$ of $\Phi$, where $\Phi$ is the right hand side of our abstract infinite dimensional dynamical system (4.1). We now show that by using sufficiently high powers of $\Phi$ we can actually approximate a one-to-one image of $A$ if $A$ is attracting. Observe that the fact that for every $x \in Q$ the iterates $\Phi^j(E(x))$ eventually approach the attracting set $A$ for $j \to \infty$ does not guarantee that $A_k$ is also an attracting set for the dynamical system $\varphi$. For instance, it may be the case that for a certain $\bar{x} \in Q$ one has a "spurious fixed point" in the sense that $\bar{x} = \varphi(\bar{x})$, although $\Phi(E(\bar{x})) \ne E(\bar{x})$ may be closer to $A$ than $E(\bar{x})$. In order to overcome this problem we now define for $m \ge 1$ the continuous maps
$$\varphi_m = R \circ \Phi^m \circ E \qquad (5.6)$$
and denote the corresponding relative global attractors by $A_Q^m$, where
$$A_Q^m = \bigcap_{j \ge 0} \varphi_m^j(Q).$$

Remark 5.2.3.
1. Obviously $A$ is an invariant set for $\Phi^m$ for every $m$, and therefore we can still use $R$ as the observation map in our construction of the CDS $\varphi_m$ on $\mathbb{R}^k$.
2. Note that in general $\varphi_m \ne \varphi^m$, since equality can only be guaranteed on $A_k$.

Lemma 5.2.4. $A_k \subset A_Q^m$ for all $m \ge 1$.

Proof. Since $\Phi(A) = A$, we have $\varphi_m(A_k) = A_k$ for $m \ge 1$. Moreover $A_k \subset Q$ (see (5.4)), and Lemma 5.1.1 implies that $A_k \subset A_Q^m$.

Now let us define
$$A_Q^\infty = \bigcap_{m \ge 1} A_Q^m.$$
Then the following convergence result holds:

Proposition 5.2.5. Suppose $A$ is an attracting set with basin of attraction $U \supset A$ and choose $Q \subset \mathbb{R}^k$ such that $A_k \subset Q$ and $E(Q) \subset U$. Then $A_k = A_Q^\infty$.

Proof.
It follows directly from Lemma 5.2.4 that $A_k \subset A_Q^\infty$. Thus, it remains to show that $A_k \supset A_Q^\infty$. To this end, suppose that $x \in A_Q^\infty \setminus A_k$. As $A_k$ is compact, this implies $\operatorname{dist}(x, A_k) = \epsilon > 0$. Now $A$ is also compact, $R$ is continuous and $A_k = R(A)$. Therefore there is a $\delta > 0$ such that
$$\operatorname{dist}(u, A) < \delta \;\Rightarrow\; \operatorname{dist}(R(u), A_k) < \frac{\epsilon}{2} \quad \text{for } u \in Y.$$
Let $V = E(Q)$. Since $A$ is attracting and $V \subset U$ by assumption (see (5.4)), we can find some $m \ge 1$ such that $h(\Phi^m(V), A) < \delta$, where $h$ is the Hausdorff distance. By our choice of $\delta$ it follows that
$$h(R(\Phi^m(V)), A_k) = h(\varphi_m(Q), A_k) < \frac{\epsilon}{2}.$$
Since $\operatorname{dist}(x, A_k) = \epsilon > 0$, this implies that $x \notin \varphi_m(Q)$. Thus $x \notin A_Q^m$, and hence $x \notin A_Q^\infty$, which yields a contradiction.

Remark 5.2.6. Roughly speaking, Proposition 5.2.5 states that it is possible to approximate an attracting set of $\Phi$ if we perform the computations with appropriately high iterates of $\Phi$. We summarize the numerical realization of the subdivision method for the computation of embedded attractors in Algorithm 5.1.

In the next section we will assume that $u^* \in Y$ is an unstable steady state of the infinite dimensional dynamical system $\Phi$ (cf. (4.1)), and we denote by $W^u_\Phi(u^*) \subset A$ the corresponding unstable manifold. Moreover, let us denote by $p = R(u^*)$ the image of $u^*$ under the observation map $R$. In the next section we will show how to approximate the embedded unstable manifold $W^u(p) = R(W^u_\Phi(u^*))$ via an extension of the set-oriented continuation method introduced in Section 2.3.

Algorithm 5.1 The subdivision method for embedded attractors

Initialization: choose an integer $k > 2(1 + \sigma)d$, where $d$ denotes the upper box-counting dimension and $\sigma$ the thickness exponent of $A$. Choose an initial box $Q \subset \mathbb{R}^k$, defined by a $k$-dimensional cube of the form
$$Q(c, r) = \{ y \in \mathbb{R}^k : |y_i - c_i| \le r_i \text{ for } i = 1, \ldots, k \},$$
where $c, r \in \mathbb{R}^k$, $r_i > 0$ for $i = 1, \ldots, k$, are the center and the radii, respectively. Start the subdivision algorithm with a collection $\mathcal{B}_0$ such that $\bigcup_{B \in \mathcal{B}_0} B = Q$.
1. Realization of the subdivision step: in the $\ell$-th step, subdivide each box $B \in \mathcal{B}_{\ell-1}$ of the current collection by bisection with respect to the $s$-th coordinate, where $s$ is varied cyclically. Thus, in the new collection $\hat{\mathcal{B}}_\ell$ the number of boxes is increased by a factor of 2 (cf. (2.4), (2.5)).
2. Realization of the selection step: within each box $\hat{B} \in \hat{\mathcal{B}}_\ell$ choose a finite set of test points and replace the condition (2.6) by
$$\mathcal{B}_\ell = \big\{ B \in \hat{\mathcal{B}}_\ell : \exists\, \hat{B} \in \hat{\mathcal{B}}_\ell \text{ such that } \varphi(x) \in B \text{ for (at least) one test point } x \in \hat{B} \big\}. \qquad (5.7)$$
3. Repeat steps (1) and (2) until a prescribed size $\varepsilon$ of the diameter relative to the initial box $Q$ is reached. That is, stop when $\operatorname{diam}(\mathcal{B}_\ell) < \varepsilon \operatorname{diam}(Q)$.

5.3 Continuation for embedded unstable manifolds

In what follows, let us denote by $u^* \in Y$ the unstable steady state of the infinite dimensional dynamical system (4.1). Furthermore, we assume that $W^u_\Phi(u^*) \subset A$. The continuation starts at the point $p = R(u^*)$ of the embedded unstable manifold
$$W^u(p) = R(W^u_\Phi(u^*)), \qquad (5.8)$$
where $R$ is defined according to Theorem 3.3.8 or Theorem 3.3.6, respectively. Moreover, we choose a compact set $Q \subset \mathbb{R}^k$ containing $p$ in which we want to approximate $W^u(p)$. In the following we assume that $Q$ is large enough so that it contains the entire embedded unstable manifold of $p$, i.e.
$$W^u(p) \subset Q. \qquad (5.9)$$
However, this assumption can be relaxed, and we will discuss this point later in the context of the realization of the approximation scheme. In what follows, let us assume that $C_B = \mathcal{P}_s(p)$, the box of the partition $\mathcal{P}_s$ containing $p$, is a neighborhood of $p$, and let us denote by $A_{C_B}$ the embedded local unstable manifold satisfying $A_{C_B} = W^u(p) \cap C_B$.
Then the numerical realization of the continuation algorithm for the approximation of embedded unstable manifolds can be described as follows (see Section 2.3):

Algorithm 5.2 The continuation method for embedded unstable manifolds

Initialization: Given $k > 2(1 + \sigma)d$, we choose an initial box $Q \subset \mathbb{R}^k$, defined by a $k$-dimensional cube of the form
$$Q(c, r) = \{ y \in \mathbb{R}^k : |y_i - c_i| \le r_i \text{ for } i = 1, \ldots, k \},$$
where $c, r \in \mathbb{R}^k$, $r_i > 0$ for $i = 1, \ldots, k$, are the center and the radii, respectively. Choose a partition $\mathcal{P}_s$ of $Q$ and a box $C_B \in \mathcal{P}_s$ such that $p = R(u^*) \in C_B$.
1. Apply the subdivision algorithm (cf. Algorithm 5.1) with $\ell$ subdivision steps to $\mathcal{B}_0 = \{C_B\}$ to obtain a covering $\mathcal{B}_\ell \subset \mathcal{P}_{s+\ell}$ of the embedded local unstable manifold $A_{C_B}$.
2. Set $\mathcal{C}_0^{(\ell)} = \mathcal{B}_\ell$.
3. Realization of the continuation step: in each box $B' \in \mathcal{C}_j^{(\ell)}$ choose a finite set of test points and replace the condition (2.17) by
$$\mathcal{C}_{j+1}^{(\ell)} = \big\{ B \in \mathcal{P}_{s+\ell} : \exists\, B' \in \mathcal{C}_j^{(\ell)} \text{ and } x \in B' \text{ such that } \varphi(x) \in B \big\}. \qquad (5.10)$$

Remark 5.3.1. Due to the compactness of $Q$, the continuation in Step 3 of Algorithm 5.2 terminates after finitely many, say $J_\ell$, steps. We denote the corresponding box covering obtained by the continuation method by
$$G_\ell = \bigcup_{j=0}^{J_\ell} C_j^{(\ell)} = C_{J_\ell}^{(\ell)}. \qquad (5.11)$$

We expect that the algorithm, as constructed, generates an approximation of the embedded unstable manifold $W^u(p)$. In particular, we expect that the larger $s$ and $\ell$ are, the better the approximation will be. Analogously to Section 2.3, let us denote by $W_j \subset W^u(p)$ subsets of the embedded unstable manifold, where
$$W_0 = W^u(p) \cap C_B = A_{C_B} \quad \text{and} \quad W_{j+1} = \varphi(W_j) \text{ for } j = 0, 1, \ldots,$$
and by
$$C_j^{(\ell)} = \bigcup_{B \in \mathcal{C}_j^{(\ell)}} B$$
the unions obtained after the $\ell$-th continuation step. Then the following convergence result holds:

Proposition 5.3.2.
1. The sets $C_j^{(\ell)}$ are coverings of $W_j$ for all $j, \ell = 0, 1, \ldots$. Moreover, for fixed $j$, we have
$$\bigcap_{\ell=0}^{\infty} C_j^{(\ell)} = W_j.$$
2.
Suppose that $W^u(p) \subset Q$ is linearly attracting, i.e. there exist a $\lambda \in (0, 1)$ and a neighborhood $U \supset W^u(p)$ such that
$$\operatorname{dist}(\varphi(y), W^u(p)) \le \lambda \operatorname{dist}(y, W^u(p)) \quad \forall\, y \in U. \qquad (5.12)$$
Then the box coverings obtained by Algorithm 5.2 converge to the closure of the embedded unstable manifold $W^u(p)$. That is,
$$\bigcap_{\ell=0}^{\infty} G_\ell = \overline{W^u(p)}.$$

Proof.
1. This proof is in principle identical to the proof of Proposition 2.3.4. The first statement follows directly from the fact that the set $\mathcal{B}_\ell$ obtained by the subdivision algorithm is always a covering of $W_0 = A_{C_B}$ for all $\ell \ge 0$ (cf. Proposition 5.1.2). Furthermore, we observe that by Proposition 5.1.2, $C_0^{(\ell)}$ converges to the relative global attractor $A_{C_B} = W_0$ for $\ell \to \infty$. Since $j$ is fixed, a continuity argument shows that the sets $C_j^{(\ell)}$ converge to $W_j$ for $\ell \to \infty$, i.e.
$$\bigcap_{\ell=0}^{\infty} C_j^{(\ell)} = W_j.$$
2. For each $\ell$, Algorithm 5.2 yields a covering of $\overline{W^u(p)}$, i.e.
$$\bigcap_{\ell=0}^{\infty} G_\ell \supset \overline{W^u(p)}.$$
Thus, it remains to show that
$$\bigcap_{\ell=0}^{\infty} G_\ell \subset \overline{W^u(p)}.$$
To this end, suppose there is an $x \in \big(\bigcap_{\ell=0}^{\infty} G_\ell\big) \setminus \overline{W^u(p)}$. Since $\overline{W^u(p)}$ is compact, this yields $\operatorname{dist}(x, \overline{W^u(p)}) > 0$. Observe that, due to the realization of the continuation method, for each $\ell \ge 0$ Algorithm 5.2 generates a $\operatorname{diam}(\mathcal{B}_\ell)$ pseudo orbit $\{x_0, \ldots, x_{j(\ell)}\}$ with $x_{j(\ell)} = x$, that is,
$$x_j \in C_j^{(\ell)} \quad \text{and} \quad \varphi(x_j) \in \mathcal{P}_{s+\ell}(x_{j+1}) \quad \forall\, j \in \{0, \ldots, j(\ell) - 1\}.$$
Here we denote by $\mathcal{P}_{s+\ell}(x_{j+1}) \subset C_{j+1}^{(\ell)}$ the element of $\mathcal{P}_{s+\ell}$ which contains $x_{j+1} \in C_{j+1}^{(\ell)}$, and
$$j(\ell) = \min\{ j \in \{0, \ldots, J_\ell\} : x \in C_j^{(\ell)} \},$$
i.e. $x$ is covered after $j \ge j(\ell)$ continuation steps. Observe that the sequence $j(\ell)$ is monotonically increasing in $\ell$ (cf. Step 3 of Algorithm 5.2 and item 1 of Remark 2.3.3) and that
$$\|x_j - \varphi(x_{j-1})\| \le \operatorname{diam}(\mathcal{B}_\ell) \quad \forall\, j \in \{1, \ldots, j(\ell)\}. \qquad (5.13)$$
Let us suppose that $j(\ell)$ is bounded by some $J \in \mathbb{N}_0$, i.e. $\max_\ell j(\ell) = J$. Hence, by monotonicity of $j(\ell)$, there is an $\ell_0 \in \mathbb{N}_0$ such that $j(\ell) = J$ for all $\ell \ge \ell_0$.
Using item 1 of Remark 2.3.3 we have
$$x \in \left( \bigcap_{\ell=0}^{\ell_0 - 1} C_{j(\ell)}^{(\ell)} \right) \cap \left( \bigcap_{\ell=\ell_0}^{\infty} C_J^{(\ell)} \right) \subset \bigcap_{\ell=\ell_0}^{\infty} C_J^{(\ell)} = \bigcap_{\ell=0}^{\infty} C_J^{(\ell)}.$$
However, since $J$ is fixed, by the first part of Proposition 5.3.2 the latter is equal to $W_J \subset \overline{W^u(p)}$, which is a contradiction to $\operatorname{dist}(x, \overline{W^u(p)}) > 0$. Hence $j(\ell) \to \infty$ for $\ell \to \infty$.

Let us now suppose, w.l.o.g., that $Q \subset U$. If this is not the case, we choose $Q$ sufficiently small such that $W^u(p) \subset Q \subset U$. Then (5.12), (5.13) and the triangle inequality yield
$$\operatorname{dist}(x, \overline{W^u(p)}) \le \operatorname{dist}(\varphi(x_{j(\ell)-1}), \overline{W^u(p)}) + \operatorname{diam}(\mathcal{B}_\ell) \le \lambda \operatorname{dist}(x_{j(\ell)-1}, \overline{W^u(p)}) + \operatorname{diam}(\mathcal{B}_\ell) \le \cdots \le \lambda^{j(\ell)} \operatorname{dist}(x_0, \overline{W^u(p)}) + \operatorname{diam}(\mathcal{B}_\ell) \sum_{i=0}^{j(\ell)-1} \lambda^i \le \lambda^{j(\ell)} \operatorname{dist}(x_0, \overline{W^u(p)}) + \frac{\operatorname{diam}(\mathcal{B}_\ell)}{1 - \lambda} \longrightarrow 0 \quad \text{for } \ell \to \infty.$$
The last expression converges to zero since $\lambda \in (0, 1)$ and $\operatorname{diam}(\mathcal{B}_\ell) \to 0$ for $\ell \to \infty$ (see (2.5)). This, however, contradicts the fact that $\operatorname{dist}(x, \overline{W^u(p)}) > 0$, and it follows that
$$\bigcap_{\ell=0}^{\infty} G_\ell \subset \overline{W^u(p)},$$
which yields the desired statement.

Remark 5.3.3.
1. The assumption in part 2 of Proposition 5.3.2 is, for instance, not satisfied if $W^u(p)$ forms a heteroclinic connection between the steady state solution $p$ and another hyperbolic steady state $q$. In fact, in this case the algorithm also generates a covering of the embedded unstable manifold of $q$ (cf. Figure 2.3).
2. If $W^u_\Phi(u^*)$ is attractive but (5.12) is not satisfied, one can apply the subdivision algorithm (cf. Section 5.2), starting with the box covering $G_\ell$, in order to approximate $W^u(p)$ more accurately.
3. Observe that (5.12) is satisfied if the observation map $R$ is bi-Lipschitz with Lipschitz constant $L \ge 1$ and $W^u_\Phi(u^*)$ is linearly attractive with $\lambda \in (0, L^{-2})$.

6 Applications

In this chapter we will use the set-oriented techniques introduced in Chapter 5 to approximate finite dimensional invariant sets $A$ of infinite dimensional dynamical systems.
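Before turning to concrete applications, the subdivision scheme of Algorithm 5.1 can be sketched in one dimension, with boxes realized as intervals and the CDS replaced by the toy contraction $\varphi(x) = x/2$, whose attractor relative to $Q = [-1, 1]$ is $\{0\}$. The implementation below is a simplified illustration under these assumptions, not the GAIO code:

```python
import numpy as np

def subdivision(phi, Q=(-1.0, 1.0), steps=15, n_test=20):
    """One-dimensional sketch of the subdivision method (Algorithm 5.1)."""
    boxes = [Q]
    for _ in range(steps):
        # Subdivision step: bisect every box of the current collection.
        refined = []
        for a, b in boxes:
            m = 0.5 * (a + b)
            refined += [(a, m), (m, b)]
        # Selection step, cf. (5.7): keep a box if it contains the image
        # of at least one test point taken from some refined box.
        images = np.concatenate([phi(np.linspace(a, b, n_test))
                                 for a, b in refined])
        boxes = [(a, b) for a, b in refined
                 if np.any((images >= a) & (images <= b))]
    return boxes

# phi(x) = x/2 has the relative global attractor {0} in Q = [-1, 1].
cover = subdivision(lambda x: 0.5 * x)
print(max(abs(e) for box in cover for e in box))  # the covering shrinks around 0
```

After 15 steps the surviving intervals cluster around the attractor; the higher-dimensional variant used for the embedded attractor $A_k$ bisects one coordinate of $\mathbb{R}^k$ at a time, exactly as in step 1 of Algorithm 5.1.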
Typical application scenarios in which finite dimensional invariant sets of infinite dimensional dynamical systems arise include delay differential equations with small time delay ([Dri68, Chi03, CMRV05]) and certain types of dissipative dynamical systems described by partial differential equations, including the Kuramoto-Sivashinsky equation [FNST86, JKT90, Rob94], the Ginzburg-Landau equation [DGHN88], and several reaction-diffusion equations [Jol89]. In all these cases a finite dimensional so-called inertial manifold exists to which trajectories are attracted exponentially fast [CFNT88, FJK+88, Tem97].

In Section 6.1 we present a numerical realization of the core dynamical system (CDS) $\varphi$ and illustrate the applicability of the set-oriented methods introduced in Chapter 5 by the computation of embedded invariant sets and invariant measures of delay differential equations. Analogously, in Section 6.2 we show a numerical realization of $\varphi$ for partial differential equations and conclude with illustrations of several unstable manifolds of the one-dimensional Kuramoto-Sivashinsky equation. The results presented in this chapter are also contained in [DHZ16, ZDG18], to which the author has made substantial contributions.

6.1 Delay differential equations

As a first application we consider so-called delay differential equations (DDEs). DDEs are also called time-delay systems; in contrast to ordinary differential equations, the time derivative of the unknown function depends not only on the current state but also on states at previous times [Kua93]. Hence, in order to compute a solution of a DDE, an initial history over a time interval, and thus an initial function, has to be known. DDEs are of particular interest when more realistic models are required in which time-delayed aftereffects have to be considered, e.g. for applications in population dynamics, epidemiology and mechanics [Kua93, CR00, AHD07, KM13].
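The role of the initial history function can be illustrated by integrating a constant-delay DDE with a simple explicit Euler variant of the step-by-step approach. The thesis relies on Runge-Kutta methods with continuous interpolants; the Euler scheme, the function `integrate_dde` and all parameter choices below are simplifying assumptions of our own:

```python
import numpy as np

def integrate_dde(g, history, tau=1.0, t_f=10.0, h=0.001):
    """Explicit-Euler sketch for y'(t) = g(y(t), y(t - tau)) with constant delay.

    `history` supplies y(t) for t <= 0; the step h is chosen so that tau/h is
    an integer and delayed values fall exactly on grid points.
    """
    n_delay = round(tau / h)
    n_steps = round(t_f / h)
    # Pre-fill the buffer with the initial function on [-tau, 0].
    y = np.empty(n_delay + n_steps + 1)
    y[: n_delay + 1] = [history(-tau + i * h) for i in range(n_delay + 1)]
    for i in range(n_delay, n_delay + n_steps):
        y[i + 1] = y[i] + h * g(y[i], y[i - n_delay])
    return y[n_delay:]          # solution on [0, t_f]

# Linear test equation y'(t) = -y(t - 1) with constant history y(t) = 1.
sol = integrate_dde(lambda y, yd: -yd, history=lambda t: 1.0)
print(sol[0])   # 1.0, the value at t = 0
```

Changing the history function changes the entire trajectory, which is precisely why the state space of a DDE is a function space and why the flow below is defined on $C([-\bar{\tau}, 0], \mathbb{R}^n)$.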
In this section we restrict our attention to DDEs with (small) state dependent time delays. However, we will also consider DDEs with constant time delay. As a first step, let us define the infinite dimensional dynamical system

$$u_{j+1} = \Phi(u_j), \quad j = 0, 1, \dots,$$

where $\Phi \colon Y \to Y$ is a Lipschitz continuous operator on a Banach space $Y$, and then present a numerical realization of the CDS $\varphi$ (cf. Chapter 4). Parts of the results presented in this section have appeared in [DHZ16]; the author has made significant contributions to the results presented therein. Throughout this section we will consider DDEs of the general form

$$\dot{y}(t) = g(y(t), y(t - \tau(y))), \quad 0 \le t \le t_f, \qquad y(t) = u(t), \quad t \le 0, \qquad (6.1)$$

where $y \colon \mathbb{R} \to \mathbb{R}^n$, $\tau \colon \mathbb{R}^n \to \mathbb{R}_{>0}$ denotes the time delay, and $g \colon \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n$ is a smooth map. The basic assumptions on the delay function $\tau$ are as follows:

(A1) $\tau$ is continuously differentiable,
(A2) $0 < \tau(y) \le \bar{\tau}$ for all $y \in \mathbb{R}^n$.

While (A1) is a mild smoothness assumption, condition (A2) guarantees that the time delay does not vanish and, more importantly, that all solutions 'forget' their history prior to a maximal value $\bar{\tau}$. Observe that both assumptions are always fulfilled in the case of a constant time delay $\tau > 0$. Moreover, we denote by $u \colon [-\bar{\tau}, 0] \to \mathbb{R}^n$ the initial condition of (6.1) and by $C = C([-\bar{\tau}, 0], \mathbb{R}^n)$ the (infinite dimensional) state space of the dynamical system (6.1) (cf. [HL93]). Equipped with the maximum norm, $C$ is a Banach space. Let $y_u$ be the trajectory generated by (6.1) with the initial condition $u \in C$. Then the flow $\Phi^s \colon C \to C$ of (6.1) is given by

$$\Phi^s(u)(t) = y_u(s + t) \quad \text{for } t \in [-\bar{\tau}, 0].$$

Next we fix $T > 0$ and consider the corresponding time-$T$-map $\Phi^T \colon C \to C$ as our dynamical system. That is, we set

$$\Phi = \Phi^T \quad \text{and} \quad Y = C \qquad (6.2)$$

in our abstract infinite dimensional dynamical system (4.1).
In what follows we assume that upper bounds for both the box-counting dimension $d = d(A; Y)$ and the thickness exponent $\sigma = \sigma(A; Y)$ are available. This allows us to fix

$$k > 2 (1 + \sigma) d$$

according to Theorem 3.3.8 and Remark 3.3.9, respectively. In order to numerically realize the CDS

$$\varphi \colon \mathbb{R}^k \to \mathbb{R}^k, \quad \varphi = R \circ \Phi \circ E$$

(cf. Chapter 4), we have to work on three tasks: the implementation of the continuous map $E \colon \mathbb{R}^k \to Y$, of the observation map $R \colon Y \to \mathbb{R}^k$, and of $\Phi^T \colon Y \to Y$. For the latter we will rely on standard methods for the forward time integration of DDEs [BZ13]. A standard approach for solving DDEs of the form (6.1) consists of solving step by step the local problems

$$\dot{w}_{n+1}(t) = g(w_{n+1}(t), x(t - \tau(w_{n+1}))), \quad t_n \le t \le t_{n+1}, \qquad w_{n+1}(t_n) = y_n, \qquad (6.3)$$

via Runge-Kutta methods [But87], where

$$x(s) = \begin{cases} u(s) & \text{for } s \le 0, \\ \eta(s) & \text{for } 0 \le s \le t_n, \\ w_{n+1}(s) & \text{for } t_n \le s \le t_{n+1}, \end{cases}$$

and $\eta$ is the continuous interpolated solution computed by the method itself up to $t_n$. Observe that in (6.3) we denote by $y_n$ the approximation of $y(t_n)$ obtained by the Runge-Kutta method. For more details about Runge-Kutta methods for DDEs we refer the interested reader to [BZ13]. Depending on the underlying DDE (6.1), the observation map $R$ will be realized on the basis of Theorem 3.3.8 or Remark 3.3.9, respectively. For the numerical construction of the continuous map $E$ we will employ a bootstrapping method that re-uses results of previous computations. This way we will guarantee, in particular, that the identities in (4.4), i.e.

$$(E \circ R)(u) = u \quad \forall u \in A \qquad \text{and} \qquad (R \circ E)(x) = x \quad \forall x \in A_k,$$

are at least approximately satisfied. In fact, the identity $(R \circ E)(x) = x$ will always be satisfied.

6.1.1 Numerical realization of the delay-coordinate map R

For the definition of $R$ we have to specify the time span $T$ and appropriate corresponding observables.
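The step-by-step solution of the local problems (6.3) can be sketched in a few lines. The following is a minimal illustration of the method of steps for a scalar DDE with constant delay, not the production solver from [BZ13]: the linear interpolation of the stored solution and the classical fourth-order Runge-Kutta step are simplifying choices, and `solve_dde` is an illustrative helper name.

```python
def solve_dde(g, u0, tau, t_end, dt):
    """Method of steps for a scalar DDE y'(t) = g(y(t), y(t - tau)) with
    constant delay tau > 0 and initial function u0 on [-tau, 0].  The stored
    solution is interpolated linearly to evaluate the delayed argument, and
    each step is advanced with the classical fourth-order Runge-Kutta scheme."""
    n = int(round(t_end / dt))
    ts = [i * dt for i in range(n + 1)]
    ys = [u0(0.0)]

    def history(t):
        # delayed state: initial function for t <= 0, linear interpolation else
        if t <= 0.0:
            return u0(t)
        i = min(int(t / dt), len(ys) - 2)
        w = t / dt - i
        return (1.0 - w) * ys[i] + w * ys[i + 1]

    for i in range(n):
        t, y = ts[i], ys[i]
        f = lambda s, v: g(v, history(s - tau))
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt / 2 * k1)
        k3 = f(t + dt / 2, y + dt / 2 * k2)
        k4 = f(t + dt, y + dt * k3)
        ys.append(y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    return ts, ys

# u'(t) = -u(t - 1) with u = 1 on [-1, 0]: exact solution is 1 - t on [0, 1]
ts, ys = solve_dde(lambda y, yd: -yd, lambda t: 1.0, 1.0, 2.0, 0.1)
```

For this test equation the right-hand side is piecewise linear in time on each unit interval, so the sketch reproduces the exact values $u(1) = 0$ and $u(2) = -1/2$ up to round-off.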
Note that by assumption (A2) the time delay $\tau \colon \mathbb{R}^n \to \mathbb{R}$ is bounded from above by $\bar{\tau}$. Hence, we can fix the length of the time interval on which our observations are made. In the case of a scalar equation ($n = 1$) we choose the observable $f$ to be the function evaluation

$$f(u) = u(-\bar{\tau}). \qquad (6.4)$$

Therefore, by Theorem 3.3.8, $R$ is given by

$$R(u) = D_k[f, \Phi](u) = \big( u(-\bar{\tau}), \Phi(u)(-\bar{\tau}), \dots, \Phi^{k-1}(u)(-\bar{\tau}) \big)^\top. \qquad (6.5)$$

It remains to define the time span $T$ of the time-$T$-map $\Phi^T$ (cf. (6.2)). We choose $T$ to be a natural fraction of the maximal time delay $\bar{\tau}$, that is,

$$T = \frac{\bar{\tau}}{K} \quad \text{for } K \in \mathbb{N}. \qquad (6.6)$$

Remark 6.1.1.
1. Observe that a natural choice for $K$ in (6.6) would be $K = k - 1$ for $k > 1$, where $k$ denotes the embedding dimension (cf. Chapter 4). That is, for each evaluation of $R$ the observable would be applied to a function $u \colon [-\bar{\tau}, 0] \to \mathbb{R}$ at $k$ equally distributed time steps within the interval $[-\bar{\tau}, 0]$.
2. As described in Chapter 5 (see item 1 of Remark 5.2.3), we will frequently replace $\Phi$ by $\Phi^m$ ($m > 1$) in order to speed up the convergence towards the invariant sets $A$ and $A_k$, respectively. An illustration is shown in Figure 6.1.

Figure 6.1: Numerical realization of the observation map $R$ for $n = 1$, $k = 3$ (i.e. $K = 2$) and $m = 6K$. The evaluation of $R(\bar{u})$ yields $k$ function evaluations at equally distributed time steps within the interval $[5\bar{\tau}, 6\bar{\tau}]$.

For the numerical analysis of systems of DDEs (i.e. $n > 1$) we make use of Remark 3.3.9 as follows: for each component $u_j$ of $u$ we define a separate observable $f_j$, $j = 1, \dots, n$, by

$$f_j(u) = u_j(\nu_j) \quad \text{for some } \nu_j \in [-\bar{\tau}, 0], \qquad (6.7)$$

and choose different time spans

$$T_j = \frac{\nu_j}{K_j} \quad \text{for } K_j \in \mathbb{N} \qquad (6.8)$$

accordingly. Hence, $R$ is given by

$$R(u) = \big( u_1(\nu_1), \Phi^{T_1}(u_1)(\nu_1), \dots, \Phi^{K_1 T_1}(u_1)(\nu_1), \dots, u_n(\nu_n), \dots, \Phi^{K_n T_n}(u_n)(\nu_n) \big)^\top.$$

Observe that more general constructions for the observation map $R \colon C \to \mathbb{R}^k$ can be employed.
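For a scalar equation, the delay-coordinate map (6.5) amounts to sampling along a trajectory at $k$ times spaced $T$ apart. The sketch below shows the bookkeeping; here `y = math.sin` is a stand-in trajectory, not the solution of any particular DDE, and the shift interpretation of the time-$T$ map is the only ingredient.

```python
import math

tau_bar = 1.0           # maximal delay: the observation window is [-1, 0]
k = 5                   # embedding dimension
T = tau_bar / (k - 1)   # time span of the time-T map, i.e. K = k - 1

# For illustration the time-T map shifts along a known trajectory y(t):
# if u is the history segment ending at time s, then Phi^T(u)(t) = y(s + T + t).
# y = sin is a placeholder trajectory.
y = math.sin

def R(s):
    """Delay-coordinate map (6.5) applied to the history segment ending at
    time s: (u(-tau_bar), Phi(u)(-tau_bar), ..., Phi^{k-1}(u)(-tau_bar))."""
    return [y(s + i * T - tau_bar) for i in range(k)]

x = R(0.0)   # k samples of the trajectory at times -1, -0.75, -0.5, -0.25, 0
```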
In fact, due to Theorem 3.3.6, for any sufficiently large $k$ an arbitrary linear map $L \colon C \to \mathbb{R}^k$ will generically be one-to-one on $A$. Therefore, we can use almost every linear combination of trajectory points computed during forward integration for the construction of the map $R$. However, $R$ should always be defined in such a way that the conditions $(R \circ E)(x) = x$ for all $x \in A_k$ and $(E \circ R)(u) = u$ for all $u \in A$ can be realized numerically.

6.1.2 Numerical realization of the map E

In the application of the subdivision scheme for the computation of the embedded attractor $A_k$ described in Section 5.2, one has to perform the selection step

$$\mathcal{B} = \big\{ B \in \hat{\mathcal{B}} : \exists \hat{B} \in \hat{\mathcal{B}} \text{ such that } \varphi(x) \in B \text{ for (at least) one } x \in \hat{B} \big\}$$

(see (2.6)), where each box $\hat{B} \in \hat{\mathcal{B}}$ is discretized by a finite set of test points $x \in \mathbb{R}^k$. Analogously, in the application of the continuation scheme for the computation of the embedded unstable manifold $W^u(p)$ described in Section 5.3, the numerical realization of the continuation step is defined by

$$C^{(\ell)}_{j+1} = \big\{ B \in \mathcal{P}_{s+\ell} : \exists B' \in C^{(\ell)}_j \text{ and } x \in B' \text{ such that } \varphi(x) \in B \big\}$$

(cf. (2.17)). A box is kept or marked, respectively, if there is at least one test point $x$ such that $\varphi(x) \in B$. For the evaluation of $\varphi = R \circ \Phi \circ E$ at a test point $x$ we first need to define an initial condition $u(t)$, $t \in [-\bar{\tau}, 0]$, for the forward integration of the DDE (6.1), i.e. $u = E(x)$. In the first step of the subdivision procedure and in the first continuation step, when no information on $A$ is available, we proceed as follows. In the case of a scalar DDE, that is $n = 1$, we construct a piecewise linear (or piecewise spline) function $u = E(x)$ (see Figure 6.2), where

$$u(t_i) = x_i \qquad (6.9)$$

for $t_i = -\bar{\tau} + i \cdot T$, $i = 0, \dots, k - 1$, and where $T$ is defined according to (6.6) and item 1 of Remark 6.1.1.
Observe that by this choice of $E$ and $R$ the condition $(R \circ E)(x) = x$ is satisfied for each test point $x \in \mathbb{R}^k$ (see (4.4) and item 1 of Remark 6.1.1).

Figure 6.2: Numerical realization of $E$ via piecewise linear interpolation, where $n = 1$, $k = 3$ (i.e. $K = 2$). The components of $x$ are equally distributed within the interval $[-\bar{\tau}, 0]$ and the initial function $u = E(x)$ is generated by piecewise linear interpolation.

For systems of DDEs ($n > 1$) we proceed analogously and distribute the components of $x \in \mathbb{R}^k$ to the components $u_j$ of $u = E(x)$ according to (6.7) and (6.8). Also in this case the condition $(R \circ E)(x) = x$ still holds. In the subsequent steps of the subdivision and continuation procedures we make use of the information obtained in the previous steps. Observe that if $B \in \mathcal{B}_\ell$, then, by the selection step, there must have been a $\hat{B} \in \mathcal{B}_{\ell-1}$ such that $R(\Phi^m(E(\hat{x}))) \in B$ for at least one test point $\hat{x} \in \hat{B}$. Analogously, if $B \in C_j$, then there must have been a $\hat{B} \in C_{j-1}$ such that $R(\Phi^m(E(\hat{x}))) \in B$ for at least one test point $\hat{x} \in \hat{B}$. Therefore, we can use the information from the computation of $\Phi^m(E(\hat{x}))$ to construct an appropriate $E(x)$ for each test point $x \in B$. More concretely, in every step of the subdivision procedure, for every set $B \in \mathcal{B}_\ell$ we keep additional information about the trajectories $\Phi^m(E(\hat{x}))$ that were mapped into $B$ by $R$ in the previous step. In the simplest case, we store $s_i \ge 1$ additional equally distributed function values for each interval $(-\bar{\tau} + (i - 1) T, -\bar{\tau} + i T)$, $i = 1, \dots, k - 1$. When $\varphi(B)$ is to be computed using test points from $B$, we first use the points in $B$ for which additional information is available and generate the corresponding initial value functions via spline interpolation. Note that the more information we store, the smaller the error $\|\Phi^m(E(\hat{x})) - E(x)\|$ becomes for $x = R(\Phi^m(E(\hat{x})))$.
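The interplay between (6.9) and the sampling nodes of $R$ can be checked directly: a piecewise linear $E$ built on the nodes $t_i = -\bar{\tau} + iT$ satisfies $(R \circ E)(x) = x$ by construction. A minimal sketch ($k$ and $\bar{\tau}$ are chosen arbitrarily for illustration):

```python
tau_bar = 1.0
k = 5
T = tau_bar / (k - 1)   # nodes t_i = -tau_bar + i*T as in (6.9)

def E(x):
    """Piecewise linear initial function u : [-tau_bar, 0] -> R with
    u(-tau_bar + i*T) = x_i, cf. (6.9) and Figure 6.2."""
    def u(t):
        s = (t + tau_bar) / T          # position in units of T
        i = min(int(s), k - 2)         # interval index
        w = s - i                      # local interpolation weight
        return (1.0 - w) * x[i] + w * x[i + 1]
    return u

def R(u):
    """Observation map sampling at the same k equally distributed nodes,
    so that (R o E)(x) = x holds by construction."""
    return [u(-tau_bar + i * T) for i in range(k)]
```

Between the nodes, $E(x)$ simply interpolates linearly; e.g. for $x = (0.3, -0.1, \dots)$ the midpoint of the first interval takes the value $0.1$.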
Here, we use the norm $\|\cdot\|$ of the underlying Banach space $Y$. That is, we enforce an approximation of the identity $(E \circ R)(u) = u$ for all $u \in Y$ (see (4.4)). If the additional information is available only for a few points in $B$, we generate new test points in $B$ at random and construct corresponding trajectories by piecewise linear (or spline) interpolation.

6.1.3 Examples

In this section we present results of computations carried out for three different DDEs with (small) state dependent as well as constant time delay. In each case $u(t)$ is scalar, although for the Arneodo system with time delay the problem is recast into a three-dimensional form in order to obtain a first-order equation. The numerical realization of the map $E$ is as discussed in Section 6.1.2, i.e. when no information is available we create piecewise linear functions, and otherwise we use the additional information in order to construct initial functions via spline interpolation.

A price model with state dependent delay

Consider the scalar differential equation

$$\dot{u}(t) = a \big( u(t) - u(t - \tau(u)) \big) - |u(t)|\, u(t) \qquad (6.10)$$

involving some parameter $a > 0$ and a state dependent delay $\tau(u) > 0$. For the constant time delay $\tau(u) = 1$ this equation has been studied extensively in [BEW04]. If $a < 1$ then the zero solution $u(t) = 0$, $t \in \mathbb{R}$, of (6.10) is (locally) asymptotically stable, whereas in the case $a > 1$ it is unstable. In addition, if $a > 1$ then there exists a so-called slowly oscillating periodic solution $p \colon \mathbb{R} \to \mathbb{R}$ of (6.10), whose minimal period is given by three consecutive zeros [Stu12]. Moreover, the orbit $\mathcal{O} = \{ p(t) \mid t \in \mathbb{R} \}$ of the periodic solution $p$ coincides with the set $\overline{W} \setminus W$, where $W$ is a two-dimensional global center-unstable manifold of (6.10) at the zero solution. Note that the birth of the periodic solution at $a = 1$ is not a Hopf bifurcation.
The linearization of (6.10) always has a zero eigenvalue, and at the critical parameter value $a = 1$ a real eigenvalue of the linearization crosses the imaginary axis from the left to the right half-plane of $\mathbb{C}$. All these analytical results have been generalized to the case of a state dependent delay $\tau(u) > 0$ [Stu12]. Here, a further assumption on the delay function $\tau \colon \mathbb{R} \to \mathbb{R}$ is needed:

(A3) $|\tau'(u)| < 1/(4a^2)$ for all $-2a \le u \le 2a$.

This assumption ensures that for any solution $u$ bounded by $2a$ the delayed argument $t - \tau(u(t))$ is a strictly increasing function of $t$. This is important for the analytical proof of the existence of (periodic) solutions (cf. [Stu12, BZ13]). Following [Stu12] we choose $\tau$ to be a non-constant time-delay function with the stated properties (A1)-(A3), i.e.

$$\tau(u) = \exp(-c u^2).$$

The constant $c > 0$ may be chosen in such a way that for a fixed parameter $a > 0$ assumption (A3) is satisfied. In our computations of the embedded attractor we set $a = 2$ and $c = 1$, and therefore $\bar{\tau} = \max_u \tau(u) = 1$. In this parameter regime there exists a periodic orbit as discussed above, which is connected to an unstable equilibrium, i.e. $u(t) = 0$, via a center-unstable manifold. Following Section 6.1.1, we set

$$T = \frac{\bar{\tau}}{k - 1} = \frac{1}{4}$$

and choose the observable $f(u) = u(-\bar{\tau}) = u(-1)$ (see (6.4)). Moreover, we choose the embedding dimension $k = 5$ and the iteration exponent $m = 5$ (cf. item 2 of Remark 6.1.1), and approximate the relative global attractor $A_Q \subset \mathbb{R}^5$ for $Q = [-4, 4]^5$ by using Algorithm 5.1. In Figure 6.3 we show successively finer box coverings of the relative global attractor $A_Q$. Here the set $A_Q$ consists of a reconstruction of the two-dimensional center-unstable manifold of $u_0(t) = 0$ which accumulates on a stable periodic orbit at its boundary. In Figure 6.4(c) we show a box covering of the reconstructed periodic solution itself.
It has been obtained by removing a small open neighborhood $U$ of the origin from $Q = [-4, 4]^5$ and computing $A_{\tilde{Q}}$ for $\tilde{Q} = Q \setminus U$. Here, we have also increased the iteration exponent to $m = 10$. For the sake of comparison, we also show a direct simulation of (6.10) and a three-dimensional embedding of the trajectory obtained by direct simulation. Next, we also compute the unstable manifold of $u_0(t) = 0$ for $t \in [-1, 0]$. Here we consider a fine partition $\mathcal{P}_s$ of $Q = [-2, 2]^5$, where $s = 45$, and set $C_0 = \mathcal{P}_s(p)$ for $p = R(u_0) = (0, 0, 0, 0, 0)^\top$. In Figure 6.5(a) we show the embedded center-unstable manifold $W^u(p)$ obtained by Algorithm 5.2 (dark blue) as well as the relative global attractor $A_Q$ (gray). Moreover, in Figure 6.5(b) we show the embedded invariant measure obtained by Algorithm 2.3, where we use Algorithm 5.1 for the approximation of the relative global attractor $A_Q$ in step 1 of Algorithm 2.3. As expected, the highest density is along the periodic solution at the boundary of the relative global attractor.

Figure 6.3: Three-dimensional projections of successively finer coverings of the relative global attractor $A_Q$ for (6.10) after $\ell = 20, 30, 40$ and $50$ subdivision steps.

Figure 6.4: (a) Direct simulation of (6.10) for $u_0(t) = 1$, $t \in [-1, 0]$. (b) Three-dimensional embedding of the periodic solution obtained by direct simulation. (c) Box covering of the periodic solution after $\ell = 50$ subdivision steps.

If we are also interested in the particular states of the underlying infinite dimensional dynamical system (6.10) that have high density, i.e. the periodic solution, it suffices to apply the map $E$ to points in those boxes that have a high density. This generates functions in state space that have high probability and thus yields a statistical description of the underlying infinite dimensional dynamical system.
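The direct simulations referred to above are cheap to reproduce. The following sketch integrates the price model (6.10) with the state dependent delay $\tau(u) = \exp(-c u^2)$ by an explicit Euler variant of the method of steps; the nearest-grid-point evaluation of the delayed state, the step size, and the helper name `simulate` are crude illustrative choices, not the integrator used for the figures.

```python
import math

a, c = 2.0, 1.0
tau = lambda u: math.exp(-c * u * u)      # state dependent delay, tau_bar = 1

def simulate(u0, t_end, dt=0.001):
    """Explicit Euler method of steps for the price model (6.10),
    u'(t) = a*(u(t) - u(t - tau(u))) - |u(t)|*u(t),
    with constant initial history u0 on [-1, 0].  The delayed value is read
    from the stored solution at the nearest grid point (a crude but simple
    treatment of the state dependent delay)."""
    ys = [u0] * (int(round(1.0 / dt)) + 1)   # history samples on [-1, 0]
    for _ in range(int(round(t_end / dt))):
        u = ys[-1]
        lag = max(1, int(round(tau(u) / dt)))
        ys.append(u + dt * (a * (u - ys[-1 - lag]) - abs(u) * u))
    return ys

ys = simulate(1.0, 30.0)
```

Since $u \equiv 0$ is the only equilibrium and it is unstable for $a = 2$, the trajectory settles into a bounded oscillation rather than decaying.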
Figure 6.5: (a) Three-dimensional projection of the relative global attractor $A_Q$ (gray box covering) and the embedded center-unstable manifold $W^u(p)$, $p = (0, 0, 0, 0, 0)^\top$, for equation (6.10). (b) Embedded invariant measure for (6.10) obtained by a combination of Algorithm 2.3 and Algorithm 5.1. The density ranges from blue (low density) over green and yellow to red (high density).

The Arneodo system with delay

The second example is a modification of the Arneodo system [ACST85] where a delay is introduced in the first order derivative of $u$,

$$\frac{d^3 u}{dt^3}(t) + \frac{d^2 u}{dt^2}(t) + 2 \frac{d u}{dt}(t - \tau) - a\, u(t) + u^2(t) = 0.$$

This equation has been introduced and analyzed in [SV09]. In our computations we use the equivalent reformulation as a three-dimensional first-order system ($n = 3$)

$$\dot{u}_1 = u_2, \qquad \dot{u}_2 = u_3, \qquad \dot{u}_3 = -u_3 - b\, u_2(t - \tau) + a\, u_1 - u_1^2. \qquad (6.11)$$

The undelayed system (i.e. (6.11) with $\tau = 0$ and $b = 2$) has been studied extensively (e.g. [ACST85, GS84, KO99]). It possesses the equilibria $O_1 = (0, 0, 0)$ and $O_2 = (a, 0, 0)$; the latter is asymptotically stable for $a < 2$. At $a = 2$ the equilibrium $O_2$ undergoes a supercritical Hopf bifurcation (cf. [KO99]). For values of $a$ slightly larger than two, points on the two-dimensional unstable manifold of $O_2$ converge to the corresponding limit cycle on the branch of periodic solutions. For the delayed equation, i.e. $\tau > 0$, where $\tau$ is assumed to be constant (and therefore $\bar{\tau} = \tau$) and $b = 2$, the Hopf bifurcation occurs at decreasing values of $a$ for increasing values of $\tau$. For fixed $a = 2.5$, the amplitude of the limit cycle grows with increasing $\tau$, and the limit cycle loses its stability in a period-doubling bifurcation at $\tau \approx 0.11$ [SV09]. Our purpose is to investigate the structure of the relative global attractor right after the occurrence of the period-doubling bifurcation. Concretely, we set $a = 2.5$, $\tau = 0.13$, choose the embedding dimension $k = 5$, and approximate the relative global attractor $A_Q \subset \mathbb{R}^5$ for $Q = [-4, 4]^5$. This way we compute a reconstruction of the two-dimensional unstable manifold of $O_2$ which accumulates on a period-doubled limit cycle (see Figure 6.6, where we show a three-dimensional projection onto $x_1$, $x_4$ and $x_5$ of the box covering). Observe that after the period-doubling bifurcation the unstable manifold contains a Moebius strip with the period-doubled periodic solution at its boundary.

Figure 6.6: Successively finer coverings of the three-dimensional projection of the two-dimensional unstable manifold of $O_2$ for the Arneodo DDE (6.11) after (a) $\ell = 30$ and (b) $\ell = 45$ subdivision steps, in (b) together with the period-doubled orbit computed by direct simulation ($a = 2.5$, $\tau = 0.13$, embedding dimension $k = 5$ and iteration exponent $m = 15$; see Section 6.1.1).

In this example we have made use of Remark 3.3.9 in our numerical realization. Concretely, we have chosen $T = \tau / 2$ and the following three observables (see (6.7) and (6.8)):

$$f_1(u) = u_1(0), \quad k_1 = 1; \qquad f_2(u) = u_2(-\tau), \quad k_2 = 3, \ K_2 = 2; \qquad f_3(u) = u_3(0), \quad k_3 = 1.$$

Thus, our observation map $R$ can be written as

$$R(u) = \big( f_1(u), f_2(u), f_2(\Phi(u)), f_2(\Phi^2(u)), f_3(u) \big)^\top = \big( u_1(0), u_2(-\tau), u_2(-\tau/2), u_2(0), u_3(0) \big)^\top.$$

Observe that $R$ is linear, and therefore Theorem 3.3.6 can also be used in order to justify this construction. The corresponding continuous map $E$ is then defined by

$$E(x) = \begin{pmatrix} x_1 \\ I(x_2, x_3, x_4) \\ x_5 \end{pmatrix},$$

where $I \colon \mathbb{R}^3 \to \hat{C}$ denotes a piecewise spline interpolation with

$$I(x_2, x_3, x_4)(-\tau) = x_2, \quad I(x_2, x_3, x_4)(-\tau/2) = x_3, \quad I(x_2, x_3, x_4)(0) = x_4.$$

Here, $\hat{C} = C([-\tau, 0], \mathbb{R})$. Note that the identity $(R \circ E)(x) = x$ holds for all $x \in \mathbb{R}^5$.
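The observables for the delayed Arneodo system and a compatible right inverse $E$ can be written down directly. In this sketch the spline interpolation $I$ is replaced by a quadratic Lagrange interpolation through the three nodes $-\tau$, $-\tau/2$, $0$ (an illustrative simplification, not the interpolation used in the computations); the identity $(R \circ E)(x) = x$ then holds by construction either way.

```python
tau = 0.13   # constant delay used in the computations

def R(u1, u2, u3):
    """Linear observation map for the delayed Arneodo system (k = 5): one
    sample of u1 and u3 at time 0 and three samples of u2 on [-tau, 0].
    Each u_j is a function on [-tau, 0]."""
    return (u1(0.0), u2(-tau), u2(-tau / 2), u2(0.0), u3(0.0))

def E(x):
    """Right inverse of R: u1, u3 are constant functions and u2 interpolates
    (x[1], x[2], x[3]) at the nodes -tau, -tau/2, 0.  Quadratic Lagrange
    interpolation stands in for the piecewise spline interpolation I."""
    def u2(t):
        s = t / tau + 1.0              # rescaled to s in [0, 1]
        return (x[1] * 2.0 * (s - 0.5) * (s - 1.0)
                - x[2] * 4.0 * s * (s - 1.0)
                + x[3] * 2.0 * s * (s - 0.5))
    return (lambda t: x[0]), u2, (lambda t: x[4])
```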
In Figure 6.7(a) we show a three-dimensional projection of the relative global attractor $A_Q$ obtained by Algorithm 5.1, where we choose the same projection as above. Observe that we also obtain the one-dimensional unstable manifold of $O_1$. The invariant measure is shown in Figure 6.7(b). The highest density is along the period-doubled periodic solution at the boundary of the relative global attractor $A_Q$.

Figure 6.7: (a) Relative global attractor $A_Q$ for the Arneodo DDE (6.11) after $\ell = 40$ subdivision steps. (b) Embedded invariant measure for (6.11) after $\ell = 45$ subdivision steps. The density ranges from blue (low density) over green and yellow to red (high density).

The Mackey-Glass equation

Our final example is the well-known DDE introduced by Mackey and Glass in 1977 [MG77], namely

$$\dot{u}(t) = \beta \frac{u(t - \tau)}{1 + u(t - \tau)^\eta} - \gamma u(t), \qquad (6.12)$$

where we choose $\beta = 2$, $\gamma = 1$, $\eta = 9.65$, and a constant time delay $\tau = 2$. This equation is a model of blood production, where $u(t)$ represents the concentration of blood at time $t$, $\dot{u}(t)$ represents the production at time $t$, and $u(t - \tau)$ is the concentration of blood at an earlier time, when the request for more blood was made. This equation possesses a chaotic attractor [MG77] and has been studied in several contexts, e.g. for the analysis of the global dynamics of DDEs with unimodal feedback [RW07, LR09], for the prediction of chaotic time series [FS87, MBS93, MSR+97], and for the analysis of recurrence plots [ZW92]. Direct numerical simulations indicate that the dimension $d$ of the corresponding attracting set is larger than two (cf. Figure 6.8).

Figure 6.8: (a) Long-time simulation of (6.12) for the constant initial condition $u(t) = 0.5$, $t \in [-2, 0]$. (b) Three-dimensional embedding of one trajectory obtained by direct simulation.

We choose the embedding dimension $k = 7$ and approximate the relative global attractor $A_Q$ for $Q = [0, 1.5]^7 \subset \mathbb{R}^7$.
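Equation (6.12) is cheap to simulate directly. The following explicit Euler sketch with a constant history reproduces the bounded, irregular oscillations seen in Figure 6.8; the scheme, the step size, and the helper name `mackey_glass` are illustrative choices, not the integrator used for the computations in this section.

```python
beta, gamma, eta, tau = 2.0, 1.0, 9.65, 2.0   # parameters used in the text

def mackey_glass(u0, t_end, dt=0.01):
    """Explicit Euler integration of the Mackey-Glass equation (6.12),
    u'(t) = beta*u(t - tau)/(1 + u(t - tau)**eta) - gamma*u(t),
    with constant initial history u(t) = u0 on [-tau, 0]."""
    lag = int(round(tau / dt))
    ys = [u0] * (lag + 1)                     # history samples on [-tau, 0]
    for _ in range(int(round(t_end / dt))):
        u, ud = ys[-1], ys[-1 - lag]
        ys.append(u + dt * (beta * ud / (1.0 + ud ** eta) - gamma * u))
    return ys

ys = mackey_glass(0.5, 200.0)
```

Since $\dot{u} \le \beta \max_z z/(1 + z^\eta) - \gamma u$, the trajectory started at $u_0 = 0.5$ remains positive and bounded, consistent with the box $Q = [0, 1.5]^7$ used above.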
Observe that the functions $u_1(t) \equiv 0$ and $u_2(t) \equiv 1$ for $t \in [-2, 0]$ are steady states of (6.12). In Figure 6.9(a)-(b) we show projections of the coverings obtained after $\ell = 35$ and $\ell = 63$ subdivision steps. Here, we have removed a small open neighborhood $U$ of the origin from $Q = [0, 1.5]^7$. As already mentioned, the attractor $A$ is chaotic, and hence $A_k$ is chaotic as well. In this particular case the embedded invariant measure possesses a complex structure (see Figure 6.10).

Figure 6.9: Successively finer coverings of the relative global attractor $A_Q$ for the Mackey-Glass equation (6.12). (a) Box covering after $\ell = 35$ subdivision steps. (b) Transparent box covering after $\ell = 63$ subdivision steps, depicting the internal structure of $A_Q$.

Figure 6.10: Embedded invariant measure for (6.12) obtained by a combination of Algorithm 2.3 and Algorithm 5.1. The density ranges from blue (low density) over green and yellow to red (high density).

If we are also interested in the particular states of the underlying infinite dimensional dynamical system (6.12) that have high density, it suffices to apply the map $E$ to points in those boxes that have high density. This generates the corresponding functions in state space, and we thereby obtain a statistical description of the underlying infinite dimensional dynamical system.

6.2 Partial differential equations

In this section we consider partial differential equations (PDEs) that possess finite dimensional invariant sets. Application scenarios in which such sets arise are certain types of dissipative dynamical systems described by PDEs, including the Kuramoto-Sivashinsky equation [FNST86, JKT90, Rob94], the Ginzburg-Landau equation [DGHN88], and a scalar reaction-diffusion equation with a cubic nonlinearity [Jol89]. In all these cases a finite dimensional so-called inertial manifold exists to which trajectories are attracted exponentially fast [CFNT88, FJK+88, Tem97].
Roughly speaking, an inertial manifold describes how an infinite number of degrees of freedom are completely controlled by only a finite number of degrees of freedom. In this section we will present a numerical realization of the CDS for PDEs of the general explicit form

$$\dot{u}(y, t) = F(y, u), \quad u(y, 0) = u_0(y), \qquad (6.13)$$

where $u \colon \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}^n$ lies in some Banach space $Y$ and $F$ is a (nonlinear) differential operator. Observe that the Banach space $Y$ strictly depends on the underlying PDE. We assume that the dynamical system (6.13) has a well-defined semiflow on $Y$. The results presented in this section will also appear in [ZDG18]; the author has made significant contributions to the results presented therein. Analogously to the previous section, we assume that upper bounds for both the box-counting dimension $d$ and the thickness exponent $\sigma$ are available. This allows us to fix $k > 2 (1 + \sigma) d$ according to Theorem 3.3.6 or Remark 3.3.9, respectively. In order to numerically realize the CDS $\varphi = R \circ \Phi \circ E$ described in Chapter 4, we will follow the strategy of the previous section and again work on the following three tasks: the implementation of $E$, of $R$, and of the time-$T$-map of (6.13), denoted by $\Phi$. For the latter we will rely on standard methods for the forward time integration of PDEs, e.g. a fourth-order time stepping method for the one-dimensional Kuramoto-Sivashinsky equation [KT05]. Observe that the numerical realization of $\Phi$ strongly depends on the underlying PDE. The map $R$ will be realized on the basis of Theorem 3.3.6 or Remark 3.3.9, respectively. For the numerical realization of the continuous map $E$ we present an approach that uses statistical information from previous computations. This step is particularly crucial for the continuation method, since we want to restart the algorithm with initial conditions that fulfill the identities in (4.4) at least approximately, i.e. with initial functions that lie in a small neighborhood of the unstable manifold.
6.2.1 Numerical realization of the observation map R

In what follows we will assume that the function $u \in Y$ can be represented in terms of a basis $\{\Psi_i\}_{i=1}^{\infty}$, i.e.

$$u(y, t) = \sum_{i=1}^{\infty} x_i(t) \Psi_i(y), \qquad (6.14)$$

where the elements $\Psi_i$ are chosen from a function space (e.g. the Hilbert space $L^2$). Moreover, we will use the concept of a Galerkin projection, where we seek a finite dimensional representation of the function $u$, i.e.

$$u(y, t) \approx \sum_{i=1}^{S} x_i(t) \Psi_i(y). \qquad (6.15)$$

Here, our observation map $R$ will be defined by projecting $u$ onto $k$ of the coefficients $x_i$ in (6.15). Following [Rem96], an adequate basis $\{\Psi_i\}_{i=1}^{\infty}$ should satisfy several requirements:

1. The system $\{\Psi_i\}_{i=1}^{\infty}$ must be complete in the sense that the unknown solutions can be represented exactly.
2. For the solution to be unique, the $\Psi_i$ have to be linearly independent. From a technical point of view it is beneficial if the $\Psi_i$ form an orthogonal system.
3. If the classical method of a Galerkin projection is applied to PDEs, then the $\Psi_i$ must meet the boundary conditions of the underlying problem.

In [Pei17] these requirements have been discussed in the context of reduced order models for PDEs, where, in order to achieve a significant speedup, another requirement has been imposed:

4. To achieve high numerical efficiency, the basis should be as small as possible.

Furthermore, we want to keep as little additional information (i.e. additional coefficients $x_j$ for $j = k+1, \dots, S$) as possible in order to fulfill the requirement $(E \circ R)(u) = u$ for all $u \in A$ at least approximately. In the next section we give a short introduction to the proper orthogonal decomposition (POD) (see e.g. [Sir87, BHL93, HLBR12]), which can be used to compute a basis that is as small as possible.

Proper orthogonal decomposition

We want to compute an optimal basis $\{\Psi_i\}_{i=1}^{S}$ (i.e.
as small as possible) in the sense that it contains the 'most characteristic' data from an ensemble of functions. The notion 'most characteristic' implies the use of an averaging operation. Furthermore, this basis has to be capable of representing the solution $u$ of the underlying PDE (6.13) with an error that is as small as possible. Both requirements can be addressed by the proper orthogonal decomposition (see [Sir87, BHL93, HLBR12]) and can be formulated as an optimization problem in which the average (squared) error between the solution $u$ and its projection onto the space spanned by the basis functions $\{\Psi_i\}_{i=1}^{S}$ is minimized (see also [Row06, Vol11]):

$$\min_{\Psi_1, \dots, \Psi_S \in L^2} \frac{1}{T} \int_0^T \Big\| u(\cdot, t) - \sum_{i=1}^{S} \langle u(\cdot, t), \Psi_i \rangle_{L^2} \Psi_i \Big\|_{L^2}^2 \, dt, \quad \text{s.t. } \langle \Psi_i, \Psi_j \rangle_{L^2} = \delta_{ij}, \ 1 \le i, j \le S, \qquad (6.16)$$

where $\delta_{ij}$ is the Kronecker delta. Here, $\langle \cdot, \cdot \rangle_{L^2}$ denotes the inner product in $L^2$ and $\|\cdot\|_{L^2}$ the corresponding norm. In practice this is realized by defining a time grid $0 = t_0 < t_1 < \dots < t_r = T$ and taking $r$ snapshots at these time instances [Sir87]. Therefore, this approach is referred to as the method of snapshots, and the optimization problem (6.16) is transformed into

$$\min_{\Psi_1, \dots, \Psi_S \in L^2} \frac{1}{r} \sum_{j=1}^{r} \Big\| u(\cdot, t_j) - \sum_{i=1}^{S} \langle u(\cdot, t_j), \Psi_i \rangle_{L^2} \Psi_i \Big\|_{L^2}^2, \quad \text{s.t. } \langle \Psi_i, \Psi_j \rangle_{L^2} = \delta_{ij}, \ 1 \le i, j \le S. \qquad (6.17)$$

We note that equation (6.17) can equivalently be transformed into

$$\max_{\Psi_1, \dots, \Psi_S \in L^2} \sum_{i=1}^{S} \frac{1}{r} \sum_{j=1}^{r} \langle u(\cdot, t_j), \Psi_i \rangle_{L^2}^2, \quad \text{s.t. } \langle \Psi_i, \Psi_j \rangle_{L^2} = \delta_{ij}, \ 1 \le i, j \le S, \qquad (6.18)$$

see e.g. [HLBR12, Pei17]. Moreover, for $S = 1$ this yields

$$\frac{1}{r} \sum_{j=1}^{r} \langle u(\cdot, t_j), \Psi_1 \rangle_{L^2}^2 = \langle \Psi_1, \mathcal{R} \Psi_1 \rangle_{L^2},$$

where

$$\mathcal{R} \Psi_1 = \int_{\Omega} \Big( \frac{1}{r} \sum_{j=1}^{r} u(y, t_j)\, u(z, t_j) \Big) \Psi_1(z) \, dz$$

and $\Omega \subset \mathbb{R}^n$ denotes some (spatial) domain. The operator $\mathcal{R}$ is linear and self-adjoint and thus possesses orthonormal eigenfunctions $\Psi_i$ with associated positive eigenvalues $\sigma_i$, $i = 1, \dots, r$.
We state the following result, which can be found in a similar version in [BHL93].

Lemma 6.2.1. The solution to problem (6.18) is given by the eigenfunctions of $\mathcal{R}$ corresponding to the $S < r$ largest eigenvalues of $\mathcal{R}$.

Let us now assume that $\sigma_1 > \sigma_2 > \dots > \sigma_r$. Then the eigenvalues can be utilized to determine the amount of information that is neglected by truncating the basis to size $S < r$ [Sir87]:

$$\epsilon(S) := \frac{\sum_{i=S+1}^{r} \sigma_i}{\sum_{j=1}^{S} \sigma_j}. \qquad (6.19)$$

It is also possible to determine the amount of retained information by

$$\bar{\epsilon}(S) := \frac{\sum_{i=1}^{S} \sigma_i^2}{\sum_{j=1}^{r} \sigma_j^2}. \qquad (6.20)$$

Here, the relative importance of the $i$-th POD mode $\Psi_i$ is determined by the relative energy $E_i$ of that mode (cf. [Cha00, HLBR12]), defined by

$$E_i = \frac{\sigma_i^2}{\sum_{j=1}^{r} \sigma_j^2}.$$

It remains to show how we can apply this procedure to numerical data obtained by direct numerical simulation (e.g. by using a finite element method) or by experiments. As discussed above, we have to take $r$ snapshots at the time instances $0 = t_1 < t_2 < \dots < t_r = T$. To this end, let us denote by $u_h(t_i) \in \mathbb{R}^{n_x}$, $t_i \in \{t_1, \dots, t_r\}$, the numerical solution of (6.13) defined on a finite dimensional grid with $n_x$ nodes instead of a function space. We then arrange these vectors in the so-called snapshot matrix

$$S = \big( u_h(t_1) \ \cdots \ u_h(t_r) \big) \in \mathbb{R}^{n_x \times r}. \qquad (6.21)$$

Observe that there exists a close relationship between the proper orthogonal decomposition and the singular value decomposition (cf. e.g. [Cha00, LLL+02, Vol11]). Therefore, we perform a singular value decomposition [GR70] of the matrix $S$ and obtain

$$S = U \Sigma V^\top,$$

where $U \in \mathbb{R}^{n_x \times n_x}$, $\Sigma \in \mathbb{R}^{n_x \times r}$ and $V \in \mathbb{R}^{r \times r}$. The columns of $U$ give us a discrete representation of the POD modes $\Psi_i$, whereas the diagonal elements of $\Sigma$ correspond to the eigenvalues $\sigma_1, \dots, \sigma_r$ with $\sigma_1 > \sigma_2 > \dots > \sigma_r$. By choosing a value for $\epsilon$ in (6.19) or $\bar{\epsilon}$ in (6.20), respectively, we can easily determine the optimal length of our basis [Pei17].
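The snapshot-matrix construction (6.21) and the subsequent truncation can be condensed into a few lines of numpy. In the sketch below the tolerance 0.999, the synthetic rank-two snapshot data, and the helper name `pod_basis` are illustrative choices; the truncation follows the spirit of the energy criterion (6.20).

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """POD modes of a snapshot matrix in R^{n_x x r} via the singular value
    decomposition S = U Sigma V^T.  The basis is truncated to the smallest
    size capturing the requested fraction of the squared singular values."""
    U, sigma, _ = np.linalg.svd(snapshots, full_matrices=False)
    captured = np.cumsum(sigma**2) / np.sum(sigma**2)
    S_trunc = int(np.searchsorted(captured, energy)) + 1
    return U[:, :S_trunc], sigma

# synthetic snapshots of a field spanned by exactly two spatial profiles
y = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
t = np.linspace(0.0, 1.0, 20)
S = np.array([np.cos(tj) * np.sin(y) + 0.1 * np.sin(3 * tj) * np.cos(2 * y)
              for tj in t]).T                       # n_x x r as in (6.21)

Psi, sigma = pod_basis(S)
recon = Psi @ (Psi.T @ S)     # projection onto the truncated basis
```

Because the synthetic data has rank two, two modes capture essentially all of the energy and the projection reconstructs the snapshots up to round-off.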
For many applications the eigenvalues decay fast, so that a truncation to a small basis is possible. From now on we assume that the solution to (6.13) at time $t \in \mathbb{R}^+$ can be represented by

$$u(y, t) = \sum_{i=1}^{S} x_i(t) \Psi_i(y).$$

Given the basis $\{\Psi_i\}_{i=1}^{S}$ and using the fact that this basis is orthogonal, we then define the observation map by choosing $k$ different observables

$$f_i(u) = \langle u, \Psi_i \rangle = x_i \quad \text{for } i = 1, \dots, k. \qquad (6.22)$$

This yields

$$R(u) = (f_1(u), \dots, f_k(u))^\top = (x_1, \dots, x_k)^\top. \qquad (6.23)$$

Observe that $R$ is linear and hence, for $k$ sufficiently large, Theorem 3.3.6 and Remark 3.3.9, respectively, guarantee that $R$ will be a one-to-one map on $A$.

6.2.2 Numerical realization of the map E

Analogously to the numerical realization of the map $E$ for DDEs (cf. Section 6.1.2), for the evaluation of the CDS $\varphi = R \circ \Phi \circ E$ we first need to define the map $E$, that is, we need to generate initial functions for the forward integration of the PDE (6.13). In the first step of the subdivision or the continuation procedure, respectively, when no information on $A$ is available, we simply construct initial functions $u = E(x)$ by

$$E(x) = \sum_{i=1}^{k} x_i \Psi_i, \qquad (6.24)$$

where $\{\Psi_i\}_{i=1}^{S}$ denotes the POD basis. Observe that by this choice of $E$ and $R$ the condition $(R \circ E)(x) = x$ is satisfied for each test point $x$ (see (4.4) and (6.23)). In the subsequent steps of the subdivision as well as the continuation method we proceed as follows. Following the same strategy as described in Section 6.1.2, we construct initial functions using the additional information (i.e. the POD coefficients $x_{k+1}, \dots, x_S$) obtained in the previous steps. Thus, we construct initial functions by

$$E(x) = \sum_{i=1}^{S} x_i \Psi_i.$$

However, if the additional information is available only for a few points, we generate new test points at random and construct the corresponding initial functions via (6.24).
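In coordinates, the pair (6.23)/(6.24) is particularly simple: with an orthonormal (discrete) basis, $R$ extracts the first $k$ coefficients and $E$ reconstructs from them, so $(R \circ E)(x) = x$ holds exactly. In the sketch below a QR-generated random orthonormal matrix stands in for an actual POD basis, and the sizes are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, S, k = 32, 6, 3        # grid size, basis size, embedding dimension

# orthonormal stand-in for a POD basis (columns), via a QR factorization
Psi, _ = np.linalg.qr(rng.standard_normal((n_x, S)))

def R(u):
    """Observation map (6.23): the first k coefficients <u, Psi_i>."""
    return Psi[:, :k].T @ u

def E(x, x_tail=None):
    """Map (6.24): reconstruct from k coefficients; optionally append
    stored higher coefficients x_{k+1}, ..., x_S."""
    tail = np.zeros(S - k) if x_tail is None else np.asarray(x_tail)
    return Psi @ np.concatenate([np.asarray(x), tail])
```

Appending stored tail coefficients changes the reconstructed function but, by orthonormality, never its first $k$ observations, which is exactly why $(R \circ E)(x) = x$ is always satisfied while $(E \circ R)(u) = u$ is only approximate.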
Due to the observables we have chosen, this may yield initial functions that do not even approximately fulfill $(E \circ R)(u) = u$ (see Figure 6.11). Hence, it is possible that a function $u$ generated by (6.24) is not close to the invariant set. For the computation of attracting invariant sets $\mathcal{A}$ via the subdivision algorithm, we can ignore this problem and increase the iteration exponent $m$ sufficiently. However, for the continuation method the requirement $(E \circ R)(u) = u$ for all $u \in W^u_\Phi(u^*)$ (see (4.4)) is crucial in order to compute a 'nice' covering of the embedded unstable manifold. Therefore, for observables such as POD coefficients that result from a Galerkin scheme, we will present a new strategy that allows us to compute initial functions $u$ which fulfill the requirement $(E \circ R)(u) = u$ at least approximately.

In what follows, we describe this new approach for the continuation method. Note that if $B \in C^{(\ell)}_{j+1}$ (cf. Algorithm 5.2), then, by the continuation step, there must have been a $\hat{B} \in C^{(\ell)}_{j}$ such that $R(\Phi(E(\hat{x}))) \in B$ for at least one test point $\hat{x} \in \hat{B}$. Since we have to generate initial functions in box $B$, we make use of the information obtained during the computation of $\Phi(E(\hat{x}))$ to construct $E(x)$ for each new test point $x \in B$.

Figure 6.11: Illustration of the numerical realization of $E$ for the Kuramoto-Sivashinsky equation in one specific box $B$ after at least one continuation step: the black function represents $\Phi(E(\hat{x}))$ from the previous continuation step; the red function is generated by (6.24); the blue function is generated with our new approach.

To this end, we compute componentwise the mean value and the variance of all points that were mapped into box $B$ in the previous continuation step, i.e. we store additional information for each box $B$.
Then we perform a Monte Carlo sampling of the additional coefficients for those points that are generated in the current step in box $B$. This yields initial functions of the form

$$E(x) = \sum_{i=1}^{S} x_i \Psi_i,$$

where the $x_i$, for $i = k+1, \ldots, S$, are sampled as described above. Therefore, in each continuation step we generate initial functions that enforce an approximation of the identity $(E \circ R)(u) = u$ for all $u \in W^u_\Phi(u^*)$. In Figure 6.11 we show a comparison of $\Phi(E(\hat{x}))$ (black) with the initial functions generated by (6.24) (red) and by our new approach (blue) presented above.

6.2.3 Examples

In this section we present results of computations carried out for the Kuramoto-Sivashinsky equation in one spatial dimension, which is given by

$$\begin{aligned} u_t + \nu u_{xxxx} + u_{xx} + \tfrac{1}{2}(u_x)^2 &= 0, \quad (x,t) \in \mathbb{R} \times \mathbb{R}^+, \\ u(x,0) &= u_0(x), \\ u(x+L,t) &= u(x,t). \end{aligned} \tag{6.25}$$

This equation has been studied extensively over the past 40 years. It has been used to model interfacial turbulence in various physical contexts, such as phase dynamics in reaction-diffusion systems [KT76] or small thermal diffusive instabilities in laminar flame fronts [Siv77]. Moreover, numerical and analytical studies have been carried out (cf. [HNZ86, KNS90]) revealing a complex hierarchy of bifurcations. We are, in particular, interested in analyzing the different coherent spatial structures and the temporal chaos by computing the unstable manifolds of the trivial unstable steady state $u_0(x) = 0$ for different parameter values. Following [HNZ86, KNS90], we normalize the Kuramoto-Sivashinsky equation to an interval length of $2\pi$ and set the damping parameter to the original value derived by Sivashinsky, i.e. $\nu = 4$. Equation (6.25) can then be written as

$$\begin{aligned} u_t + 4 u_{xxxx} + \mu \left[ u_{xx} + \tfrac{1}{2}(u_x)^2 \right] &= 0, \quad 0 \le x \le 2\pi, \\ u(x,0) &= u_0(x), \\ u(x+2\pi,t) &= u(x,t). \end{aligned} \tag{6.26}$$

In equation (6.26) we introduced a new parameter $\mu = L^2/4\pi^2$, where $L$ denotes the size of a typical pattern scale.
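For concreteness, the right-hand side of (6.26) can be evaluated pseudo-spectrally on a $2\pi$-periodic grid. The following is a minimal sketch with function names of our own; the explicit Euler stepper is for illustration only (in practice a stiff integrator such as ETDRK4 would be used for the time-$T$-map $\Phi$).

```python
import numpy as np

def ks_rhs(u, mu):
    """Pseudo-spectral evaluation of the right-hand side of (6.26),
    u_t = -4 u_xxxx - mu * (u_xx + (u_x)^2 / 2),
    for u sampled on a uniform 2*pi-periodic grid."""
    n = u.size
    k = np.fft.fftfreq(n, d=1.0 / n)        # integer wavenumbers
    u_hat = np.fft.fft(u)
    u_x = np.fft.ifft(1j * k * u_hat).real
    u_xx = np.fft.ifft(-(k ** 2) * u_hat).real
    u_xxxx = np.fft.ifft((k ** 4) * u_hat).real
    return -4.0 * u_xxxx - mu * (u_xx + 0.5 * u_x ** 2)

def ks_time_T_map(u0, mu, T, dt=1e-4):
    """Crude time-T map via explicit Euler steps (illustration only;
    the step size is limited by the stiff fourth-order term)."""
    u = u0.copy()
    for _ in range(int(round(T / dt))):
        u = u + dt * ks_rhs(u, mu)
    return u
```

The linearization about $u = 0$ gives the growth rate $\lambda_j = -4j^4 + \mu j^2$ for the $j$-th Fourier mode, so small perturbations in the first mode grow for $\mu = 15$ but decay for $\mu = 1$; this is consistent with the trivial steady state being unstable in the parameter regimes studied below.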
In order to use our continuation algorithm described in Section 5.3, it is crucial to have a good estimate of the dimension of the invariant set $\mathcal{A}$ and of $W^u_\Phi(u^*)$, respectively. In [Rob94] it has been shown that the dimension of the inertial manifold of (6.25) for $\nu = 1$ satisfies $d \approx L^{2.46}$, i.e. the invariant set is of finite dimension. However, these estimates are very pessimistic and we expect to obtain a one-to-one image of the unstable manifold for smaller $k$. In what follows, the observation space is defined through projections onto the first $k$ POD coefficients, and the illustrations will usually show a projection onto the first three POD coefficients. For each parameter $\mu$ we compute the POD basis by using the snapshot matrix obtained through a long-time integration of $u_0(x) = 10^{-4} \cdot \cos(x) \cdot (1 + \sin(x))$ (cf. Section 6.2.1).

The traveling wave

For the parameter value $\mu = 15$, transients for arbitrary initial values get attracted to a traveling wave solution (cf. Figure 6.12 (a)). This solution can be described by a function $u(x - ct)$, where $c$ is the wave speed, which depends on $\mu$. We observe two waves, traveling in opposite directions [KNS90]. Due to the periodic boundary conditions, the wave appears as a limit cycle, forming a closed curve in the observation space (cf. Figure 6.12 (b)).

Figure 6.12: (a) Direct simulation of the Kuramoto-Sivashinsky equation for $\mu = 15$. The initial value gets attracted to a traveling wave solution. (b) Corresponding embedding in observation space, where the traveling wave appears as a limit cycle.

By choosing the embedding dimension $k = 3$, we restrict the initial functions in the first continuation step to the function space that is spanned by the first three POD modes.
Furthermore, since $k = 3$, we expect to approximate just a projection of the unstable manifold and thus not a one-to-one image of $W^u_\Phi(u^*)$. For a related discussion in the finite dimensional context we refer the interested reader to [SYC91]. We choose $Q = [-8, 8]^3$ and initialize a fine partition $\mathcal{P}_s$ of $Q$ for $s \in \mathbb{N}$. Next we set $T = 200$. The idea is to perform as few continuation steps as possible since, during each continuation step, we have to generate initial functions that are only approximations; this would otherwise influence our box covering. In order to still obtain a fine covering of the unstable manifold, we define a finite time grid $\{t_0, \ldots, t_N\}$, where $t_N = T$, and mark all boxes that are hit at each time step. This strategy will be used for each example in this section. In Figure 6.13 (a)-(d) we illustrate successively finer box coverings of the unstable manifold as well as a transparent box covering depicting the complex internal structure of the unstable manifold. Observe that the boundary of the unstable manifold consists of two limit cycles which are symmetric in the first POD coefficient $x_1$. This is due to the fact that the Kuramoto-Sivashinsky equation (6.26) has $\mathbb{Z}_2$ symmetry.

Figure 6.13: (a)-(c) Successively finer box coverings of the unstable manifold for $\mu = 15$, for $s = 9, 15, 21$ and $\ell = 0$. (d) Transparent box covering for $s = 27$ and $\ell = 0$ depicting the internal structure of the unstable manifold.

The stable heteroclinic cycle

As a next example, we consider the parameter value $\mu = 18$. The observed long-term behavior consists of a pulsation between two states, which appear to be $\pi/2$-translations of each other, each state being invariant under $\pi$-translation. The transients linger close to one of these states for a comparatively long time before they pulse back to the other (cf. Figure 6.14 (a)).
Figure 6.14: (a) Direct simulation of the Kuramoto-Sivashinsky equation for $\mu = 18$. (b) Corresponding embedding in observation space depicting the stable heteroclinic loop.

It was observed in [KNS90] that the pulsation, projected onto the $\cos(2x)$ and $\sin(2x)$ coefficient plane, respectively, appears as a straight line passing through the origin. In addition, different pulsations, resulting from different initial conditions, give straight lines that are rotations of each other about the origin. By projecting the pulsation onto the first three POD coefficients, we observe a similar behavior in observation space. Thus, we expect that the covering of the unstable manifold will be dense. The projection of the long-time simulation (cf. Figure 6.14 (a)) onto the first three POD coefficients is shown in Figure 6.14 (b). For the initialization of Algorithm 5.2, we choose the embedding dimension $k = 3$ and, analogously to the first example, restrict our initial functions to the function space generated by the first three POD modes. Moreover, we choose $Q = [-20, 20]^3$ and set $T = 200$. Then we use the same strategy as described above in the first example. In Figure 6.15 (a)-(b) we show two box coverings obtained by our continuation method for different values $s \in \mathbb{N}$ of the partition $\mathcal{P}_s$ of $Q$. As expected, the covering of the embedded unstable manifold is dense. This corresponds to the observation mentioned above.

Figure 6.15: (a)-(b) Successively finer box coverings of the unstable manifold for $\mu = 18$, for $s = 12$ and $s = 24$ with $\ell = 0$.

The Oseberg transition

In the last example we choose $\mu = 32$. In Figure 6.16 (a) and (b) we show a direct simulation as well as the corresponding embedding in the observation space.
The initial condition $u_0$ gets attracted to a so-called bimodal steady state, which after some time loses its stability, and the solution approaches a stable limit cycle as $t \to \infty$ [JJK01].

Figure 6.16: (a) Direct simulation of the Kuramoto-Sivashinsky equation for $\mu = 32$. (b) Corresponding embedding in observation space.

In Figure 6.17 (a) and (b) we show successively finer box coverings of the unstable manifold obtained by our continuation method. A very similar result has been obtained in [JJK01], where the authors called this behavior the Oseberg transition. In that work, the authors restricted the phase space to the invariant subspace of odd functions, in which the solutions can be represented by the Fourier sine series

$$u(x,t) = \sum_{j=1}^{\infty} b_j(t) \sin(jx), \tag{6.27}$$

where an eight-mode Galerkin truncation of (6.27) was used. The projection of their global attractor onto the first, second and third Fourier coefficients looks qualitatively the same as our unstable manifold illustrated in Figure 6.17.

Figure 6.17: (a)-(b) Successively finer box coverings of the unstable manifold for $\mu = 32$, for $s = 15$ and $s = 27$ with $\ell = 0$. Furthermore, in (b) we show a transparent box covering for $s = 27$ and $\ell = 0$ depicting the internal structure of the unstable manifold.

7 Improving the numerical efficiency

In this chapter we will present some efficiency enhancement strategies for the set-oriented techniques introduced in Chapter 5. Observe that in both the selection step

$$\mathcal{B}_\ell = \left\{ B \in \hat{\mathcal{B}}_\ell : \exists\, \hat{B} \in \hat{\mathcal{B}}_\ell \text{ such that } \varphi(x) \in B \text{ for (at least) one } x \in \hat{B} \right\}$$

and the continuation step

$$C^{(\ell)}_{j+1} = \left\{ B \in \mathcal{P}_{s+\ell} : \exists\, B' \in C^{(\ell)}_j \text{ and } x \in B' \text{ such that } \varphi(x) \in B \right\}$$

we have to check if there is at least one test point $x \in \hat{B}$ such that $\varphi(x) \in B$. Here, each box $\hat{B}$ is discretized by a (possibly very large) number of test points.
Thus, for each evaluation of the selection and continuation step, respectively, we may have to evaluate the core dynamical system (CDS) many times. Moreover, due to the construction of the CDS, each of its evaluations also includes evaluations of the underlying infinite dimensional dynamical system, e.g. the time-$T$-map of a PDE. Hence, the overall computational cost of both algorithms depends not only on the dimension of the embedded invariant set but also on an efficient realization of the selection and continuation step, respectively. It may also be the case that the initial embedding dimension $k$ has been chosen too low. Instead of increasing the embedding dimension and beginning the computation anew, we will follow a new strategy and utilize the existing results as a starting point for the consecutive computation.

In Section 7.1, we will develop a modified selection step in which the number of function evaluations of the CDS is decreased by a factor of approximately two. By storing information from previous selection steps of the subdivision algorithm in the random access memory (RAM) and using a slightly modified selection step, this strategy decreases the overall computational time by a factor of approximately four. In Section 7.2 we develop a sequential procedure for the subdivision method which adaptively increases the embedding dimension if it has initially been chosen too low, without starting the algorithm anew. This procedure is particularly useful for dynamical systems for which a priori estimates of the upper box-counting dimension and the thickness exponent are not known. Finally, in Section 7.3 we present a Koopman operator based continuation method in which the evaluation of the CDS is partially replaced by iterations of local Koopman operators. We start by introducing the Koopman operator and its numerical approximation via EDMD.
Then we will show how to apply the Koopman operator to the continuation step in order to compute approximations of embedded unstable manifolds more efficiently.

7.1 A modified selection step for the subdivision algorithm

In this section we will modify the numerical realization of the selection step in Algorithm 5.1. The selection step is responsible for the fact that the box coverings obtained by the subdivision method approach the relative global attractor. Roughly speaking, the main idea in this section is to decrease the number of function evaluations of the CDS, and therefore also the number of evaluations of the underlying infinite dimensional dynamical system, by a factor of approximately two.

Concretely, let us denote by $\mathcal{B}_0 = \{Q\}$ the initial box covering of our area of interest, where we want to approximate the relative global attractor $\mathcal{A}_Q$. In Chapter 2 and Chapter 5, respectively, we start with a subdivision step that creates a refined box collection $\hat{\mathcal{B}}_1$, i.e.

$$\bigcup_{B \in \hat{\mathcal{B}}_1} B = \bigcup_{B \in \mathcal{B}_0} B, \quad \text{where } \operatorname{diam}(\hat{\mathcal{B}}_1) < \operatorname{diam}(\mathcal{B}_0).$$

Then we perform one selection step which, if necessary, removes all boxes that do not cover the relative global attractor, i.e.

$$\mathcal{B}_1 = \left\{ B \in \hat{\mathcal{B}}_1 : \exists\, \hat{B} \in \hat{\mathcal{B}}_1 \text{ such that } \varphi(x) \in B \text{ for (at least) one } x \in \hat{B} \right\}.$$

Here, we choose $M \in \mathbb{N}$ test points $x^1 \in \hat{B}$ and check the condition

$$\varphi(x^1) \in B \text{ for (at least) one } x^1 \in \hat{B}. \tag{7.1}$$

We keep all boxes $B$ for which condition (7.1) holds. In this section, however, we will present a slightly different selection step. First, note that in order to check condition (7.1), we first have to generate initial conditions $u = E(x^1)$ for the underlying infinite dimensional dynamical system $\Phi$. At the very beginning of the subdivision algorithm, we construct initial conditions $u \in Y$ for which no additional information is available (cf. Section 6.1.2 and 6.2.2 for the numerical realization of $E$ for delay and partial differential equations, respectively).
Then we evaluate each initial function $u \in Y$ by the time-$T$-map of the underlying infinite dimensional dynamical system $\Phi$. Finally, we use the observation map $R$ in order to complete one function evaluation of the CDS, i.e.

$$x^1 \mapsto (R \circ \Phi \circ E)(x^1) = \varphi(x^1).$$

Observe that, due to the numerical realization of the CDS, we store additional information from the computation of $\Phi(E(x^1))$ in order to fulfill the identity $E(\varphi(x^1)) = (\Phi \circ E)(x^1)$ at least approximately (cf. Section 6.1.1 and Section 6.2.1, respectively). Let us denote by $x^1_\varphi = \varphi(x^1)$ all points that have been evaluated by the CDS in the first RGA step. Since we have only evaluated points $x^1$ with no additional information, we do not discard any boxes at this moment, i.e. $\mathcal{B}_1 = \hat{\mathcal{B}}_1$. This may result in a slightly different box covering (cf. Figure 7.1). We store all points $x^1$ and $x^1_\varphi$ in the random access memory. This completes the first RGA step (i.e. one subdivision and one selection step).

Figure 7.1: First step of the modified subdivision algorithm for $M = 10$. (a) Choose $M$ test points $x \in \hat{B}_i$, $i = 1, 2$ (marked by $\circ$). (b) Compute the image of all test points $x$ under $\varphi$ (marked by $*$) and keep all boxes. Observe that box $\hat{B}_2$ would have been discarded by applying the classical selection step.

Next, we create a refined box collection $\hat{\mathcal{B}}_2$ by applying the subdivision step to $\mathcal{B}_1$. In order to apply the selection step for the computation of $\mathcal{B}_2$, we again have to choose $M \in \mathbb{N}$ test points in each box $\hat{B} \in \hat{\mathcal{B}}_2$. Here, the main difference to the classical selection step is that we first use up to $M$ test points $x^1_\varphi \in \hat{B}$, for each $\hat{B} \in \hat{\mathcal{B}}_2$, that have been evaluated by the CDS in the first RGA step and are stored in the RAM. If a box $\hat{B} \in \hat{\mathcal{B}}_2$ has fewer than $M$ such test points, we fill $\hat{B}$ with test points $x^1 \in \hat{B}$ for which the image $\varphi(x^1) = x^1_\varphi$ already exists.
Observe that the evaluation of those points by the CDS is no longer necessary. Finally, we create new test points $x^2 \in \hat{B}$ at random such that all boxes $\hat{B} \in \hat{\mathcal{B}}_2$ are discretized by $M$ points.

Remark 7.1.1. Note that each box $\hat{B} \in \hat{\mathcal{B}}_2$ is discretized by $M \ge m_1 \ge 0$ points $x^1_\varphi$, $M - m_1 \ge m_2 \ge 0$ points $x^1$, and $M - m_1 - m_2 \ge m_3 \ge 0$ points $x^2$, such that $m_1 + m_2 + m_3 = M$ (cf. Figure 7.2 (a)).

Figure 7.2: Second step of the modified subdivision algorithm for $M = 10$. (a) Within each box $\hat{B} \in \hat{\mathcal{B}}_2$ choose $M$ test points according to Remark 7.1.1, where we mark $x^1_\varphi$ by a green, $x^1$ by a blue and $x^2$ by a black $\circ$. (b) Corresponding image points are marked by a $*$ of the same color. Only boxes 1 and 3 are kept.

For the numerical realization of the selection step we have to compute the image of each test point under $\varphi$. Since the image of $x^1$ already exists, i.e. $\varphi(x^1) = x^1_\varphi$, we only need to compute $\varphi(x^2)$ and $\varphi(x^1_\varphi)$, respectively. We denote by $X_2 = \{x^1, x^2\}$ the set of all points that have no additional information and for which an image has already been computed. Furthermore, we denote by $X^\varphi_2 = \{x^1_\varphi, \varphi(x^2)\}$ the set of the corresponding images. Analogously, we denote by ${}^\varphi X_2 = \{x^1_\varphi\}$ the set of all points that have been evaluated by the CDS once and by ${}^\varphi X^\varphi_2 = \{\varphi(x^1_\varphi)\}$ the set of the corresponding images. In what follows, we change the condition of the selection step to

$$\varphi(x) \in B \text{ for (at least) one } x \in {}^\varphi X_2 \tag{7.2}$$

and keep all boxes $B$ for which condition (7.2) is fulfilled. An illustration of the second selection step is shown in Figure 7.2 (b). The points $x \in {}^\varphi X^\varphi_2$ are marked by a green $*$ and we keep only the boxes $\hat{B}_1, \hat{B}_3 \in \hat{\mathcal{B}}_2$, i.e. $\mathcal{B}_2 = \{\hat{B}_1, \hat{B}_3\}$. This completes the second RGA step.

For RGA step $\ell = 3, 4, \ldots$ our approach works as follows. We create a refined box collection

$$\bigcup_{B \in \hat{\mathcal{B}}_\ell} B = \bigcup_{B \in \mathcal{B}_{\ell-1}} B, \quad \text{where } \operatorname{diam}(\hat{\mathcal{B}}_\ell) < \operatorname{diam}(\mathcal{B}_{\ell-1}).$$

Then each box $\hat{B} \in \hat{\mathcal{B}}_\ell$ is discretized by

$$\begin{aligned} M &\ge m_0 \ge 0 \text{ points } x \in {}^\varphi X_{\ell-1}, && (7.3)\\ M - m_0 &\ge m_1 \ge 0 \text{ points } x \in X^\varphi_{\ell-1}, && (7.4)\\ M - m_0 - m_1 &\ge m_2 \ge 0 \text{ points } x \in X_{\ell-1}, && (7.5)\\ M - m_0 - m_1 - m_2 &\ge m_3 \ge 0 \text{ new points } x \in \hat{B}, && (7.6) \end{aligned}$$

such that $\sum_{i=0}^{3} m_i = M$. If $m_1 > 0$, we compute the image of all $x \in X^\varphi_{\ell-1}$. Analogously, if $m_3 > 0$, we also compute the image of the new points $x \in \hat{B}$. Therefore, instead of evaluating the CDS $M$ times for each box, the number of evaluations reduces to $M \ge m_1 + m_3 \ge 0$. In boxes that cover areas of the global attractor relative to $Q$ that possess a high density, we often get $m_0 = M$. Hence, for those boxes it is not necessary to evaluate the CDS even once. Finally, we update the sets of points $X$, $X^\varphi$, ${}^\varphi X$ and ${}^\varphi X^\varphi$ as follows:

$$\begin{aligned} X_\ell &= X_{\ell-1} \cup \{x\}, && \forall x \in \hat{B}, \\ X^\varphi_\ell &= X^\varphi_{\ell-1} \cup \{\varphi(x)\}, && \forall x \in \hat{B}, \\ {}^\varphi X_\ell &= {}^\varphi X_{\ell-1} \cup \{x\}, && \forall x \in X^\varphi_{\ell-1}, \\ {}^\varphi X^\varphi_\ell &= {}^\varphi X^\varphi_{\ell-1} \cup \{\varphi(x)\}, && \forall x \in X^\varphi_{\ell-1}. \end{aligned}$$

In order to complete the RGA step, for $\ell = 3, 4, \ldots$, we check the condition

$$\varphi(x) \in B \text{ for (at least) one } x \in {}^\varphi X_\ell \tag{7.7}$$

and keep all boxes $B$ for which condition (7.7) holds. This completes the $\ell$-th RGA step. An illustration of step 3 of the modified subdivision algorithm is shown in Figure 7.3.

Figure 7.3: Third step of the modified subdivision algorithm for $M = 10$. (a) Within each box $\hat{B} \in \hat{\mathcal{B}}_3$ choose $M$ test points according to (7.3)–(7.6), where we mark $x \in {}^\varphi X_2$ by a red, $x \in X^\varphi_2$ by a green, $x \in X_2$ by a blue and $x^3$ by a black $\circ$. (b) Corresponding image points are marked by a $*$ of the same color. Here, we only need to evaluate the CDS for the black and green test points. The boxes $B_2$ and $B_4$ are kept.

Remark 7.1.2. In general, the sets $X^\varphi_{\ell-1}$ and ${}^\varphi X_{\ell-1}$, respectively, contain more than $M$ points for at least one box $\hat{B} \in \hat{\mathcal{B}}_\ell$. Hence, in order to prevent excessive memory usage, for each box $\hat{B} \in \hat{\mathcal{B}}_\ell$ we delete all unnecessary points at random such that only $M$ points are left. We conclude this section with an example.
Example 7.1.3. In this example we compare the subdivision method introduced in Section 5.2 (cf. Algorithm 5.1) with the subdivision method that uses our modified selection step. To this end, we consider the Mackey-Glass delay differential equation

$$\dot{u}(t) = \beta\, \frac{u(t-\tau)}{1 + u(t-\tau)^\eta} - \gamma u(t), \tag{7.8}$$

where $\beta = 2$, $\gamma = 1$, $\eta = 9.65$ and $\tau = 2$ (cf. Section 6.1.3). Since we are only interested in the performance of the modified selection step, we choose $k = 3$ and $Q = [0, 1.5]^3$, i.e. we compute a three-dimensional projection of the relative global attractor $\mathcal{A}_Q$. In order to speed up the convergence towards the attractor $\mathcal{A}$, we set the iteration exponent $m = 10$ (cf. (5.6)). Both algorithms terminate when $\operatorname{diam}(\mathcal{B}_\ell) = \frac{1}{2^8} \operatorname{diam}(Q)$. This stopping criterion is realized by 24 RGA steps. We discretize each box $\hat{B} \in \hat{\mathcal{B}}_\ell$, $\ell = 1, \ldots, 24$, with $M = 100$ randomly chosen test points. In Figure 7.4 we compare both algorithms with regard to the overall computational time as well as the time spent on the evaluation of the CDS. All other remaining functions, such as the generation of new test points $x$ or the marking of all boxes that are hit by $\varphi(x)$, are summarized as 'remaining functions'.

Figure 7.4: Comparison of the 'classical' subdivision algorithm (cf. Algorithm 5.1 and in particular (5.7)) for the computation of the embedded attractor of the Mackey-Glass equation (7.8) with the algorithm presented in Section 7.1, where the selection step has been modified. For the iteration exponent $m = 5$ we achieve a speed-up by a factor of approximately 4.

Observe that, by the realization of the modified selection step, we can decrease the overall time of our modified subdivision algorithm even further by choosing an iteration exponent $m = 5$.
This is justified since, in the modified selection step (7.7), we only use test points $x$ that have been evaluated by the CDS twice; hence, choosing $m = 5$ is comparable to the classical selection step with iteration exponent $m = 10$. Compared to Algorithm 5.1, the overall computational time for the approximation of the embedded attractor is decreased by a factor of approximately 4. However, in Figure 7.5 we see that after 24 RGA steps we already need 820.2 MB of RAM for the storage of preimage-image information and that the memory usage grows exponentially. This is the main drawback of our modified subdivision algorithm.

Figure 7.5: Memory usage of the subdivision method using the modified selection step for the Mackey-Glass equation (7.8).

7.2 Development of a sequential procedure

The choice of the embedding dimension $k$ is crucial in order to guarantee a one-to-one image of the finite dimensional attractor $\mathcal{A} \subset Y$. A good approximation of the upper box-counting dimension $d_B(\mathcal{A}; Y)$ is not always available. Although our set-oriented techniques allow us to compute an approximation of the box-counting dimension, we first have to perform some RGA steps. By using the box covering obtained via the subdivision method, we can estimate the dimension of the embedded attractor. It may thus occur that after $\ell$ RGA steps we realize that the chosen embedding dimension is too small. As already discussed in the previous section, the selection step of the subdivision method can be very expensive. Hence, choosing a new embedding dimension and restarting the whole subdivision algorithm would decrease our computational efficiency significantly. In this section, we will present a sequential procedure which adaptively increases the embedding dimension without restarting the subdivision algorithm.
Roughly speaking, if we realize after some RGA steps that the embedding dimension $k$ needs to be increased, then instead of beginning anew, we will utilize the existing result as a starting point for the consecutive computation. A flow chart of this approach is shown in Figure 7.6.

The general idea of our sequential procedure can be described as follows. First, we start with an initial embedding dimension $k$ and with a sufficiently high number $\ell$ (say) of RGA steps, in order to get a coarse box covering $\mathcal{B}_\ell$ of the relative global attractor $\mathcal{A}_Q$. Then our main loop begins. The sequential procedure terminates if the diameter of our boxes fulfills the stopping criterion (cf. item 3 of Algorithm 5.1). This can be realized through a sufficiently high number of RGA steps $M_\ell > \ell$.

Figure 7.6: Flow chart of the sequential procedure. After $\ell$ subdivision and selection steps in the initial embedding dimension $k$, we check if $k$ is bigger than twice the box-counting dimension of our current box covering $Q_\ell$. If this is not the case, we increase $k$, update our box collection $\mathcal{B}_\ell$ and proceed with the main loop. Otherwise, we make another RGA step.

If the stopping criterion is not fulfilled, we check if our initially chosen embedding dimension is sufficiently large. To this end, we approximate the box-counting dimension of $\mathcal{A}$ by computing the box-counting dimension of $Q_\ell$ (cf. (5.2)) and check if the embedding dimension $k$ fulfills $k > 2\, d_{\mathrm{box}}(Q_\ell)$ (cf. Definition 3.2.2). Note that here we assume that the thickness exponent of the invariant set $\mathcal{A}$ is equal to zero. However, if this is not the case, we can always make a worst-case estimate of the embedding dimension, since the thickness exponent is bounded from above by the upper box-counting dimension (cf. Lemma 3.3.3). Hence, we then have to check if the embedding dimension $k$ fulfills

$$k > 2 \left(1 + d_{\mathrm{box}}(Q_\ell)\right) d_{\mathrm{box}}(Q_\ell).$$
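The dimension check in the main loop amounts to one inequality. A minimal sketch (the helper name is ours): it returns the smallest admissible embedding dimension for a given box-counting dimension estimate, with the worst-case thickness bound available as an option.

```python
import math

def required_embedding_dim(d_box, thickness=0.0):
    """Smallest integer k satisfying k > 2 * (1 + thickness) * d_box.

    With thickness = 0 this is the check k > 2 d_box from the main
    loop; passing thickness = d_box gives the worst-case estimate,
    since the thickness exponent is bounded from above by the upper
    box-counting dimension (cf. Lemma 3.3.3).
    """
    return math.floor(2.0 * (1.0 + thickness) * d_box) + 1
```

For instance, an estimated box-counting dimension of 2.46 requires $k \ge 5$ under the zero-thickness assumption, but the worst-case bound is considerably more pessimistic.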
In what follows, we assume that the thickness exponent is equal to zero. If $k$ is bigger than twice the box-counting dimension of $Q_\ell$, we proceed with another RGA step. Otherwise, we start the update procedure (cf. Figure 7.6). If the embedding dimension is too small, we not only have to increase the embedding dimension, but we also have to construct a new box covering in dimension $K > k$, where $K = \lceil 2\, d_{\mathrm{box}}(Q_\ell) \rceil$. By virtue of our numerical realization of the subdivision step, where we subdivide each box by bisection with respect to the $j$-th coordinate and where $j$ is varied cyclically, we also have to update the current and final numbers of RGA steps $\ell$ and $M_\ell$, respectively (cf. Section 2.2). We then compute a new box covering in the embedding dimension $K$, perform one RGA step and increase $\ell$ by one. This completes one iteration of the main loop.

The crucial point in this procedure is the computation of the new box covering $\mathcal{B}_\ell$ in the embedding space $\mathbb{R}^K$. We will use those points that have already been evaluated by the CDS $\varphi\colon \mathbb{R}^k \to \mathbb{R}^k$ in the embedding dimension $k$, and we will add $K - k$ additional observations. For the particular realization we will make use of additional numerical information from the evaluation of the map $\Phi$ that is discarded by the observation map $R$. This approach will in particular depend on the numerical realization of the map $E$, and we will show how to use this procedure in the context of delay and partial differential equations. In what follows, we denote the CDS constructed in the embedding dimension $k$ by

$$\varphi^{(k)} = R_k \circ \Phi \circ E_k, \tag{7.9}$$

and whenever $k$ has to be increased, we will denote the new embedding dimension by $K$ and the corresponding CDS by $\varphi^{(K)}$.

7.2.1 Computation of new observations

After increasing the embedding dimension to $K > k$, the crucial step in the sequential procedure is the update of the box covering obtained by the subdivision method.
As a first step, we have to generate appropriate observations in the higher-dimensional embedding space $\mathbb{R}^K$ by using numerical information already obtained during the subdivision process for the embedding dimension $k$. In particular, this step is based on the numerical realization of the CDS $\varphi$ for delay and partial differential equations (cf. Chapter 6).

Delay-embedding coordinates

In this section we will show how to construct new observations in the case of delay-embedding coordinates. To this end, let us consider the general delay differential equation (DDE) of the form

$$\begin{aligned} \dot{y}(t) &= g(y(t), y(t - \tau(y))), \quad 0 \le t \le t_f, \\ y(t) &= \phi(t), \quad t \le 0, \end{aligned} \tag{7.10}$$

where $y(t) \in \mathbb{R}^n$ and $g\colon \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n$. Here, $\tau(y) > 0$ denotes the state dependent time delay, where we again assume that an upper bound $\bar{\tau} > 0$ exists, i.e. $0 < \tau(y) \le \bar{\tau}$ for all $y \in \mathbb{R}^n$. In this section, we will only focus on scalar equations ($n = 1$), since the numerical realization of the sequential procedure for systems of DDEs (i.e. $n > 1$) is a straightforward extension of the approach presented next. Thus, following Section 6.1, we use the observable $f(u) = u(-\bar{\tau})$, and therefore the observation map $R_k\colon Y \to \mathbb{R}^k$ (cf. (6.5)) is given by

$$R_k(u) = \left(u(-\bar{\tau}),\, \Phi(u)(-\bar{\tau}),\, \ldots,\, \Phi^{k-1}(u)(-\bar{\tau})\right)^\top,$$

where $\Phi = \Phi_{T_k}$ denotes the time-$T_k$-map of (7.10) for $T_k = \bar{\tau}/(k-1)$. That is, for each evaluation of $R_k$ applied to a function $u\colon [-\bar{\tau}, 0] \to \mathbb{R}$ we get $k$ equally distributed function values within the interval $[-\bar{\tau}, 0]$ (cf. item 1 of Remark 6.1.1). Analogously, we define the observation map $R_K\colon Y \to \mathbb{R}^K$, where

$$R_K(u) = \left(u(-\bar{\tau}),\, \Phi(u)(-\bar{\tau}),\, \ldots,\, \Phi^{K-1}(u)(-\bar{\tau})\right)^\top \tag{7.11}$$

yields $K > k$ equally distributed function evaluations within the interval $[-\bar{\tau}, 0]$. Here we set $T_K = \bar{\tau}/(K-1)$ and $\Phi = \Phi_{T_K}$ in $\varphi^{(K)}$.
We will use the following idea in order to generate new observations: for each box $B \in \mathcal{B}_\ell$ and each test point $x \in B$ that possesses additional numerical information obtained during the previous RGA steps, we use the numerical realization of the map $E_k$ described in Section 6.1.2 in order to generate initial functions $u$, i.e. $u = E_k(x)$ for $x \in B$. It is easy to see that by applying the observation map $R_K$ to each $u$ we obtain new observations $\bar{x} = R_K(u)$ in the higher-dimensional observation space. A schematic illustration of this procedure is shown in Figure 7.7.

Figure 7.7: Schematic illustration of the sequential procedure for delay-embedding coordinates, where $k = 3$ and $K = 5$. In the first step we make use of additional information obtained by $R_3$ to generate an initial function $u = E_3(x)$ for $x \in \mathbb{R}^3$. Then we obtain function evaluations $\bar{x} \in \mathbb{R}^5$ at 5 pairwise distinct points by using the observation map $R_5\colon Y \to \mathbb{R}^5$.

POD coefficients

In Section 6.2 we have presented a numerical realization of the CDS for partial differential equations (PDEs). Following Section 6.2, we assume that each function $u(y, t) \in Y$ can be approximated sufficiently well via

$$u(y, t) \approx \sum_{i=1}^{S} x_i(t) \Psi_i(y),$$

where we assume that $S \ge K$ and $\{\Psi_i\}$ denotes the POD basis obtained in the initialization step (cf. Section 6.2.1). Here, the observation map $R_k$ is defined as the projection onto the first $k$ POD coefficients. Furthermore, we define the observation map $R_K$ as the projection onto the first $K > k$ POD coefficients, where the first $k$ observations are equal to those obtained by $R_k$. Due to the numerical realization of $E_k$ and $R_k$, we can directly use the additional information gained in the previous RGA steps, since

$$u(y, t) = E_k(x) = \sum_{i=1}^{S} x_i(t) \Psi_i(y) \quad \text{for } x \in B$$

and

$$f_i(u) = \langle u, \Psi_i \rangle = x_i \quad \text{for } i = k+1, \ldots, K.$$
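For illustration, the delay observation map (7.11) can be sketched as follows. This is a minimal sketch with names of our own; a toy time-$T$ map stands in for the actual DDE solver, which in practice advances the full history function.

```python
import numpy as np

def delay_observation(u, Phi_T, k, tau_bar):
    """Delay-embedding observation map: evaluate the observable
    f(u) = u(-tau_bar) along k-1 iterates of the time-T map Phi_T.
    With T = tau_bar / (k - 1) this yields k equally spaced samples
    of the solution on the interval [-tau_bar, 0]."""
    samples = []
    for _ in range(k):
        samples.append(u(-tau_bar))
        u = Phi_T(u)  # advance the history function by time T
    return np.array(samples)
```

As a sanity check, for the trivial DDE $\dot{y}(t) = 1$ (solutions $y(t) = t + c$) the time-$T$ map simply shifts all history values by $T$, so the observation of the history $u(s) = s$ with $\bar{\tau} = 2$ and $K = 5$ returns the five equally spaced samples $(-2, -1.5, -1, -0.5, 0)$.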
7.2.2 Creating a new box covering

As a next step, we have to define a new box covering in the higher-dimensional space $\mathbb{R}^K$. Following Section 2.3 and Section 5.3, respectively, we first have to define a fine partition $\mathcal{P}^K_{\hat{s}}$ of $Q_K \subset \mathbb{R}^K$, where $\hat{s}$ depends on the number of RGA steps that have already been carried out for the embedding dimension $k$. To this end, we choose $Q_K$ as follows: for delay-embedding coordinates the box $Q_k$ is given by $Q_k = [q_1, q_2]^k$ with $q_1, q_2 \in \mathbb{R}$, and hence we set $Q_K = [q_1, q_2]^K$. However, if we use POD-coefficients as observables, we set

$$Q_K = Q_k \times [a_1, b_1] \times [a_2, b_2] \times \cdots \times [a_{K-k}, b_{K-k}],$$

where the $a_i, b_i \in \mathbb{R}$ have to be chosen suitably (see item 2 of Remark 7.2.1). Then we mark those boxes in the partition $\mathcal{P}^K_{\hat{s}}$ of $Q_K$ that are hit by the new observations $x^K \in \mathbb{R}^K$, where

$$x^K = (R_K \circ E_k)(x^k), \quad \text{for all } x^k \in B \text{ and all } B \in \mathcal{B} \subset \mathbb{R}^k. \tag{7.12}$$

Remark 7.2.1.

1. Following the numerical realization of the continuous map $E_k$, we only use points $x^k \in B$, $B \in \mathcal{B}$, for which there exists an $\hat{x}^k \in \hat{B}$, $\hat{B} \in \mathcal{B}_{-1}$, such that $x^k = \varphi^{(k)}(\hat{x}^k)$. Observe that for those points the identity $(E_k \circ R_k)(u) = u$, for all $u = E(x^k)$, holds at least approximately.

2. In order to compute $a_i, b_i \in \mathbb{R}$ for $i = 1, \ldots, K-k$, we make use of the information already obtained during the previous RGA steps in the embedding dimension $k$. More precisely, we set

$$a_i = \min\{f_{k+i}(u)\} - \delta, \qquad b_i = \max\{f_{k+i}(u)\} + \delta,$$

where $\delta > 0$ and $u = E(x^k)$ for all $x^k \in B$, $B \in \mathcal{B}$ (cf. item 1 of Remark 7.2.1).

It remains to discuss how the partition $\mathcal{P}^K_{\hat{s}}$ of $Q_K$ is defined. We note that the numerical realization of $\mathcal{P}^K_{\hat{s}}$ is based on the numerical realization of the subdivision step, where we subdivide each box by bisection with respect to the $j$-th coordinate, with $j$ varied cyclically. Let us assume that $k \in \mathbb{N}$ and $Q_k \subset \mathbb{R}^k$.
Then the partition $\mathcal{P}^k_s$ of $Q_k$ for $s \in \mathbb{N}$ is defined by

$$\bigcup_{B \in \mathcal{P}^k_s} B = Q_k, \qquad \operatorname{diam}(\mathcal{P}^k_s) = \operatorname{diam}(Q_k)/2^{\bar{s}}, \quad \text{where } \bar{s} = \lfloor s/k \rfloor,$$

i.e. the diameter of all boxes is decreased by a factor of $2$ after $k$ subdivision steps. However, those subdivision steps are only done virtually and we do not store the boxes $B$ of the partition $\mathcal{P}^k_s$. Using delay-embedding coordinates, we choose $\hat{s}$ such that $\operatorname{diam}(\mathcal{P}_s) = \operatorname{diam}(\hat{\mathcal{P}}_{\hat{s}})$ holds. Using POD-coefficients, we set

$$\hat{s} = \frac{s - \operatorname{mod}(s,k)}{k} \cdot K + \operatorname{mod}(s,k).$$

We conclude this section with the following example.

Example 7.2.2. Let us consider the Mackey-Glass DDE

$$\dot{u}(t) = \beta\, \frac{u(t-\tau)}{1 + u(t-\tau)^\eta} - \gamma\, u(t), \tag{7.13}$$

where $\beta = 2$, $\gamma = 1$, $\eta = 9.65$ and $\tau = 2$ (cf. Section 6.1.3). Without a priori knowledge of $d_B(A; Y)$, we choose the initial embedding dimension $k = 3$, the maximum number of RGA steps $M = 5 \cdot k$, and start with $s = 9$ RGA steps. In Figure 7.8 we show the dependency of the box-counting dimension on the number of RGA steps. In comparison, we also show the box-counting dimension after each RGA step computed by Algorithm 5.1 for $k = 6$. The algorithm using the sequential procedure described above terminates after 21 RGA steps, whereas the classical algorithm needs 30 RGA steps.

Figure 7.8: Comparison of the box-counting dimension of $Q$ for the sequential procedure and the computation carried out with Algorithm 5.1 for $k = 6$. The sequential procedure starts with embedding dimension 3; after $s = 9$ RGA steps the dimension is increased to 5 (update to $\hat{s} = 15$), and after one more step it is increased to 6 (update to $\hat{s} = 19$). The sequential procedure needs 21 steps in order to compute almost the same box covering as the classical algorithm with 30 RGA steps.

7.3 Koopman operator based continuation step

In this chapter we will introduce a modified continuation step for the computation of embedded unstable manifolds. As already discussed in the previous sections,
the evaluation of the CDS $\varphi$, or rather the evaluation of the underlying infinite dimensional dynamical system, may be very expensive. In particular, when each box of our collection obtained by the subdivision or continuation method is discretized by a large number of test points, this results in expensive selection and continuation steps, respectively. For the subdivision method this problem was tackled in Section 7.1. However, for the continuation algorithm presented in Section 5.3 this approach is not applicable. In this section, we will introduce a new approach. Note that, by step 3 of Algorithm 5.2, i.e.

$$\mathcal{C}^{(\ell)}_{j+1} = \big\{ B \in \mathcal{P}_{s+\ell} : \text{for } B' \in \mathcal{C}^{(\ell)}_j \; \exists\, x \in B' \text{ such that } \varphi(x) \in B \big\},$$

we have to evaluate the CDS for each box and each test point $x \in B'$ in order to mark those boxes in the partition $\mathcal{P}_{s+\ell}$ that are hit by $\varphi(x)$. Depending on the complexity of the underlying unstable manifold and how large $s+\ell \in \mathbb{N}$ is, we may have to discretize each box with a (possibly very) large number of test points. As already pointed out, this may result in an expensive continuation step, and hence in a prohibitively large computational time of the continuation method. Similar to the idea of Section 7.1, we want to reduce the number of function evaluations of the CDS without decreasing the number of test points in each box. Here, the idea is to evaluate only a few test points in each box via the CDS $\varphi$ and then to use this information in order to create local reduced order models (ROMs). The aim of model order reduction is to replace a large-scale system of, e.g., PDEs by systems of substantially lower dimension that have almost the same response characteristics. These ROMs can then be evaluated with a significant reduction of the computational effort (see e.g. [ASG01, BMS05, Pei17]). One possible approach to construct such a ROM is by means of the Koopman operator [Koo31].
This operator is linear but infinite dimensional; its modes and eigenvalues, which are associated with a fixed oscillation frequency and growth/decay rate, capture the evolution of observables describing any, even nonlinear, dynamical system [RMB+09, TRL+14]. In the Koopman operator context the observables are often described as real valued functions on the state space. The spectral properties of the Koopman operator (see e.g. [Mez05, BMM12]) play a crucial role in analyzing the underlying infinite dimensional dynamical system and can, for instance, be used to analyze fluid flows [RMB+10, Mez13]. Further application scenarios where Koopman operator theory can be applied are power system technologies [SMRH16] or optimal control of PDEs [BBPK16, PK17]. In what follows, we will introduce the Koopman operator and discuss how to efficiently compute numerical approximations via extended dynamic mode decomposition (EDMD) [WKR15, KKS16, KNK+18]. The convergence of EDMD towards the Koopman operator has recently been proven in [KM18]. Then we will show how to combine Koopman operator theory with our continuation method introduced in Section 5.3. Parts of the results in this section are contained in [ZPD18]; the author has made significant contributions to the results presented therein.

7.3.1 The Koopman operator

In this section, the Koopman operator and its numerical approximation via EDMD are introduced. We consider the discrete deterministic dynamical system

$$u_{j+1} = \Phi(u_j), \quad j = 0, 1, \ldots, \tag{7.14}$$

where $\Phi : Y \to Y$ is a Lipschitz continuous operator on a Banach space $Y$ (cf. (4.1) in Chapter 4). Moreover, we denote by $f : Y \to \mathbb{R}$ a real-valued Lipschitz continuous observable of the system. Given a vector space of observables $\mathcal{F}$ such that $f \circ \Phi \in \mathcal{F}$ for every $f \in \mathcal{F}$, we define the Koopman operator $\mathcal{K} : \mathcal{F} \to \mathcal{F}$ by

$$\mathcal{K}f(u) = f(\Phi(u)), \tag{7.15}$$

where $u \in Y$.
Unlike $\Phi$, which acts on $u \in Y$, the Koopman operator $\mathcal{K}$ acts on functions in the vector space of observables, i.e. on $f \in \mathcal{F}$. Therefore, the Koopman operator is infinite dimensional (even if $\Phi$ is finite dimensional), but it is also linear (even when $\Phi$ is nonlinear) [WKR15]. Let us denote by $\phi_j : Y \to \mathbb{R}$ the Koopman eigenfunctions and by $\mu_j \in \mathbb{C}$ the corresponding Koopman eigenvalues of $\mathcal{K}$, i.e.

$$\mathcal{K}\phi_j(u) = \mu_j\, \phi_j(u), \quad j = 1, 2, \ldots. \tag{7.16}$$

Moreover, let us consider a set of observables $f_i : Y \to \mathbb{R}$, $i = 1, \ldots, k$, and let us assume that each $f_i$ can be written as a linear combination of the linearly independent eigenfunctions $\phi_j$, i.e.

$$f_i(u) = \sum_{j=1}^{\infty} c_j\, \phi_j(u), \tag{7.17}$$

with $c_j \in \mathbb{C}$. Then by (7.16)

$$(\mathcal{K}f_i)(u) = \sum_{j=1}^{\infty} \mu_j\, c_j\, \phi_j(u). \tag{7.18}$$

Furthermore, let us denote by $R = (f_1, \ldots, f_k)^\top$ the vector of observables. Then, analogously to (7.18), we obtain

$$(\mathcal{K}R)(u) = \begin{bmatrix} \sum_{j=1}^{\infty} \mu_j\, c_{j,1}\, \phi_j(u) \\ \vdots \\ \sum_{j=1}^{\infty} \mu_j\, c_{j,k}\, \phi_j(u) \end{bmatrix} = \sum_{j=1}^{\infty} \mu_j\, \phi_j(u) \begin{bmatrix} c_{j,1} \\ \vdots \\ c_{j,k} \end{bmatrix} = \sum_{j=1}^{\infty} \mu_j\, \phi_j(u)\, v_j, \tag{7.19}$$

where $v_j = (c_{j,1}, \ldots, c_{j,k})^\top$ (cf. [Mez05, RMB+10, KKS16]). Following [RMB+10, KKS16], we will refer to the vectors $v_j \in \mathbb{R}^k$ as the Koopman modes of the map $\Phi$ corresponding to the observable $R$.

Remark 7.3.1.

1. Given the full-state observable $g(u) = u$, the connection between the dynamical system $\Phi$ and the Koopman operator $\mathcal{K}$ is as follows:

$$(\mathcal{K}g)(u) = g(\Phi(u)) = \Phi(u) \tag{7.20}$$

(see, e.g., [KKS16]). Thus, by using the Koopman eigenvalues $\mu_j$, eigenfunctions $\phi_j$ and modes $v_j$ corresponding to the observable $g$, we can compute $\Phi(u)$ with the aid of the Koopman operator, i.e.

$$\Phi(u) = (\mathcal{K}g)(u) = \sum_{j=1}^{\infty} \mu_j\, \phi_j(u)\, v_j.$$

2. Let $\mathcal{F} = L^\infty$. Then the Koopman operator $\mathcal{K} : L^\infty \to L^\infty$ is the adjoint of the transfer operator $P$ (cf. Remark 2.4.10), i.e.

$$\langle Pf, g \rangle = \langle f, \mathcal{K}g \rangle,$$

where we denote by $\langle \cdot, \cdot \rangle$ the duality pairing between $L^1$ and $L^\infty$ functions [KKS16].
Next, we are interested in a data driven method that approximates the leading Koopman eigenfunctions, eigenvalues and modes from a data set of consecutive snapshots.

Extended dynamic mode decomposition (EDMD)

EDMD is a possible algorithm to approximate the Koopman operator as well as the Koopman eigenfunctions, eigenvalues and modes introduced in Section 7.3.1. Depending on the underlying dynamical system, we will sometimes not be able to observe the full state of the system, in particular if it is infinite dimensional. Therefore, we will consider only a finite number of measurements (observations), given by $x = R(u) \in \mathbb{R}^k$ [PK17]. EDMD requires a set of data, i.e. we assume that we are given snapshots of observed data pairs

$$X = [x_1, \ldots, x_M], \qquad Y = [y_1, \ldots, y_M], \tag{7.21}$$

where $x_i = R(u_i)$ and $y_i = R(\Phi(u_i))$. Here, we do not assume that the data points $u_i$ lie on a single trajectory of (7.14). In order to approximate the Koopman eigenfunctions, eigenvalues and modes, we will in addition use a dictionary of observables [WKR15]. This approach is an extension of the classical dynamic mode decomposition (DMD) (cf. [Sch10, TRL+14, AM17]). In what follows, we will use the notation introduced in [KKS16]. Let us define the dictionary of observables by $\mathcal{D} = \{\psi_1, \psi_2, \ldots, \psi_N\}$, where $\psi_i \in \mathcal{F}$, $i = 1, \ldots, N$, are linearly independent basis functions, whose span we denote by $\mathcal{F}_N \subset \mathcal{F}$, i.e. $\mathcal{F}_N := \operatorname{span}\{\psi_1, \ldots, \psi_N\}$. Then, EDMD constructs a finite dimensional approximation $K : \mathcal{F}_N \to \mathcal{F}_N$ of the Koopman operator $\mathcal{K}$ by solving the least-squares problem

$$\min_{\hat{K} \in \mathbb{R}^{N \times N}} \big\| \hat{K}^\top \Psi_X - \Psi_Y \big\|_F^2, \tag{7.22}$$

where

$$\Psi_X = [\Psi(x_1), \ldots, \Psi(x_M)], \qquad \Psi_Y = [\Psi(y_1), \ldots, \Psi(y_M)]$$

and $\Psi(x) = [\psi_1(x), \ldots, \psi_N(x)]^\top$, i.e. $\Psi_X, \Psi_Y \in \mathbb{R}^{N \times M}$ (cf. [WKR15, KM18]). If $\Psi(x) = x$, we obtain DMD as a special case of EDMD.
An equivalent formulation of (7.22) is given by

$$\min_{\hat{K} \in \mathbb{R}^{N \times N}} \sum_{i=1}^{M} \big\| \hat{K}^\top \Psi(x_i) - \Psi(y_i) \big\|_2^2, \tag{7.23}$$

and we denote the solution to (7.22) or (7.23) by

$$K^\top = \Psi_Y\, \Psi_X^\dagger, \tag{7.24}$$

where $(\cdot)^\dagger$ denotes the Moore-Penrose pseudoinverse. The computation of (7.24) becomes computationally expensive for large $M$, since we have to compute the pseudoinverse of the matrix $\Psi_X$ [WKR15, KKS16]. By using the relationship $\Psi_X^\dagger = \Psi_X^\top (\Psi_X \Psi_X^\top)^\dagger$, we can compute a solution to (7.22) or (7.23) more efficiently by

$$K^\top = A\, G^\dagger, \tag{7.25}$$

where

$$A = \sum_{i=1}^{M} \Psi(y_i)\, \Psi(x_i)^\top, \qquad G = \sum_{i=1}^{M} \Psi(x_i)\, \Psi(x_i)^\top.$$

Let us denote by

$$\Xi = \begin{bmatrix} \xi_1 \\ \vdots \\ \xi_N \end{bmatrix} \tag{7.26}$$

the matrix that contains all left eigenvectors of $K^\top$, and by $\lambda_j$ the corresponding eigenvalues. This allows us to approximate the eigenfunctions $\phi(x) = (\phi_1(x), \ldots, \phi_N(x))^\top$ of the Koopman operator $\mathcal{K}$ (cf. [WKR15]) by

$$\phi(x) = \Xi\, \Psi(x). \tag{7.27}$$

Moreover, the full state observable $g$ can be written as $g(x) = B\, \Psi(x)$, where $B \in \mathbb{R}^{k \times N}$ is the corresponding coefficient matrix, i.e.

$$g_i(x) = \sum_{j=1}^{N} b_{ij}\, \psi_j(x).$$

Then, by using (7.27) we obtain

$$g(x) = B\, \Psi(x) = B\, \Xi^{-1} \phi(x) = \eta\, \phi(x), \tag{7.28}$$

where $\eta = B\, \Xi^{-1}$. The $i$-th column vector of $\eta$ represents the Koopman mode $v_i$. This finally allows us to evaluate the dynamical system using the Koopman eigenvalues, eigenfunctions and modes computed from data.

Remark 7.3.2.

1. We note that the accuracy of the approximation described above strongly depends on the set of basis functions $\mathcal{D} = \{\psi_1, \ldots, \psi_N\}$. In our numerical realization we will use monomials up to order two. However, more general basis functions like Hermite polynomials [BE53] or radial basis functions [Buh03] can be used [WKR15]. Among those two choices, the Hermite polynomials are the simpler ones and are best suited to problems defined on $\mathbb{R}^k$ if the data in $X$ is normally distributed.
Recently, in [LDBK17] an iterative approximation algorithm has been introduced which couples EDMD with a trainable dictionary represented by an artificial neural network. By employing this idea from machine learning, the dictionary can effectively and efficiently be adapted to the problem at hand without the need to choose a fixed dictionary a priori.

2. Following [PK17], we only consider observations of the system $x = R(u)$ and therefore obtain a dynamical system for the observations only. In order to obtain the 'true' full state observable, we can set $x_i = u_i$ and $y_i = \Phi(u_i)$ in (7.21).

Convergence of EDMD

In this section we will briefly review the main results of [KM18], where convergence of EDMD towards the Koopman operator has recently been proven. We denote by $K_{N,M} = K^\top$, $K_{N,M} : \mathcal{F}_N \to \mathcal{F}_N$, the finite dimensional approximation of the Koopman operator obtained by EDMD. As above, we denote by $N$ the number of basis functions in $\mathcal{D} = \{\psi_1, \ldots, \psi_N\}$ and by $M$ the number of measurements in (7.21). In order to state the two theorems required for the convergence, we first introduce the following two assumptions.

Assumption 7.3.3. The basis functions $\psi_1, \ldots, \psi_N$ are such that

$$\mu\big(\{ x \in Z \mid c^\top \Psi(x) = 0 \}\big) = 0 \quad \text{for all nonzero } c \in \mathbb{R}^N,$$

where $Z \subset \mathbb{R}^k$ and $\mu$ is a given probability distribution according to which the $x_i = R(u_i)$ are drawn. This natural assumption ensures that the measure $\mu$ is not supported on a zero level set of a linear combination of the basis functions $\psi_1, \ldots, \psi_N$ (cf. [KM18] for more details).

Assumption 7.3.4. The following conditions hold:

1. The Koopman operator $\mathcal{K} : \mathcal{F} \to \mathcal{F}$ is bounded.

2. The observables $\psi_1, \ldots, \psi_N$ defining $\mathcal{F}_N$ are selected from a given orthonormal basis of $\mathcal{F}$, i.e., $(\psi_i)_{i=1}^{\infty}$ is an orthonormal basis of $\mathcal{F}$.

The first part of this assumption holds, for instance, when $\Phi$ is invertible (e.g.
$\Phi$ is a flow on a Banach space), Lipschitz with Lipschitz inverse, and $\mu$ is the Lebesgue measure on $Z$. By using the Gram-Schmidt process, the second part becomes nonrestrictive, since any countable dense subset of $\mathcal{F}$ can be turned into an orthonormal basis [KM18]. The convergence of $K_{N,M}$ to $\mathcal{K}$ is now achieved in two steps. First, convergence of $K_{N,M}$ to $K_N$ is shown as the number of samples $M$ tends to infinity, where $K_N$ is the projection of $\mathcal{K}$ onto $\mathcal{F}_N$. Then, convergence of $K_N$ to $\mathcal{K}$ is shown by taking the limit $N \to \infty$.

Theorem 7.3.5 ([KM18], Theorem 2). If Assumption 7.3.3 holds, then we have with probability one for all $\phi \in \mathcal{F}_N$

$$\lim_{M \to \infty} \| K_{N,M}\, \phi - K_N\, \phi \| = 0,$$

where $\|\cdot\|$ is any norm on $\mathcal{F}_N$. In particular,

$$\lim_{M \to \infty} \| K_{N,M} - K_N \| = 0,$$

where $\|\cdot\|$ is any operator norm, and

$$\lim_{M \to \infty} \operatorname{dist}\big(\sigma(K_{N,M}), \sigma(K_N)\big) = 0,$$

where $\sigma(\cdot) \subset \mathbb{C}$ denotes the spectrum of an operator and $\operatorname{dist}(\cdot, \cdot)$ the Hausdorff metric on subsets of $\mathbb{C}$.

In [PK17], a switching time optimization problem is approximated using Koopman operator based model reduction techniques, where under specific constraints and under both assumptions stated above the optimal solutions are identical. In order to show convergence, the authors use a summarized version of the main theorem in [KM18], i.e. the convergence of $K_N$ to $\mathcal{K}$, which is given as follows:

Theorem 7.3.6 ([PK17], Theorem 2.5). Let Assumption 7.3.4 hold and define the $L^2(\mu)$ projection of a function $\phi$ onto $\mathcal{F}_N$ by

$$P^\mu_N \phi = \operatorname*{arg\,min}_{f \in \mathcal{F}_N} \| f - \phi \|_{L^2(\mu)}.$$

Then the sequence of operators $K_N P^\mu_N = P^\mu_N \mathcal{K} P^\mu_N$ converges strongly to $\mathcal{K}$ as $N \to \infty$.

Throughout the remainder of this section, we will assume that Assumptions 7.3.3 and 7.3.4 are satisfied. However, we note that in particular the boundedness of $\mathcal{K}$ does not hold for all dynamical systems.
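In observation-space coordinates, the EDMD construction (7.22)-(7.25) is a small linear least-squares problem. The following Python sketch is purely illustrative (the thesis computations use GAIO/MATLAB); the scalar test map $\Phi(x) = x/2$ and the monomial dictionary are chosen so that the fit is exact:

```python
import numpy as np

def edmd(X, Y, psi):
    """EDMD least-squares fit: K^T = A G^+ with
    A = sum_i Psi(y_i) Psi(x_i)^T and G = sum_i Psi(x_i) Psi(x_i)^T."""
    PsiX = np.column_stack([psi(x) for x in X])   # N x M
    PsiY = np.column_stack([psi(y) for y in Y])
    A = PsiY @ PsiX.T
    G = PsiX @ PsiX.T
    return A @ np.linalg.pinv(G)

# Hypothetical scalar system Phi(x) = 0.5 x with dictionary {1, x, x^2};
# the dictionary is closed under Phi, so K^T represents K exactly.
psi = lambda x: np.array([1.0, x, x * x])
X = np.linspace(-1.0, 1.0, 20)
Y = 0.5 * X
K_T = edmd(X, Y, psi)
B = np.array([[0.0, 1.0, 0.0]])        # full-state observable g(x) = B Psi(x)
x_pred = (B @ K_T @ psi(0.8)).item()   # one Koopman step, ≈ 0.4
```

For this closed dictionary the eigenvalues of $K^\top$ are $1$, $1/2$ and $1/4$, i.e. the powers of the contraction rate picked up by the monomials.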
Combining the Koopman operator with the CDS

We will now combine the Koopman operator approach with the CDS defined in Chapter 4, i.e.,

$$x_{j+1} = \varphi(x_j), \quad j = 0, 1, 2, \ldots, \tag{7.29}$$

where $\varphi = R \circ \Phi \circ E$. Let us denote by $u_j = E(x_j)$ the initial values constructed via the continuous map $E : \mathbb{R}^k \to Y$. One iteration of the CDS is then given by $x_{j+1} = R(\Phi(u_j))$.

Figure 7.9: General approach of Section 7.3: instead of evaluating $x_j$ via the CDS, where each time we also have to evaluate the underlying infinite dimensional dynamical system $\Phi$, we use a local approximation of the Koopman operator via EDMD in order to compute $x_{j+1}$.

Since $R$ is a vector valued observable (cf. Section 7.3.1), we get the equivalent formulation

$$x_{j+1} = R(\Phi(u_j)) = (\mathcal{K}R)(u_j), \tag{7.30}$$

i.e. one evaluation of the CDS is equivalent to one iteration of the initial function $u_j$ by the Koopman operator (cf. (7.15) and (7.19)). Using EDMD, we can compute an approximation of the Koopman operator $\mathcal{K}$. Since the Koopman operator is linear, we can use it in order to iterate a large number of test points very fast in comparison to the evaluation of those points with the CDS (see Figure 7.9). Let us be more precise. In the $(j+1)$-th continuation step, i.e.

$$\mathcal{C}^{(\ell)}_{j+1} = \big\{ B \in \mathcal{P}_{s+\ell} : \exists\, B' \in \mathcal{C}^{(\ell)}_j \text{ and } x \in B' \text{ such that } \varphi(x) \in B \big\} \tag{7.31}$$

(cf. Algorithm 5.2), we have to mark all boxes $B$ in the partition $\mathcal{P}_{s+\ell}$ which are hit by the test points $x \in B'$ under $\varphi$. This step may become computationally expensive if the number of test points is large and/or the evaluation of $\Phi$ is expensive. Let us suppose that an approximation of the Koopman operator via EDMD is available, i.e.

$$\varphi(x) \approx B\, K^\top \Psi(x), \tag{7.32}$$

where $B$ denotes the projection matrix (cf. (7.28)) and $x \in \mathbb{R}^k$. Then we can define a Koopman operator based continuation step as follows:
$$\mathcal{C}^{(\ell)}_{j+1} = \big\{ B \in \mathcal{P}_{s+\ell} : \exists\, B' \in \mathcal{C}^{(\ell)}_j \text{ and } x \in B' \text{ such that } B\, K^\top \Psi(x) \in B \big\}. \tag{7.33}$$

This modified continuation step allows us to compute $\mathcal{C}^{(\ell)}_{j+1}$ very efficiently, since the evaluation of the underlying infinite dimensional dynamical system is no longer required. Next, we will show that the box coverings obtained by (7.31) and (7.33) are identical.

Theorem 7.3.7. Let Assumptions 7.3.3 and 7.3.4 be satisfied. Then the box coverings $\mathcal{C}^{(\ell)}_{j+1}$ obtained by (7.31) and (7.33), respectively, are identical.

Proof. By Assumptions 7.3.3 and 7.3.4, Theorems 7.3.5 and 7.3.6 yield convergence of EDMD to the Koopman operator. Consequently, we have

$$B\, K^\top \Psi(x_i) = B\, \Psi(x_{i+1}) = B\, \Psi(\varphi(x_i)) = g(\varphi(x_i)) = \varphi(x_i), \tag{7.34}$$

where $g(x) = B\, \Psi(x)$ is the full state observable according to item 2 of Remark 7.3.2. Hence, by (7.34) the coverings obtained by (7.31) and (7.33) are identical.

We note that in Theorem 7.3.7 the number of samples $M$ (i.e. $x_i = R(u_i)$ for $i = 1, \ldots, M$) tends to infinity, where it is also assumed that the samples in state space $u_i \in A$ are drawn either independently or ergodically from some measure $\mu$. This allows us to at least approximate the Koopman operator for the CDS defined on the invariant set $A_k$. However, by construction of the continuation method, we want to construct approximations of the Koopman operator locally, i.e. for each box of the current box collection $\mathcal{C}^{(\ell)}_j$. To this end, we will proceed as follows: first, we discretize each box $B' \in \mathcal{C}^{(\ell)}_j$ by a finite set of test points (say $L$). Then we evaluate $M \ll L$ test points via the CDS. We use this information in order to create snapshots of observed data pairs $X$ and $Y$ (cf. (7.21)). Then we use EDMD to approximate a local Koopman operator $K \in \mathbb{R}^{N \times N}$, i.e. a Koopman operator which approximates the dynamics of $\varphi$ restricted to the box $B'$. Instead of using the CDS, we then evaluate the remaining $L - M$ points via the Koopman operator.

Remark 7.3.8.

1.
The accuracy of the approximation of the local Koopman operator not only depends on the set of basis functions $\mathcal{D}$, but also on the $M$ test points chosen for the generation of the data pairs $X$ and $Y$, respectively. We discretize each box by an outer grid, where we use three points in each dimension. This yields $M = 3^k$ test points, where $k$ denotes the embedding dimension.

2. For the numerical realization of the modified continuation step we will additionally evaluate each point $x_r$, $r = 1, \ldots, M$, by the CDS on an equidistant time-grid $0 = t_0 < t_1 < \ldots < t_n = T$. Thus, for $\varphi_\omega = R \circ \Phi^\omega \circ E$, where $\omega = T/n$, we define the snapshot matrices $X$ and $Y$ as follows:

$$X = \big[\, x_1,\, \varphi_\omega(x_1),\, \ldots,\, \varphi_{(n-1)\omega}(x_1),\, \ldots,\, x_M,\, \ldots,\, \varphi_{(n-1)\omega}(x_M) \,\big]$$

and

$$Y = \big[\, \varphi_\omega(x_1),\, \ldots,\, \varphi_{n\omega}(x_1),\, \ldots,\, \varphi_\omega(x_M),\, \ldots,\, \varphi_{n\omega}(x_M) \,\big].$$

Hence, in order to approximate $\varphi(x) = (R \circ \Phi \circ E)(x)$, where $\Phi = \Phi^T$, we have to compute

$$\varphi(x) \approx B\, (K^\top)^{n+1}\, \Psi(x). \tag{7.35}$$

3. The decomposition of the Koopman operator into eigenvalues, eigenfunctions and modes allows us to compute future states (i.e. $\varphi(x)$) via

$$\varphi(x) \approx B\, (K^\top)^{n+1}\, \Psi(x) = B\, \Xi^{-1} \Lambda^{n+1}\, \Xi\, \Psi(x) = \eta\, \Lambda^{n+1} \phi(x), \tag{7.36}$$

where $\Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_N)$. In (7.32), the evaluation of a test point $x$ via the Koopman operator is just an approximation of the evaluation by the CDS. By (7.35) and (7.36), respectively, we have to iterate the test points $x$ up to $(n+1)$ times in order to get an approximation of $\varphi(x)$. If the approximation of the Koopman operator is poor, we may have to stop after $m < n+1$ iterations. To this end, in the next section we will introduce a trust region, where iterations by the Koopman operator are valid as long as they stay in a small neighborhood of the trajectories computed via the CDS.
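The spectral evaluation (7.36) can be illustrated in the same hypothetical scalar setting as before: here the EDMD matrix $K^\top$ is hard-coded to the exact operator for the map $x \mapsto x/2$ with dictionary $\{1, x, x^2\}$ (an assumption for illustration, not the thesis implementation), so $\eta\, \Lambda^m \phi(x)$ reproduces $m$ Koopman steps:

```python
import numpy as np

psi = lambda x: np.array([1.0, x, x * x])
K_T = np.diag([1.0, 0.5, 0.25])        # exact K^T for Phi(x) = 0.5 x
lam, V = np.linalg.eig(K_T.T)          # right eigenvectors of K = left of K^T
Xi = V.T                               # rows of Xi are left eigenvectors of K^T
B = np.array([[0.0, 1.0, 0.0]])        # full-state observable g(x) = B Psi(x)
eta = B @ np.linalg.inv(Xi)            # Koopman modes as columns of eta
Lam = np.diag(lam)

def predict(x, m):
    """m Koopman steps via eta Lambda^m phi(x), with phi(x) = Xi Psi(x)."""
    return (eta @ np.linalg.matrix_power(Lam, m) @ (Xi @ psi(x))).real.item()
```

Note that `predict` is invariant under reordering or rescaling of the eigenvectors, since $\eta\, \Lambda^m\, \Xi = B\, (K^\top)^m$ independently of the chosen eigenbasis.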
7.3.2 The trust region

The aim of this section is to introduce a so-called trust region for the Koopman operator based continuation step discussed in the previous section. The trust region is a concept frequently used in mathematical optimization (e.g. [NW06]), where the objective function is approximated by a model function from which, e.g., a descent step can easily be derived. If the approximated objective function is sufficiently accurate within the trust region, then the region is expanded; otherwise it is contracted (e.g. [CGT00]). Similar trust-region frameworks are used for managing the use of approximation models in optimization [ADLT98] (see also [Fah01, RTV17] for POD based ROMs). Since ROMs are sufficiently accurate only in a restricted zone around the point in decision variable space where they have been constructed, the ROM needs to be updated in a systematic manner over the course of the optimization [AB13]. We will use this idea in our context, where we define the trust region as a region in which the iterates obtained via the approximated Koopman operator are valid, i.e. where they stay sufficiently close to the trajectories computed via the CDS. Following item 2 of Remark 7.3.8, we have to perform $n+1$ iterations with the Koopman operator in order to approximate one step of the CDS. If the approximation of $K$ is poor, this may result in trajectories in observation space that are far away from the 'real' embedded unstable manifold, i.e. the embedded unstable manifold obtained by Algorithm 5.2. The more steps we can perform using the Koopman operator, the more efficient our modified continuation step becomes. Thus, it is crucial to compute the optimal number of Koopman iterations for each test point $x_w$, $w = M+1, \ldots, L$. To this end, we will formulate the optimal number of Koopman steps as the solution of an optimization problem.
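A minimal sketch of this acceptance test: Koopman iterates are accepted as long as they stay within an $\varepsilon$-neighborhood of at least one CDS trajectory. The Manhattan distance is used as an example metric (the choice adopted later in this section); function names and the toy contraction dynamics are illustrative, not from the thesis code:

```python
import numpy as np

def manhattan(x, y):
    """L^1 (Manhattan) distance in R^k."""
    return np.abs(np.asarray(x) - np.asarray(y)).sum()

def max_koopman_steps(koopman_step, x, cds_trajectories, eps, i_max=100):
    """Largest i such that every iterate up to step i stays within an
    eps-neighborhood of at least one discrete CDS trajectory point."""
    y, steps = np.asarray(x, dtype=float), 0
    for _ in range(i_max):
        y = koopman_step(y)
        dmin = min(min(manhattan(y, p) for p in traj) for traj in cds_trajectories)
        if dmin >= eps:
            break
        steps += 1
    return steps

# Toy check: contraction toward 0, one stored CDS trajectory of the
# same dynamics started on the first coordinate axis.
step = lambda z: 0.5 * z
traj = [np.array([0.8 * 0.5 ** j, 0.0]) for j in range(6)]
n_ok = max_koopman_steps(step, np.array([0.8, 0.01]), [traj], eps=0.02)
```

Once an iterate leaves the $\varepsilon$-tube, the loop stops and only the accepted iterates would be used in the continuation step.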
Let us denote by $X_r = \{\varphi_{\omega j}(x_r)\}_{j=0}^{n}$, $r = 1, \ldots, M$, the discrete trajectories generated by the CDS, where $\omega = T/n$. Analogously, we denote by $Y_w = \{\eta\, \Lambda^j \phi(x_w)\}_{j \ge 0}$, $w = M+1, \ldots, L$, the discrete trajectories computed via the Koopman operator. Moreover, the position at a certain time instance $i \in \mathbb{N}$ is denoted by $X_{r,i}$ and $Y_{w,i}$, respectively (cf. item 2 of Remark 7.3.8). Observe that $X_{r,i} = \varphi_{\omega i}(x_r)$ and analogously $Y_{w,i} = \eta\, \Lambda^i \phi(x_w)$. Then the minimal distance between one point $y \in \mathbb{R}^k$ and a discrete trajectory $X_r$ is given by

$$D(y, X_r) = \min\big\{\, d(y, X_{r,i}) \mid i \in \{0, \ldots, n\} \,\big\}, \tag{7.37}$$

where $d(\cdot, \cdot)$ is an arbitrary distance function on $\mathbb{R}^k$. This distance allows us to compute the maximal Koopman iteration for one test point $x_w$ ($w = M+1, \ldots, L$) as follows:

$$\max_{i \in \mathbb{N}}\; i \quad \text{s.t.} \quad \min_{r \in \{1, \ldots, M\}} D(Y_{w,m}, X_r) < \varepsilon \quad \forall\, m \le i, \tag{7.38}$$

i.e. the Koopman iteration stays within an $\varepsilon$-neighborhood of at least one discrete trajectory computed by the CDS. Note that we maximize over $i \in \mathbb{N}$, i.e. we allow more than $n+1$ Koopman iterations as long as they stay within an $\varepsilon$-neighborhood of the discrete trajectories $X_r$, $r = 1, \ldots, M$. In Figure 7.10 we show a schematic illustration of our trust region approach, whereas in Figure 7.11 we show one step of our numerical implementation for the modified Wright DDE

$$\dot{u}(t) = -\alpha\, u(t-1)\, \big[\, 1 - u^2(t) \,\big]. \tag{7.39}$$

On the left side of Figure 7.11 we show a two-dimensional projection of the discrete trajectories computed by the CDS (green) as well as the trajectories computed by the local Koopman operator (black), and the covering we obtain by the continuation algorithm after one step. On the right side we show a zoom into four boxes, where the $\varepsilon$-neighborhood of the trajectory points computed by the CDS is indicated.

Figure 7.10: Schematic illustration of the trust region.
The gray dots and circles represent the discrete trajectories for two different test points computed by the CDS as well as their respective $\varepsilon$-neighborhoods. The blue and red dots represent the Koopman iterations for one test point, where the color red indicates that the Koopman iteration is infeasible. In this specific case, we stop after four Koopman iterations.

Figure 7.11: One step of the modified continuation algorithm introduced in Section 7.3 for the modified Wright DDE (7.39). The green dots are discrete trajectory points computed by the CDS which are used to approximate the Koopman operator. The black dots are the Koopman iterations starting in the neighborhood of the unstable fixed point. A zoom into four boxes shows that they stay within the $\varepsilon$-neighborhood of the discrete trajectory points $X_r$ for $r = 1, \ldots, M$.

Remark 7.3.9.

1. Numerically, we solve the optimization problem (7.38) as follows: we start with $i = 1$ and check the underlying condition

$$\min_{r = 1, \ldots, M} D(Y_{w,1}, X_r) < \varepsilon$$

via a k-nearest neighbor algorithm [AMN+98] (see also [Ber06, ZBMM06] for different application areas of nearest neighbor searching). If the distance between the points $Y_{w,1}$ and their corresponding nearest neighbor in $X_r$ is smaller than $\varepsilon > 0$, we accept the Koopman iteration. Otherwise, we stop the Koopman iteration for those points that become infeasible. Then we compute the next Koopman iterations (i.e. we compute $Y_{w,2}$ for all feasible $w$) and restart the procedure as described above. This yields the optimal number of Koopman iterations in the sense of our trust region approach.

2. The implementation of the k-nearest neighbor algorithm in MATLAB allows us to use a specific metric for the measurement of the distance, i.e. our metric
$d(\cdot, \cdot)$ in (7.37). Following an observation made in [AHK01], where the authors have shown that the meaningfulness of the $L^p$ norm worsens faster with increasing dimensionality for higher values of $p$, we choose the Manhattan distance metric (or $L^1$ metric) defined by

$$d(x, y) = \sum_{i=1}^{k} |x_i - y_i|, \qquad x, y \in \mathbb{R}^k.$$

In summary, the numerical realization of the continuation step (7.31) of Algorithm 5.2 is replaced by the following procedure:

Algorithm 7.1 Koopman operator based continuation method

For $j = 0, 1, 2, \ldots$

1. Discretize each box $B \in \mathcal{C}^{(\ell)}_j$ by a finite set of test points $x_r \in B$, $r = 1, \ldots, L$. Choose $M \ll L$ test points and create snapshots of data pairs $X$ and $Y$ according to item 2 of Remark 7.3.8. In addition, store the iterations computed by the CDS in $X_r$ for $r = 1, \ldots, M$.

2. Approximate the Koopman operator $K \in \mathbb{R}^{N \times N}$ by EDMD using a dictionary of observables $\mathcal{D} = \{\psi_1, \psi_2, \ldots, \psi_N\}$ and the snapshot matrices $X$ and $Y$, respectively.

3. Choose $\varepsilon > 0$ and replace the continuation step (7.31) by

$$\mathcal{C}^{(\ell)}_{j+1} = \big\{ B \in \mathcal{P}_{s+\ell} : \text{for } B' \in \mathcal{C}^{(\ell)}_j \; \exists\, x_w \in B' \text{ such that } \eta\, \Lambda^i \phi(x_w) \in B \big\}, \tag{7.40}$$

where $i \in \mathbb{N}$ is chosen according to (7.38), for $w = M+1, \ldots, L$.

7.3.3 Examples

In this section, we present results of computations carried out for one DDE as well as one PDE. For the DDE, a comparison with the continuation method introduced in Section 5.3 is presented as well.

The modified Wright equation

As a first example let us consider the modified Wright equation

$$\dot{u}(t) = -\alpha\, u(t-1)\, \big[\, 1 - u^2(t) \,\big]. \tag{7.39}$$

In [HL93] it has been shown that the stationary solution $u_0(t) \equiv 0$ of (7.39) undergoes a supercritical Hopf bifurcation at $\alpha = \pi/2$. Thus, (7.39) possesses a stable periodic solution for $\alpha > \pi/2$. We choose $\alpha = 2$, set $k = 3$ and $Q = [-2, 2]^3$. Further parameter values are shown in Table 7.1. The optimization problem (7.38)

Table 7.1: Parameter values of the Koopman operator based continuation method for the modified Wright equation.
Parameter                                     Symbol    Value
Level of the partition                        $s$       24
Integration time                              $T$       4
Time step                                     $\omega$  4/75
# trajectories computed by the CDS per box    $M$       9, 27, 65
# points iterated via (7.36)                  $L - M$   2000

is solved for different values of $\varepsilon = a \cdot r$, where $r = \operatorname{diam}(C)/2 = 0.0078$ and $C \in \mathcal{P}_s$ is the initial box containing the embedded unstable steady state of (7.39), i.e. $R(u_0)$. Figure 7.12 shows one comparison of the embedded unstable manifold obtained by Algorithm 5.2 (where each box is discretized by 2000 test points) and our new approach which utilizes Koopman operator theory.

Figure 7.12: Comparison of the three-dimensional projection of the embedded unstable manifold of (7.39). Gray boxes correspond to the box covering obtained by Algorithm 5.2 and the transparent red boxes correspond to the box covering obtained by the Koopman operator approach ($M = 27$ and $\varepsilon = r$).

Since we have an efficient implementation of the evaluation of the CDS for this particular DDE, the overall computational time of the Koopman operator approach is only smaller for low values of $M$, i.e. $M = 9$ (cf. Figure 7.13). However, if we choose $M$ too low, then the Hausdorff distance $h(Q_{\mathrm{CDS}}, Q_K)$ between the covering $Q_{\mathrm{CDS}}$ obtained by Algorithm 5.2 and the covering $Q_K$ obtained by the Koopman operator based approach is larger than the Hausdorff distance obtained for the higher values $M = 27, 65$ (cf. Figure 7.13). We obtain the best result for $M = 27$ and $\varepsilon = r$, where the computational time is about 46% longer compared with the computational time of Algorithm 5.2. The corresponding Hausdorff distance $h(Q_{\mathrm{CDS}}, Q_K) = 0.01562 = 2r$ means that the box covering obtained by the Koopman operator based continuation method is at most one box thicker than the covering obtained by Algorithm 5.2.
We expect that the Koopman operator based approach will be considerably faster for dynamical systems where the evaluation of the underlying infinite dimensional dynamical system is very expensive. This expectation is justified by the fact that the Koopman iteration, which is very time consuming compared to the evaluation of the CDS for this particular DDE, is independent of the underlying dynamical system and thus should have less influence on the overall computational time.

Figure 7.13: Run-time comparison of Algorithm 5.2 and the Koopman operator based continuation algorithm for the modified Wright DDE (7.39) for different parameter values M and ε. The last column shows the number of boxes obtained by both algorithms. We obtain the best result for M = 27 and ε = r. The Hausdorff distance is shown in Table 7.2.

A chemotaxis model

The second example is a one-dimensional Keller-Segel type model for chemotaxis incorporating a logistic cell growth term [PH11], described by

    u_t = ∇(D∇u − χu∇v) + ru(1 − u),
    v_t = Δv + u − v.    (7.41)

Chemotaxis is a process that describes the movement of cells (or organisms) in response to the presence of a chemical signal substance inhomogeneously distributed in space. This can result in a variety of spatial patterns of different complexity. In [TW07, Win10] the existence of unique global weak solutions is shown, where r is assumed to be sufficiently large. However, in [TW07] it is also assumed that the spatial dimension does not exceed two. We denote by u(x, t) and v(x, t) the cell density and the chemoattractant concentration, respectively, at time t and location x ∈ [0, L]. Furthermore, we refer to D as the cell diffusion coefficient, χ as the chemotactic coefficient and r as the growth rate.
When χ > 0, we speak of so-called chemoattraction, where cells exhibit a tendency to move toward higher signal concentrations. Conversely, for χ < 0 we speak of chemorepulsion, where cells prefer to move away from the signal [Win10]. Related classes of (7.41) can be found in [HP09].

Table 7.2: Hausdorff distance h(Q_CDS, Q_K) between the covering Q_CDS obtained by Algorithm 5.2 and the covering Q_K obtained by the Koopman operator based continuation method for r = 0.0078 for the modified Wright DDE (7.39).

    M    ε      Hausdorff distance
    9    3r     0.03494
    9    2r     0.03827
    9    r      0.04419
    9    r/2    0.04688
    27   3r     0.02706
    27   2r     0.0221
    27   r      0.01562
    27   r/2    0.0221
    65   3r     0.02706
    65   2r     0.0221
    65   r      0.01562
    65   r/2    0.0221

Following [PH11] we assume zero-flux (Neumann) boundary conditions, i.e.

    ∂u/∂x = ∂v/∂x = 0    for x = 0 and x = L.

We choose the parameter regime D = r = 1, χ = 5.6 and L = 10. This yields spatio-temporal periodicity (cf. Figure 7.14) for initial functions near the unstable steady state (u(x, 0), v(x, 0)) = (1, 1).

Figure 7.14: Direct numerical simulation of (7.41) for D = r = 1, χ = 5.6 and L = 10 for an initial function near the unstable steady state (u(x, 0), v(x, 0)) = (1, 1). Panels: (a) u(x, t), (b) v(x, t).

We use this coupled data in order to compute the POD basis according to Section 6.2 for u and v, respectively. Moreover, following the idea introduced in [NAM+03], where a Karhunen-Loève decomposition for the viscous incompressible flow around a circular cylinder incorporating the so-called shift-mode was presented, we also incorporate an additional basis function, namely the constant function in u and v. Then, we use a modified Gram-Schmidt algorithm [Bjö94] in order to compute an orthonormal POD basis. The corresponding POD coefficients are then our observations for this particular PDE.
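The construction just described, an SVD of the snapshot matrix augmented by the constant function and re-orthonormalized by modified Gram-Schmidt [Bjö94], can be sketched as follows; the synthetic snapshot data is an assumption standing in for the chemotaxis simulation of Figure 7.14.

```python
import numpy as np

def pod_basis_with_constant(S, n_modes):
    """POD basis for a snapshot matrix S (space x time), augmented by the
    constant function and re-orthonormalized by modified Gram-Schmidt."""
    U, _, _ = np.linalg.svd(S, full_matrices=False)   # POD modes = left singular vectors
    const = np.ones((S.shape[0], 1)) / np.sqrt(S.shape[0])
    V = np.hstack([const, U[:, :n_modes]])
    Q = np.zeros_like(V)
    for j in range(V.shape[1]):                       # modified Gram-Schmidt
        v = V[:, j].copy()
        for i in range(j):
            v -= (Q[:, i] @ v) * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)
    return Q

# toy traveling-wave snapshots standing in for the simulation data (assumption)
x = np.linspace(0, 10, 200)
t = np.linspace(0, 20, 100)
S = np.sin(x[:, None] - t[None, :]) + 0.5 * np.cos(2 * x[:, None] + t[None, :])

Q = pod_basis_with_constant(S, 4)   # constant mode + first 4 POD modes
coeffs = Q.T @ S                    # POD coefficients: the observations
```

Projecting a state onto the columns of `Q` yields the POD coefficients that serve as observations in the observation space.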
In Figure 7.15, we show the first four POD modes for u and v, respectively.

Figure 7.15: The first four (decoupled) POD modes for u and v obtained by the singular value decomposition of the snapshot matrix illustrated in Figure 7.14. Panels: (a) u(x, t), (b) v(x, t).

Next, we show results obtained by the Koopman operator based continuation method for two different embedding dimensions and compare them to box coverings obtained via long-term simulations. First, we choose the embedding dimension k = 4 and the initial box Q = [13, 21] × [−4, 4] × [−8, 8] × [−8, 8]. In Table 7.3 we show further parameter values used in our Koopman operator based continuation method.

Table 7.3: Parameter values for the Koopman operator based continuation method for the chemotaxis model (k = 4).

    Parameter                                    Symbol   Value
    Level of the partition P_s                   s        30
    Integration time                             T        8
    Time step                                    ω        0.1
    # trajectories computed by the CDS per box   M        81
    # points iterated via (7.36)                 L − M    5000
    Spatial discretization for u and v           N        200

The optimization problem (7.38) is solved for ε = 2 · diam(C), where C ∈ P_s is the initial box containing the embedded unstable steady state of (7.41). In Figure 7.16 we show a three-dimensional projection of the embedded unstable manifold onto the second, third and fourth POD modes obtained by the Koopman operator based continuation method as well as a box covering obtained by long-term simulations.

Figure 7.16: Different views of a three-dimensional projection of the embedded unstable manifold of (7.41) for k = 4. Gray boxes show the box covering obtained by long-term simulations starting from a small neighborhood of the initial box C. The transparent orange boxes show the box covering obtained by the Koopman approach.

For the long-term simulations, we have chosen the integration time T = 100. Furthermore, we start the long-term simulations in a small neighborhood of the box C (16 boxes in total, each discretized by 3000 test points).
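Coverings such as these can also be compared quantitatively via the Hausdorff distance h(Q_CDS, Q_K) used in Table 7.2. A minimal sketch on finite point sets (e.g. box centers), using the Euclidean norm for illustration and toy circle data as an assumption:

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets A and B:
    max( sup_a dist(a, B), sup_b dist(b, A) )."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# two coverings of the unit circle, one pushed radially outward by 0.1
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
QA = np.c_[np.cos(t), np.sin(t)]
QB = 1.1 * QA
d = hausdorff(QA, QB)   # ≈ 0.1
```

A distance on the order of the box diameter, as in Table 7.2, indicates that one covering is at most about one box thicker than the other.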
Observe that the embedded unstable manifold obtained by the Koopman operator based approach is thicker than the covering obtained by long-term simulations. This is due to the fact that in each continuation step, where initial functions u = E(x) have to be generated, the approximation error ‖Φ_m(E(x̂)) − u‖, where E(x̂) are initial functions generated in the previous continuation step, is too large (cf. Section 6.2.2). Since the box-counting dimension of the covering obtained by Algorithm 7.1 is ≈ 2.662, we expect better results if we increase the embedding dimension to k = 6.

For the embedding dimension k = 6 we choose Q = [13, 21] × [−4, 4] × [−8, 8]⁴. Further parameter values are shown in Table 7.4.

Table 7.4: Parameter values for the Koopman operator based continuation method for the chemotaxis model (k = 6).

    Parameter                                    Symbol   Value
    Level of the partition P_s                   s        40
    Integration time                             T        8
    Time step                                    ω        0.1
    # trajectories computed by the CDS per box   M        129
    # points iterated via (7.36)                 L − M    5000
    Spatial discretization for u and v           N        200

Again, we solve the optimization problem (7.38) for ε = 2 · diam(C), where C ∈ P_s is the initial box containing the embedded unstable steady state of (7.41). In Figure 7.17 we show a three-dimensional projection of the embedded unstable manifold onto the second, third and fourth POD modes obtained by the Koopman operator based continuation method as well as a box covering obtained by long-term simulations. For the covering obtained by long-term simulations we choose the integration time T = 100. Here, we start the long-term simulations in a small neighborhood of the box C (64 boxes in total, each discretized by 2000 test points).

Figure 7.17: Three-dimensional projection of the embedded unstable manifold of (7.41) for k = 6. Gray boxes show the box covering obtained by long-term simulations starting from a small neighborhood of the initial box C.
The transparent orange boxes show the box covering obtained by the Koopman approach.

Although we have chosen 2000 test points per box, the box covering obtained by direct simulation has a lot of holes. This is the main drawback of combining long-term simulations with our set-oriented methods: the number of test points has to be sufficiently large. Observe that the boxes obtained by long-term simulations are covered by the Koopman operator based continuation method, which yields a good covering of the embedded unstable manifold. Furthermore, in Figure 7.18 we show a comparison of three-dimensional projections of the coverings obtained by Algorithm 7.1 for k = 4 and k = 6, respectively.

Figure 7.18: Comparison of a three-dimensional projection of the embedded unstable manifold of (7.41) for k = 4 and k = 6. Gray boxes show the box covering obtained by Algorithm 7.1 for k = 4, whereas the transparent orange boxes show the box covering for k = 6.

By using more POD modes for the initial functions near the unstable steady state, we gain more information about the embedded unstable manifold. Thus, the box collection obtained for k = 4 is just a subset of the box collection obtained for k = 6. This shows that it is very important to choose the embedding dimension sufficiently large.

8 Conclusion and outlook

One central goal in the analysis of dynamical systems is the characterization of the long term behavior of the system state. For finite dimensional dynamical systems there exist various algorithms that allow one to approximate, e.g., unstable manifolds or the global attractor, that is, an invariant set that attracts all trajectories of the dynamical system. Among other methods, so-called set-oriented numerical techniques have been developed over the last two decades. The basic idea is to cover the objects of interest by outer approximations which are created via subdivision techniques.
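One step of such a subdivision scheme, bisect every box and then keep only the boxes hit by images of test points, can be sketched in a few lines; the (center, radius) box representation and the linear toy map standing in for the dynamics are illustrative assumptions, not the GAIO implementation.

```python
import numpy as np

def subdivision_step(boxes, f, n_test=100):
    """One subdivision + selection step (a sketch): bisect every box along
    its widest axis, then keep a box if it contains the image f(x) of some
    test point x sampled from the current collection."""
    rng = np.random.default_rng(0)
    new = []
    for c, r in boxes:                       # box = (center, radius) in R^k
        j = np.argmax(r)
        r2 = r.copy(); r2[j] /= 2            # bisect the widest axis
        for s in (-1, 1):
            c2 = c.copy(); c2[j] += s * r2[j]
            new.append((c2, r2))
    pts = np.concatenate([c + rng.uniform(-1, 1, (n_test, len(c))) * r
                          for c, r in new])  # test points in every box
    imgs = f(pts)
    kept = [i for i, (c, r) in enumerate(new)
            if np.any(np.all(np.abs(imgs - c) <= r, axis=1))]
    return [new[i] for i in kept]

# toy contraction with global attractor {0} (assumption, standing in for a CDS)
f = lambda X: 0.5 * X
boxes = [(np.zeros(2), np.ones(2))]          # initial box Q = [-1, 1]^2
for _ in range(8):
    boxes = subdivision_step(boxes, f)       # covering shrinks toward {0}
```

After eight steps every remaining box has radius 1/16 per axis and the covering clusters around the attractor at the origin.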
However, if the system's states are infinite dimensional, and thus discretized in a high-dimensional space, those techniques are no longer feasible. For infinite dimensional dynamical systems possessing a finite dimensional inertial manifold it suffices to study the dynamics on the inertial manifold, which can be described by an appropriate finite dimensional dynamical system, e.g. obtained via a Galerkin expansion. Although it is possible to fix the dimension of the reduced order model (ROM), an a priori identification of determining modes for the inertial manifold is needed.

In this thesis, we have presented a new approach in which we combine set-oriented techniques with infinite dimensional embedding results. By using a sufficiently large number of observations of the infinite dimensional states, which depends on the upper box-counting dimension and the thickness exponent of the invariant set A, we are able to compute a one-to-one image of A in a finite dimensional observation space. To this end, we have constructed a finite dimensional dynamical system, the core dynamical system, which allows us to use set-oriented techniques for the approximation of embedded invariant sets. The resulting embedded invariant sets are then one-to-one images of A. We note that an a priori identification of determining modes for the ROM is no longer needed.

8.1 The core dynamical system

Chapter 3 builds the theoretical foundation for the analysis of the long term behavior of infinite dimensional dynamical systems that possess a finite dimensional invariant set. The main theorem of Section 3.3.3 states that it is possible to compute a one-to-one image of a compact finite dimensional invariant set A of an infinite dimensional dynamical system. More precisely, the set A is mapped into a finite dimensional observation space by an appropriate observation map R.
R(A) is then a one-to-one image of A as long as the number of observations is sufficiently large. Using the observation map R as well as its inverse, in Chapter 4 we have constructed a finite dimensional dynamical system, the core dynamical system (CDS) ϕ, on the observation space by means of Dugundji's extension theorem. By construction, the dynamics of ϕ on R(A) is topologically conjugate to that of Φ on A, where we denote by Φ the time-T map of the underlying infinite dimensional dynamical system.

8.2 Set-oriented techniques for embedded invariant sets

The CDS allows us to approximate the invariant sets in a finite dimensional space. However, due to Dugundji's extension theorem, the CDS is merely continuous. Since the classical convergence results for the set-oriented techniques hold only for finite dimensional discrete dynamical systems that are homeomorphisms, we had to extend the theory to the continuous case. This is addressed in Chapter 5. We then showed how to approximate embedded attractors and embedded unstable manifolds by appropriate extensions of the set-oriented methods, i.e. the subdivision and continuation methods.

The general numerical approach presented in this thesis is applicable to infinite dimensional dynamical systems described by a Lipschitz continuous operator on a Banach space. However, in this thesis we have restricted our attention to delay differential equations with (small) state dependent time delay and to dissipative dynamical systems modeled by partial differential equations. In Chapter 6 we have presented numerical realizations of the CDS for both classes of differential equations and demonstrated the efficiency of our numerical methods for several infinite dimensional dynamical systems that possess a low-dimensional invariant set.
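To illustrate the structure of such a realization, the following sketch composes a CDS ϕ = R ∘ Φ ∘ E for the modified Wright equation (7.39): R samples the history function at k nodes, E lifts k observations back to a history by linear interpolation, and Φ is the time-T map computed by an explicit Euler method of steps. All concrete choices here (interpolation, Euler, node placement, k) are simplifying assumptions for illustration, not the constructions of Chapters 4 and 6.

```python
import numpy as np

alpha, T, k = 2.0, 1.0, 8          # k: number of observation nodes (arbitrary here)
h = 1e-3                           # Euler step; the delay is tau = 1
n_hist = int(round(1.0 / h))       # grid points per delay interval
s_obs = np.linspace(-1.0, 0.0, k)  # observation nodes in [-1, 0]
s_fine = np.linspace(-1.0, 0.0, n_hist + 1)

def E(x):
    """Lift k observations to a discretized history on [-1, 0] (stand-in)."""
    return np.interp(s_fine, s_obs, x)

def Phi(u):
    """Time-T map of (7.39) via explicit Euler with constant delay 1;
    u discretizes the history function on [-1, 0]."""
    u = list(u)
    for _ in range(int(round(T / h))):
        u.append(u[-1] + h * (-alpha * u[-1 - n_hist] * (1 - u[-1] ** 2)))
    return np.array(u[-(n_hist + 1):])   # new history on [-1, 0]

def R(u):
    """Observation map: point evaluations of the history at k nodes."""
    return np.interp(s_obs, s_fine, u)

def cds(x):                        # the core dynamical system phi = R ∘ Phi ∘ E
    return R(Phi(E(x)))
```

Since u_0 ≡ 0 is a steady state of (7.39), its observation x = 0 is a fixed point of this sketch: `cds(np.zeros(k))` returns the zero vector in ℝ^k.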
For delay differential equations, we have also approximated embedded invariant measures and discussed how these measures can be interpreted in order to give a statistical description of the infinite dimensional states.

8.3 Improving the numerical efficiency

Due to our numerical realization of both the selection and the continuation step, we have to discretize each box in the box collection by a finite number of test points. Each point then has to be evaluated by the CDS. Due to the construction of the CDS, for each point we have to solve the underlying infinite dimensional dynamical system. Hence, the overall computational time not only depends on the complexity of the underlying invariant set (i.e. the upper box-counting dimension and the thickness exponent of A), but also on the efficient evaluation of the selection and continuation step, respectively. Therefore, in Chapter 7 we have presented modifications of the selection and the continuation step in order to improve the numerical efficiency of the set-oriented techniques discussed throughout this thesis.

For the selection step, we have used information obtained during the subdivision procedure in order to decrease the number of CDS evaluations by a factor of approximately two. Furthermore, we have presented a sequential procedure that adaptively increases the embedding dimension if it has been chosen too low initially. Here, instead of restarting the subdivision method from scratch, the existing results are used as a starting point for the subsequent computation. Finally, we have developed a Koopman operator based continuation method. Within each box that has been marked during the continuation procedure we use a small number of test points in order to compute discrete trajectories via the CDS. Then, we compute local Koopman operators via extended dynamic mode decomposition. The approximation of the Koopman operator is given by a finite dimensional matrix.
Hence, this allows us to iterate a large number of test points by simple matrix-vector multiplications. However, those iterations are only approximations of the evaluation by the CDS. Thus, we have also introduced a trust region: the iterations by the local Koopman operator are only considered valid if they stay within an ε-neighborhood of the discrete trajectories computed by the CDS.

8.4 Future work

The results presented in this thesis contribute to the numerical analysis of the long term behavior of infinite dimensional dynamical systems. We have combined set-oriented techniques with infinite dimensional embedding results, which allows us to approximate finite dimensional invariant sets of infinite dimensional dynamical systems. However, our current approach only gives us topological information about the invariant sets of interest, since the observation map due to Hunt & Kaloshin or Robinson does not provide any information about the relation between the distance of two points in observation space and the distance of the corresponding functions in state space. For finite dimensional dynamical systems a geometry preserving delay-coordinate map has recently been introduced which yields an embedded set that also provides geometrical information [EYWR18]. An extension to the infinite dimensional context, if possible, could yield more information about the embedded invariant sets of infinite dimensional dynamical systems.

Due to the realization of the set-oriented techniques presented throughout this thesis, our methods are restricted to invariant sets of dimension smaller than four. Though we have presented some modifications of the selection and continuation step, respectively, which improve the numerical efficiency of our set-oriented methods, more work has to be done in order to tackle problems that possess invariant sets of dimension greater than or equal to four.
In particular, since the computation of the selection step is still expensive, we do not recommend the subdivision method presented in Section 7.1 for the computation of attractors of PDEs. In Section 7.3 we have shown how to modify the continuation step in order to decrease the number of function evaluations of the CDS significantly. Here, we have used an approximation of the Koopman operator, which is a linear operator, i.e. a matrix in the numerical realization, to iterate a large number of test points very efficiently. Hence, a similar approach could be used for the selection step of the subdivision algorithm, where we approximate the dynamics in observation space by local Koopman operators. The idea comes from the field of mathematical optimization, where the objective function is approximated by reduced order models which are only valid in a small neighborhood of the corresponding point in decision space. Thus, by defining local Koopman operators, we could switch from one Koopman operator to another whenever we change the trust region. However, it would be desirable to use concrete error estimates for the trust region in which the approximated Koopman operator is valid. Here, more work remains to be done, since the error estimates introduced in [KM18] are only of a theoretical nature. Furthermore, it is not clear where a local Koopman operator in observation space should be constructed. One idea is to use clustering techniques from the computation of Lagrangian coherent structures (e.g. [PGS17]) in order to identify regions where the dynamics in observation space behaves similarly. In these regions local Koopman operators can then be approximated (see Figure 8.1 (a)).

Figure 8.1: (a) Identified regions in the initial box Q, where the dynamics of the Kuramoto-Sivashinsky equation behaves similarly. (b) Evaluation of each test point via the CDS. Test points in each region stay together.

Bibliography

[AB13] A. Agarwal and L. T. Biegler.
A trust-region framework for constrained optimization using reduced order modeling. Optimization and Engineering, 14(1):3–35, 2013.

[ACST85] A. Arneodo, P. H. Coullet, E. A. Spiegel, and C. Tresser. Asymptotic chaos. Physica D: Nonlinear Phenomena, 14:327–347, 1985.

[ADLT98] N. M. Alexandrov, J. E. Dennis, R. M. Lewis, and V. Torczon. A trust-region framework for managing the use of approximation models in optimization. Structural Optimization, 15(1):16–23, 1998.

[AHD07] O. Arino, M. L. Hbid, and E. A. Dads. Delay Differential Equations and Applications: Proceedings of the NATO Advanced Study Institute held in Marrakech, Morocco, 9–21 September 2002, volume 205. Springer Science & Business Media, 2007.

[AHK01] C. C. Aggarwal, A. Hinneburg, and D. A. Keim. On the surprising behavior of distance metrics in high dimensional spaces. In Database Theory – ICDT 2001, volume 1, pages 420–434. Springer, 2001.

[AM17] H. Arbabi and I. Mezić. Ergodic theory, dynamic mode decomposition, and computation of spectral properties of the Koopman operator. SIAM Journal on Applied Dynamical Systems, 16(4):2096–2126, 2017.

[AMN+98] S. Arya, D. M. Mount, N. S. Netanyahu, R. Silverman, and A. Y. Wu. An optimal algorithm for approximate nearest neighbor searching in fixed dimensions. Journal of the ACM, 45(6):891–923, 1998.

[ASG01] A. C. Antoulas, D. C. Sorensen, and S. Gugercin. A survey of model reduction methods for large-scale systems. Contemporary Mathematics, 280:193–220, 2001.

[BBPK16] S. L. Brunton, B. W. Brunton, J. L. Proctor, and J. N. Kutz. Koopman invariant subspaces and finite linear representations of nonlinear dynamical systems for control. PLoS ONE, 11(2):e0150171, 2016.

[BE53] H. Bateman and A. Erdélyi. Higher transcendental functions, volume 2. McGraw-Hill Book Company, 1953.

[Ber06] P. Berkhin. A survey of clustering data mining techniques. Grouping Multidimensional Data, 25:25–71, 2006.

[BEW04] P. Brunovský, A. Erdélyi, and H. O. Walther.
On a model of a currency exchange rate – local stability and periodic solutions. Journal of Dynamics and Differential Equations, 16(2):393–432, 2004.

[BHL93] G. Berkooz, P. Holmes, and J. L. Lumley. The proper orthogonal decomposition in the analysis of turbulent flows. Annual Review of Fluid Mechanics, 25(1):539–575, 1993.

[Bir31] G. D. Birkhoff. Proof of the ergodic theorem. Proceedings of the National Academy of Sciences, 17(12):656–660, 1931.

[Bjö94] Å. Björck. Numerics of Gram-Schmidt orthogonalization. Linear Algebra and Its Applications, 197:297–316, 1994.

[BK86] D. S. Broomhead and G. P. King. Extracting qualitative dynamics from experimental data. Physica D: Nonlinear Phenomena, 20:217–236, 1986.

[BMM12] M. Budišić, R. Mohr, and I. Mezić. Applied Koopmanism. Chaos: An Interdisciplinary Journal of Nonlinear Science, 22(4):047510, 2012.

[BMS05] P. Benner, V. Mehrmann, and D. C. Sorensen. Dimension reduction of large-scale systems, volume 35. Springer, 2005.

[Buh03] M. D. Buhmann. Radial basis functions: theory and implementations, volume 12. Cambridge University Press, 2003.

[But87] J. C. Butcher. The numerical analysis of ordinary differential equations: Runge-Kutta and general linear methods. Wiley-Interscience, 1987.

[BZ13] A. Bellen and M. Zennaro. Numerical methods for delay differential equations. Oxford University Press, 2013.

[Car12] J. Carr. Applications of centre manifold theory, volume 35. Springer Science & Business Media, 2012.

[Cas89] M. Casdagli. Nonlinear prediction of chaotic time series. Physica D: Nonlinear Phenomena, 35(3):335–356, 1989.

[CFNT88] P. Constantin, C. Foias, B. Nicolaenko, and R. Temam. Integral manifolds and inertial manifolds for dissipative partial differential equations. Springer-Verlag, 1988.

[CFT85] P. Constantin, C. Foias, and R. Temam. Attractors representing turbulent flows. Memoirs of the American Mathematical Society, 1985.

[CGT00] A. R. Conn, N. I. M. Gould, and P. L. Toint. Trust region methods.
Society for Industrial and Applied Mathematics, 2000.

[CH12] S. N. Chow and J. K. Hale. Methods of bifurcation theory, volume 251. Springer Science & Business Media, 2012.

[Cha00] A. Chatterjee. An introduction to the proper orthogonal decomposition. Current Science, 78(7):808–817, 2000.

[Chi03] C. Chicone. Inertial and slow manifolds for delay equations with small delays. Journal of Differential Equations, 190(2):364–406, 2003.

[CL88] S. N. Chow and K. Lu. Invariant manifolds for flows in Banach spaces. Journal of Differential Equations, 74(2):285–317, 1988.

[CLR13] A. Carvalho, J. A. Langa, and J. C. Robinson. Attractors for infinite-dimensional non-autonomous dynamical systems, volume 182 of Applied Mathematical Sciences. Springer-Verlag, 2013.

[CMRV05] T. Caraballo, P. Marín-Rubio, and J. Valero. Autonomous and nonautonomous attractors for differential equations with delays. Journal of Differential Equations, 208(1):9–41, 2005.

[CR00] R. V. Culshaw and S. Ruan. A delay-differential equation model of HIV infection of CD4+ T-cells. Mathematical Biosciences, 165(1):27–39, 2000.

[DDJS99] P. Deuflhard, M. Dellnitz, O. Junge, and C. Schütte. Computation of essential molecular dynamics by subdivision techniques. In Computational Molecular Dynamics: Challenges, Methods, Ideas, pages 98–115. Springer-Verlag, 1999.

[DFJ01] M. Dellnitz, G. Froyland, and O. Junge. The algorithms behind GAIO — set oriented numerical methods for dynamical systems. In Ergodic Theory, Analysis, and Efficient Simulation of Dynamical Systems, pages 145–174. Springer-Verlag, 2001.

[DGHN88] C. R. Doering, J. D. Gibbon, D. D. Holm, and B. Nicolaenko. Low-dimensional behaviour in the complex Ginzburg-Landau equation. Nonlinearity, 1(2):279–309, 1988.

[DGM+05] M. Dellnitz, K. A. Grubits, J. E. Marsden, K. Padberg, and B. Thiere. Set oriented computation of transport rates in 3-degree of freedom systems: the Rydberg atom in crossed fields.
Regular and Chaotic Dynamics, 10(2):173–192, 2005.

[DH96] M. Dellnitz and A. Hohmann. The computation of unstable manifolds using subdivision and continuation. In Nonlinear Dynamical Systems and Chaos, pages 449–459. Springer, 1996.

[DH97] M. Dellnitz and A. Hohmann. A subdivision algorithm for the computation of unstable manifolds and global attractors. Numerische Mathematik, 75:293–317, 1997.

[DHJR97] M. Dellnitz, A. Hohmann, O. Junge, and M. Rumpf. Exploring invariant sets and invariant measures. Chaos: An Interdisciplinary Journal of Nonlinear Science, 7(2):221–228, 1997.

[DHZ16] M. Dellnitz, M. Hessel-von Molo, and A. Ziessler. On the computation of attractors for delay differential equations. Journal of Computational Dynamics, 3(1):93–112, 2016.

[DJ99] M. Dellnitz and O. Junge. On the approximation of complicated dynamical behavior. SIAM Journal on Numerical Analysis, 36(2):491–515, 1999.

[DJ06] M. Dellnitz and O. Junge. Set oriented numerical methods in space mission design. Modern Astrodynamics, 1:127–153, 2006.

[DJK+05] M. Dellnitz, O. Junge, W. S. Koon, F. Lekien, M. W. Lo, J. E. Marsden, K. Padberg, R. Preis, S. D. Ross, and B. Thiere. Transport in dynamical astronomy and multibody problems. International Journal of Bifurcation and Chaos, 15(03):699–727, 2005.

[DJL+05] M. Dellnitz, O. Junge, M. Lo, J. E. Marsden, K. Padberg, R. Preis, S. Ross, and B. Thiere. Transport of Mars-crossing asteroids from the quasi-Hilda region. Physical Review Letters, 94(23):231102, 2005.

[DJM04] S. Day, O. Junge, and K. Mischaikow. A rigorous numerical method for the global analysis of infinite-dimensional discrete dynamical systems. SIAM Journal on Applied Dynamical Systems, 3(2):117–160, 2004.

[DKZ17] M. Dellnitz, S. Klus, and A. Ziessler. A set-oriented numerical approach for dynamical systems with parameter uncertainty. SIAM Journal on Applied Dynamical Systems, 16(1):120–138, 2017.

[Doo60] J. L. Doob. Stochastic processes.
John Wiley, 1960.

[Dri68] R. D. Driver. On Ryabov's asymptotic characterization of the solutions of quasi-linear differential equations with small delays. SIAM Review, 10(3):329–341, 1968.

[DS88] N. Dunford and J. T. Schwartz. Linear Operators. Part I: General Theory. Wiley-Interscience, 1988.

[Dug51] J. Dugundji. An extension of Tietze's theorem. Pacific Journal of Mathematics, 1(3):353–367, 1951.

[ER85] J. P. Eckmann and D. Ruelle. Ergodic theory of chaos and strange attractors. Reviews of Modern Physics, 57(3):617–656, 1985.

[EYWR18] A. Eftekhari, H. L. Yap, M. B. Wakin, and C. J. Rozell. Stabilizing embedology: geometry-preserving delay-coordinate maps. Physical Review E, 97:022222, 2018.

[Fah01] M. Fahl. Trust-region methods for flow control based on reduced order modelling. PhD thesis, University of Trier, 2001.

[Fal13] K. Falconer. Fractal geometry: mathematical foundations and applications. John Wiley & Sons, 2013.

[FD03] G. Froyland and M. Dellnitz. Detecting and locating near-optimal almost invariant sets and cycles. SIAM Journal on Scientific Computing, 24(6):1839–1863, 2003.

[FHR+12] G. Froyland, C. Horenkamp, V. Rossi, N. Santitissadeekorn, and A. Sen Gupta. Three-dimensional characterization and tracking of an Agulhas ring. Ocean Modelling, 52-53:69–75, 2012.

[FJK+88] C. Foias, M. S. Jolly, I. G. Kevrekidis, G. R. Sell, and E. S. Titi. On the computation of inertial manifolds. Physics Letters A, 131(7):433–436, 1988.

[FK17] G. Froyland and P. Koltai. Estimating long-term behavior of periodically driven flows without trajectory integration. Nonlinearity, 30(5):1948–1986, 2017.

[FLS10] G. Froyland, S. Lloyd, and N. Santitissadeekorn. Coherent sets for nonautonomous dynamical systems. Physica D: Nonlinear Phenomena, 239:1527–1541, 2010.

[FNST86] C. Foias, B. Nicolaenko, G. R. Sell, and R. Temam. Inertial manifolds for the Kuramoto-Sivashinsky equation and an estimate of their lowest dimension. IMA Preprint Series, 279, 1986.
[FP09] G. Froyland and K. Padberg. Almost-invariant sets and invariant manifolds – connecting probabilistic and geometric descriptions of coherent structures in flows. Physica D, 238:1507–1523, 2009.

[FR99] P. K. Friz and J. C. Robinson. Smooth attractors have zero 'thickness'. Journal of Mathematical Analysis and Applications, 240(1):37–46, 1999.

[FS87] J. D. Farmer and J. J. Sidorowich. Predicting chaotic time series. Physical Review Letters, 59(8):845–848, 1987.

[GH13] J. Guckenheimer and P. Holmes. Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, volume 42. Springer Science & Business Media, 2013.

[GP83] P. Grassberger and I. Procaccia. Measuring the strangeness of strange attractors. Physica D: Nonlinear Phenomena, 9(1-2):189–208, 1983.

[GR70] G. H. Golub and C. Reinsch. Singular value decomposition and least squares solutions. Numerische Mathematik, 14(5):403–420, 1970.

[GS84] P. Glendinning and C. Sparrow. Local and global behavior near homoclinic orbits. Journal of Statistical Physics, 35(5-6):645–696, 1984.

[Hal64] J. H. Halton. Algorithm 247: radical-inverse quasi-random point sequence. Communications of the ACM, 7(12):701–702, 1964.

[Hal71] J. Hale. Functional Differential Equations, volume 3. Springer-Verlag, 1971.

[Hal10] J. K. Hale. Asymptotic Behavior of Dissipative Systems, volume 25. American Mathematical Society, 2010.

[Hen06] D. Henry. Geometric Theory of Semilinear Parabolic Equations, volume 840 of Lecture Notes in Mathematics. Springer-Verlag, 2006.

[Hir12] M. W. Hirsch. Differential Topology, volume 33. Springer Science & Business Media, 2012.

[HK99] B. R. Hunt and V. Y. Kaloshin. Regularity of embeddings of infinite-dimensional fractal sets into finite-dimensional spaces. Nonlinearity, 12(5):1263–1275, 1999.

[HL93] J. K. Hale and S. M. V. Lunel. Introduction to Functional Differential Equations, volume 99 of Applied Mathematical Sciences. Springer-Verlag, 1993.

[HLBR12] P. Holmes, J. L. Lumley, G.
Berkooz, and C. W. Rowley. Turbulence, Coherent Structures, Dynamical Systems and Symmetry. Cambridge University Press, 2012.

[HNZ86] J. M. Hyman, B. Nicolaenko, and S. Zaleski. Order and complexity in the Kuramoto-Sivashinsky model of weakly turbulent interfaces. Physica D: Nonlinear Phenomena, 23(1-3):265–292, 1986.

[HP09] T. Hillen and K. J. Painter. A user's guide to PDE models for chemotaxis. Journal of Mathematical Biology, 58(1-2):183–217, 2009.

[Hsu87] C. S. Hsu. Cell-to-Cell Mapping: A Method of Global Analysis for Nonlinear Systems, volume 64 of Applied Mathematical Sciences. Springer-Verlag, 1987.

[HSY92] B. R. Hunt, T. Sauer, and J. A. Yorke. Prevalence: a translation-invariant 'almost every' on infinite-dimensional spaces. Bulletin of the American Mathematical Society, 27(2):217–238, 1992.

[Hun93] F. Y. Hunt. Monte Carlo approach to the approximation of invariant measures. National Institute of Standards and Technology, NISTIR 4980, 1993.

[JBC+07] D. Jacob, L. Bärring, O. B. Christensen, J. H. Christensen, M. de Castro, M. Deque, F. Giorgi, S. Hagemann, M. Hirschi, R. Jones, E. Kjellström, G. Lenderink, B. Rockel, E. Sánchez, C. Schär, S. I. Seneviratne, S. Somot, A. van Ulden, and B. van den Hurk. An inter-comparison of regional climate models for Europe: model performance in present-day climate. Climatic Change, 81:31–52, 2007.

[JJK01] M. E. Johnson, M. S. Jolly, and I. G. Kevrekidis. The Oseberg transition: visualization of global bifurcations for the Kuramoto–Sivashinsky equation. International Journal of Bifurcation and Chaos, 11(01):1–18, 2001.

[JK17] O. Junge and I. G. Kevrekidis. On the sighting of unicorns: a variational approach to computing invariant sets in dynamical systems. Chaos: An Interdisciplinary Journal of Nonlinear Science, 27(6):063102, 2017.

[JKT90] M. S. Jolly, I. G. Kevrekidis, and E. S. Titi. Approximate inertial manifolds for the Kuramoto-Sivashinsky equation: analysis and computations.
Physica D: Nonlinear Phenomena, 44(1-2):38–60, 1990. [Jol89] M. S. Jolly. Explicit construction of an inertial manifold for a reaction diffusion equation. Journal of Differential Equations, 78(2):220–261, 1989. [Jun00] O. Junge. Rigorous discretization of subdivision techniques. In Proceedings of EQUADIFF, volume 99, pages 916–918. World Scientific, 2000. [KCAS05] N. Kannathal, M. L. Choo, U. R Acharya, and P. K. Sadasivan. Entropies for detection of epilepsy in EEG. Computer Methods and Programs in Biomedicine, 80(3):187–194, 2005. [KGPS16] S. Klus, P. Gelß, S. Peitz, and C. Schütte. Tensor-based dynamic mode decomposition. arXiv preprint arXiv:1606.06625, 2016. [KH97] A. Katok and B. Hasselblatt. Introduction to the Modern Theory of Dynamical Systems, volume 54. Cambridge University Press, 1997. 139 Bibliography [Kif86] Y. Kifer. General random perturbations of hyperbolic and expanding transformations. Journal d’Analyse Mathématique, 47(1):111–150, 1986. [KKS16] S. Klus, P. Koltai, and C. Schütte. On the numerical approximation of the Perron-Frobenius and Koopman operator. Journal of Computational Dynamics, 3(1):51–79, 2016. [KM13] V. Kolmanovskii and A. Myshkis. Introduction to the Theory and Applications of Functional Differential Equations, volume 463 of Mathematics and Its Applications. Springer Netherlands, 2013. [KM18] M. Korda and I. Mezić. On Convergence of Extended Dynamic Mode Decomposition to the Koopman Operator. Journal of Nonlinear Science, 28(2):687–710, 2018. [KNK ` 18] S. Klus, F. Nüske, P. Koltai, H. Wu, I. G. Kevrekidis, C. Schütte, and F. Noé. Data-Driven Model Reduction and Transfer Operator Approximation. Journal of Nonlinear Science, pages 1–26, 2018. [KNS90] I. G. Kevrekidis, B. Nicolaenko, and J. C. Scovel. Back in the saddle again: a computer assisted study of the Kuramoto-Sivashinsky equation. SIAM Journal on Applied Mathematics, 50(3):760–790, 1990. [KO99] B. Krauskopf and H. Osinga. Two-dimensional global manifolds of vector fields. 
Chaos: An Interdisciplinary Journal of Nonlinear Science, 9(3):768–774, 1999. [KOD ` 05] B. Krauskopf, H. M. Osinga, E. J. Doedel, M. E. Henderson, J. Guckenheimer, A. Vladimirsky, M. Dellnitz, and O. Junge. A survey of methods for computing(un)stable manifolds of vector fields. International Journal of Bifurcation and Chaos, 15(03):763–791, 2005. [Kol10] P. Koltai. Efficient approximation methods for the global long-term behavior of dynamical systems: Theory, algorithms and examples. PhD thesis, Logos Verlag, 2010. [Koo31] B. O. Koopman. Hamiltonian systems and transformation in Hilbert space. Proceedings of the National Academy of Sciences, 17(5):315–318, 1931. [KR04] I. Kukavica and J. C. Robinson. Distinguishing smooth functions by a finite number of point values, and a version of the Takens embedding theorem. Physica D: Nonlinear Phenomena, 196:45–66, 2004. [KT76] Y. Kuramoto and T. Tsuzuki. Persistent propagation of concentration waves in dissipative media far from thermal equilibrium. Progress of Theoretical Physics, 55(2):356–369, 1976. 140 Bibliography [KT05] A. K. Kassam and L. N. Trefethen. Fourth-order time-stepping for stiff pdes. SIAM Journal on Scientific Computing, 26(4):1214–1233, 2005. [Kua93] Y. Kuang. Delay Differential Equations: With Applications in Population Dynamics, volume 191. Academic Press, 1993. [LDBK17] Q. Li, F. Dietrich, E. M. Bollt, and I. G. Kevrekidis. Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the Koopman operator. Chaos: An Interdisciplinary Journal of Nonlinear Science, 27(10):103111, 2017. [LLL ` 02] Y. C. Liang, H. P. Lee, S. P. Lim, W. Z. Lin, K. H. Lee, and C. G. Wu. Proper orthogonal decomposition and its applications – part I: Theory. Journal of Sound and Vibration, 252(3):527–544, 2002. [LM13] A. Lasota and M. C. Mackey. Chaos, Fractals, and Noise: Stochastic Aspects of Dynamics, volume 97. Springer Science& Business Media, 2013. [Lor63] E. N. Lorenz. 
Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 20(2):130–141, 1963. [Lor84] E. N. Lorenz. The local structure of a chaotic attractor in four dimensions. Physica D: Nonlinear Phenomena, 13(1-2):90–104, 1984. [LR09] E. Liz and G. Röst. On the global attractor of delay differential equations with unimodal feedback. Discrete& Continuous Dynamical Systems – A, 24(4), 2009. [May74] R. M. May. Biological populations with nonoverlapping generations: stable points, stable cycles, and chaos. Science, 186(4164):645–647, 1974. [MB04] I. Mezić and A. Banaszuk. Comparison of systems with complex behavior. Physica D: Nonlinear Phenomena, 197(1-2):101–133, 2004. [MBS93] T. M. Martinetz, S. G. Berkovich, and K. J. Schulten.’Neural-gas’ network for vector quantization and its application to time-series prediction. IEEE Transactions on Neural Networks, 4(4):558–569, 1993. [Mez05] I. Mezić. Spectral properties of dynamical systems, model reduction and decompositions. Nonlinear Dynamics, 41(1):309–325, 2005. [Mez13] I. Mezić. Analysis of fluid flows via spectral properties of the Koopman operator. Annual Review of Fluid Mechanics, 45:357–378, 2013. [MG77] M. C. Mackey and L. Glass. Oscillation and chaos in physiological control systems. Science, 197(4300):287–289, 1977. 141 Bibliography [Mn81] R. Mañé. On the dimension of the compact invariant sets of certain non-linear maps. In Dynamical Systems and Turbulence, Warwick 1980, Lecture Notes in Mathematics, volume 898, pages 230–242. SpringerVerlag, 1981. [MP76] J. Mallet-Paret. Negatively invariant sets of compact maps and an extension of a theorem of cartwright. Journal of Differential Equations, 22(2):331–348, 1976. [MSR ` 97] K. R. Müller, A. J. Smola, G. Rätsch, B. Schölkopf, J. Kohlmorgen, and V. Vapnik. Predicting time series with support vector machines. In International Conference on Artificial Neural Networks, pages 999–1004. Springer, 1997. [NAM ` 03] B. R. Noack, K. Afanasiev, M. Morzyński, G. 
Tadmor, and F. Thiele. A hierarchy of low-dimensional models for the transient and post-transient cylinder wake. Journal of Fluid Mechanics, 497:335–363, 2003. [NW06] J. Nocedal and S. J. Wright. Numerical Optimization. Springer-Verlag, 2006. [OHK06] W. Ott, B. Hunt, and V. Kaloshin. The effect of projections on fractal sets and measures in Banach spaces. Ergodic Theory and Dynamical Systems, 26(3):869–891, 2006. [Osb75] J. Osborn. Spectral approximation for compact operators. Mathematics of Computation, 29(131):712–725, 1975. [PDM12] J. J. Palis and W. De Melo. Geometric Theory of Dynamical Systems: An Introduction. Springer Science& Business Media, 2012. [Pei17] S. Peitz. Exploiting Structure in Multiobjective Optimization and Optimal Control. PhD thesis, Paderborn University, 2017. [PGS17] K. Padberg-Gehle and C. Schneide. Network-based study of Lagrangian transport and mixing. Nonlinear Processes in Geophysics, 24(4):661– 671, 2017. [PH11] K. J. Painter and T. Hillen. Spatio-temporal chaos in a chemotaxis model. Physica D: Nonlinear Phenomena, 240(4):363–375, 2011. [PK17] S. Peitz and S. Klus. Koopman operator-based model reduction for switched-system control of PDEs. arXiv preprint arXiv:1710.06759, 2017. [Pol93] M. Pollicott. Lectures on ergodic theory and Pesin theory on compact manifolds. Number 180 in London Mathematical Society Lecture Notes Series. Cambridge University Press, 1993. 142 Bibliography [PRK92] J. C. Principe, A. Rathie, and J. M. Kuo. Prediction of chaotic time series with neural networks and the issue of dynamic modeling. International Journal of Bifurcation and Chaos, 2(04):989–996, 1992. [Rem96] D. Rempfer. Investigations of boundary layer transition via Galerkin projections on empirical eigenfunctions. Physics of Fluids, 8(1):175–188, 1996. [RMB ` 09] C. W. Rowley, I. Mezić, S. Bagheri, P. Schlatter, and D. S. Henningson. Spectral analysis of nonlinear flows. Journal of Fluid Mechanics, 641:115–127, 2009. [RMB ` 10] C. W. Rowley, I. 
Mezić, S. Bagheri, P. Schlatter, and D. S. Henningson. Reduced-order models for flow control: balanced models and Koopman modes. In Seventh IUTAM Symposium on Laminar-Turbulent Transition, IUTAM Bookseries, pages 43–50. Springer, 2010. [Rob94] J. C. Robinson. Inertial manifolds for the Kuramoto-Sivashinsky equation. Physics Letters A, 184(2):190–193, 1994. [Rob05] J. C. Robinson. A topological delay embedding theorem for infinitedimensional dynamical systems. Nonlinearity, 18:2135–2143, 2005. [Rob08] J. C. Robinson. A topological time-delay embedding theorem for infinitedimensional cocycle dynamical systems. Discrete& Continuous Dynamical Systems – B, 9(3-4), 2008. [Rob10] J. C. Robinson. Dimensions, Embeddings, and Attractors, volume 186 of Cambridge Tracts in Mathematics. Cambridge University Press, 2010. [Row06] C. W. Rowley. Model reduction for fluids, using balanced proper orthogonal decomposition. In Modeling And Computations In Dynamical Systems: In Commemoration of the 100th Anniversary of the Birth of John von Neumann, pages 301–317. World Scientific, 2006. [RTV17] S. Rogg, S. Trenz, and S. Volkwein. Trust-Region POD using APosteriori Error Estimation for Semilinear Parabolic Optimal Control Problems. Konstanzer Schriften in Mathematik, 359, 2017. [RW07] G. Röst and J. Wu. Domain-decomposition method for the global dynamics of delay differential equations with unimodal feedback. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 463(2086):2655–2669, 2007. [Sch10] P. J. Schmid. Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics, 656:5–28, 2010. 143 Bibliography [SHD01] C. Schütte, W. Huisinga, and P. Deuflhard. Transfer operator approach to conformational dynamics in biomolecular systems. In Ergodic Theory, Analysis, and Efficient Simulation of Dynamical Systems, pages 191– 223. Springer-Verlag, 2001. [Sir87] L. Sirovich. 
Turbulence and the dynamics of coherent structures part I: coherent structures. Quarterly of Applied Mathematics, 45(3):561–571, 1987. [Siv77] G. Sivashinsky. Nonlinear analysis of hydrodynamic instability in laminar flames – I. Derivation of basic equations. Acta Astronautica, 4(1112):1177–1206, 1977. [SMRH16] Y. Susuki, I. Mezić, F. Raak, and T. Hikihara. Applied Koopman operator theory for power systems technology. Nonlinear Theory and Its Applications, IEICE, 7(4):430–459, 2016. [Spr94] J. C. Sprott. Some simple chaotic flows. Physical Review E, Statistical physics, plasmas, fluids, and related interdisciplinary topics, 50(2):R647–R650, 1994. [Sta99] J. Stark. Delay embeddings for forced systems. I. Deterministic forcing. Journal of Nonlinear Science, 9(3):255–332, 1999. [Stu12] E. Stumpf. On a differential equation with state-dependent delay. Journal of Dynamics and Differential Equations, 24(2):197–248, 2012. [SV09] T. Sahai and A. Vladimirsky. Numerical methods for approximating invariant manifolds of delayed systems. SIAM Journal on Applied Dynamical Systems, 8(3):1116–1135, 2009. [SYC91] T. Sauer, J. A. Yorke, and M. Casdagli. Embedology. Journal of Statistical Physics, 65(3-4):579–616, 1991. [Tak81] F. Takens. Detecting strange attractors in turbulence. In Dynamical Systems and Turbulence, Warwick 1980, Lecture Notes in Mathematics, volume 898, pages 366–381. Springer-Verlag, 1981. [Tem97] R. Temam. Infinite-Dimensional Dynamical Systems in Mechanics and Physics, volume 68 of Applied Mathematical Sciences. Springer-Verlag, 1997. [Tes12] G. Teschl. Ordinary Differential Equations and Dynamical Systems, volume 140 of Graduate Studies in Mathematics. American Mathematical Society, 2012. [The90] J. Theiler. Estimating fractal dimension. Journal of the Optical Society of America A, 7(6):1055–1073, 1990. 144 Bibliography [TRL ` 14] J. H. Tu, C. W. Rowley, D. M. Luchtenburg, S. L. Brunton, and J. N. Kutz. On dynamic mode decomposition: theory and applications. 
Journal of Computational Dynamics, 1(2):391–421, 2014. [Tu12] P. N. V. Tu. Dynamical Systems: An Introduction with Applications in Economics and Biology. Springer-Verlag, 2012. [Tuc02] W. Tucker. A rigorous ODE solver and Smale’s 14th problem. Foundations of Computational Mathematics, 2(1):53–117, 2002. [TW07] J. I. Tello and M. Winkler. A chemotaxis system with logistic source. Communications in Partial Differential Equations, 32(6):849–877, 2007. [Vol11] S. Volkwein. Model reduction using proper orthogonal decomposition. Lecture Notes, Institute of Mathematics and Scientific Computing, University of Graz, see http: // www. math. uni-konstanz. de/ numerik/ personen/ volkwein/ teaching/ POD-Vorlesung. pdf , 2011. [Whi36] H. Whitney. Differentiable Manifolds. Annals of Mathematics, 37(3):645–680, 1936. [Wig03] S. Wiggins. Introduction to Applied Nonlinear Dynamical Systems and Chaos, volume 2 of Texts in Applied Mathematics. Springer-Verlag, 2003. [Wil04] S. Willard. General Topology. Dover Books on Mathematics, 2004. [Win10] M. Winkler. Boundedness in the higher-dimensional parabolic-parabolic chemotaxis system with logistic source. Communications in Partial Differential Equations, 35(8):1516–1537, 2010. [WKR15] M. O. Williams, I. G. Kevrekidis, and C. W. Rowley. A data–driven approximation of the Koopman operator: Extending dynamic mode decomposition. Journal of Nonlinear Science, 25(6):1307–1346, 2015. [Yos80] K. Yosida. Functional Analysis. Classics in Mathematics. SpringerVerlag, 1980. [ZBMM06] H. Zhang, A. C. Berg, M. Maire, and J. Malik. SVM-KNN: Discriminative nearest neighbor classification for visual category recognition. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2, pages 2126–2136. IEEE, 2006. [ZDG18] A. Ziessler, M. Dellnitz, and R. Gerlach. The numerical computation of unstable manifolds for partial differential equations by embedding techniques. In preparation, 2018. [ZPD18] A. Ziessler, S. 
Peitz, and M. Dellnitz. A set-oriented Koopman operator based continuation method. In preparation, 2018. 145 Bibliography [ZW92] J. P. Zbilut and C. L. Webber. Embeddings and delays as derived from quantification of recurrence plots. Physics Letters A, 171(3-4):199–203, 1992. 146