Seminars

Math-Fi seminar on 12 Mar.

2021.03.12 Fri
  • Date: 12 Mar. (Fri.)
  • Place: On the Web
  • Time: 16:30 – 18:00
  • Speakers: Ju-Yi Yen (University of Cincinnati), I-Hsun Chen (Academia Sinica), Te-Chun Wang (Academia Sinica)
  • Title: Brownian Additive Functional Averaged

Math-Fi seminar on 18 Feb.

2021.02.18 Thu
  • Date: 18 Feb. (Thu.)
  • Place: On the Web
  • Time: 16:30 – 18:00
  • Speaker: Libo Li (University of New South Wales)
  • Title: Strong approximation of jump extended CIR and CEV processes and their mean-field extension
  • Abstract:
In this talk, we discuss the strong approximation of jump-extended CIR and CEV processes with alpha-stable jumps, and their mean-field extension. In particular, we discuss the Euler-Maruyama scheme, the derivation of positivity-preserving schemes, and, for the mean-field extension, the propagation of chaos property and the corresponding Euler-Maruyama scheme for the particle system.
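
For illustration, here is a minimal Python sketch of the classical full-truncation Euler-Maruyama scheme for the pure-diffusion CIR process. It is not taken from the talk: the positivity-preserving schemes for the jump-extended, alpha-stable case are not reproduced, and all names and parameters are illustrative only.

    import numpy as np

    def cir_full_truncation_euler(x0, kappa, theta, sigma, T, n_steps, rng=None):
        """Full-truncation Euler-Maruyama scheme for the diffusion CIR process
            dX_t = kappa * (theta - X_t) dt + sigma * sqrt(max(X_t, 0)) dW_t.
        A minimal sketch: the jump-extended, alpha-stable schemes of the talk
        would add a suitably truncated stable increment at each step.
        """
        rng = rng or np.random.default_rng()
        dt = T / n_steps
        x = np.empty(n_steps + 1)
        x[0] = x0
        for i in range(n_steps):
            xp = max(x[i], 0.0)  # truncation keeps the square root well defined
            dw = rng.normal(0.0, np.sqrt(dt))
            x[i + 1] = x[i] + kappa * (theta - xp) * dt + sigma * np.sqrt(xp) * dw
        return x

    # Example: one approximate path of a CIR process on [0, 1]
    path = cir_full_truncation_euler(x0=0.04, kappa=1.5, theta=0.04, sigma=0.3, T=1.0, n_steps=1000)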

Math-Fi seminar on 21 Jan.

2021.01.21 Thu up
  • Date: 21 Jan. (Thu.) 
  • Place: On the Web 
  • Time: 16:30 – 18:00
  • Speaker: Pierre Patie (Cornell University)
  • Title: Interweaving relations
  • Abstract:
In this talk, we introduce the concept of interweaving relations as a strengthening of the usual intertwining relations between Markov semigroups. We proceed by providing some interesting applications of this new idea, including the characterization of ergodic constants and hypercontractivity estimates for non-self-adjoint semigroups. We illustrate these results by presenting several examples that have emerged from the recent literature: discrete-to-continuous interacting particle models, degenerate hypoelliptic Ornstein-Uhlenbeck processes, and diffusion-to-jump Jacobi processes.
 
This talk is based on joint works with L. Miclo and with P. Cheridito, A. Srapionyan and A. Vaidyanathan.
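
For reference, the usual intertwining relation mentioned in the abstract can be stated as follows (a standard definition; the precise strengthening to an interweaving relation is given in the talk and not reproduced here): two Markov semigroups (P_t) and (Q_t) are intertwined through a Markov kernel Λ when

    % classical intertwining of Markov semigroups (P_t), (Q_t) via a kernel \Lambda
    P_t \, \Lambda \;=\; \Lambda \, Q_t \qquad \text{for all } t \ge 0 .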

Math-Fi seminar on 14 Jan.

2021.01.14 Thu up
  • Date: 14 Jan. (Thu.)
  • Place: On the Web
  • Time: 16:30 – 18:00 
  • Speaker: Azmi Maklouf (University of Tunis El Manar)
  • Title: Error estimates for De Vylder type approximations in ruin theory
  • Abstract: 
Owing to its practical usefulness, De Vylder's approximation of the ruin probability has been one of the most popular approximations in ruin theory and its applications to insurance. Surprisingly, only heuristic and numerical evidence has supported it: finding a mathematical estimate of its accuracy has remained an open problem, from the original paper by De Vylder (1978) through an attempted justification by Grandell (2000).
We carry out a mathematical and critical treatment of the problem. More generally, we consider De Vylder type approximations of any order k, based on fitting the first k moments of the classical risk reserve process. Moreover, we deal not only with the ruin probability, but also with the moments of the time of ruin, of the deficit at ruin, and of the surplus before ruin.
We estimate the approximation errors in terms of the safety loading coefficient, the initial reserve and the approximation order. We show their different behaviours, and the extent to which each relative error remains small or blows up, so that one has to be careful when using this approximation. Our estimates are confirmed by numerical examples.
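
As background, here is a Python sketch of the classical (order-3) De Vylder approximation: the risk process is replaced by one with exponential claims, fitted by moment matching, for which the ruin probability is explicit. The formulas below are the standard textbook ones, not the order-k approximations or error estimates of the talk.

    import numpy as np

    def de_vylder_ruin_probability(u, c, lam, p1, p2, p3):
        """Classical (third-order) De Vylder approximation of the ruin
        probability for the compound Poisson risk process
            R_t = u + c * t - sum_{i <= N_t} X_i,
        where p1, p2, p3 are the first three moments of the claim size X.
        The process is replaced by one with exponential claims matching the
        second and third cumulants of the aggregate claims, with the premium
        adjusted so that the drift also matches.
        """
        beta_t = 3.0 * p2 / p3                      # fitted exponential parameter
        lam_t = 9.0 * lam * p2**3 / (2.0 * p3**2)   # fitted Poisson intensity
        c_t = c - lam * p1 + lam_t / beta_t         # fitted premium rate
        rho = lam_t / (c_t * beta_t)                # requires rho < 1 (positive loading)
        return rho * np.exp(-(beta_t - lam_t / c_t) * u)

    # Example: exponential claims with mean 1 (p_k = k!), 10% safety loading;
    # in this case the approximation coincides with the exact ruin probability.
    print(de_vylder_ruin_probability(u=10.0, c=1.1, lam=1.0, p1=1.0, p2=2.0, p3=6.0))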

Math-Fi seminar on 17 Dec.

2020.12.17 Thu
  • Date: 17 Dec. (Thu.)
  • Place: On the Web
  • Time: 16:30 – 18:00
  • Speaker: Hideitsu Hino (Institute of Statistical Mathematics)

This seminar will be jointly organized with the “Ritsumeikan University Seminar on Applied Mathematics and Physics”, which is intended primarily for Ritsumeikan University members. The talk will be given in Japanese.

Math-Fi seminar on 10 Dec.

2020.12.10 Thu
  • Date: 10 Dec. (Thu.)
  • Place: On the Web
  • Time: 16:30 – 18:00 
  • Speaker: Anna Aksamit (University of Sydney)
  • Title: Progressive enlargements of filtration: overview, new types, and applications
  • Abstract:
I will start by reviewing the classical results about enlargement of filtration. The main challenge is to find conditions under which martingales in the reference filtration remain semimartingales in the larger filtration. If this is the case, the canonical decomposition is of particular interest. I will then present the enlargement of a reference filtration through the observation of a random time and a mark. The random time considered is such that its graph is included in a countable union of graphs of stopping times. The mark revealed at this random time is assumed to satisfy a generalised Jacod condition. The classical Jacod condition concerns initial enlargements and says that the conditional law of a random variable with respect to elements of the reference filtration is absolutely continuous with respect to its unconditional law. Our relaxation of Jacod's condition accounts for the dynamic structure of the problem. Finally, I will mention some applications of progressive enlargements, in particular to optimal stopping problems.
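
For reference, the classical Jacod condition mentioned in the abstract, for the initial enlargement of a filtration (F_t) with a random variable ξ, is usually written as follows (the generalised, dynamic version of the talk is not reproduced here):

    % classical Jacod (absolute-continuity) condition for initial enlargement
    P(\xi \in \cdot \mid \mathcal{F}_t)(\omega) \,\ll\, P(\xi \in \cdot)
    \qquad \text{for every } t \ge 0 \text{ and a.e. } \omega .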

Math-Fi seminar on 3 Dec.

2020.12.03 Thu
  • Date: 3 Dec. (Thu.)
  • Place: On the Web
  • Time: 18:00 – 19:30
  • Speaker: Josef Teichmann (ETH Zurich)
  • Title: Training algorithms and generalized Langevin dynamics
  • Abstract: 
We investigate generalized Langevin dynamics in the sense of Baudoin-Hairer-Teichmann and its convergence to a Gibbs measure of a loss function. We derive a pathwise decay formula for entropy, as in Karatzas-Schachermayer-Tschiderer, and obtain in this generalized context results similar to those of Hu-Ren-Siska-Szpruch, which are applied to the training of neural networks in machine learning.
(joint work with Robert Crowell, Christa Cuchiero, Yuuki Ida and Yuri Imamura)
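
For illustration, here is a minimal Python sketch of classical overdamped Langevin dynamics, whose invariant measure is the Gibbs measure of the loss. It does not implement the generalized (memory) dynamics in the sense of Baudoin-Hairer-Teichmann studied in the talk, and all names and parameters are illustrative.

    import numpy as np

    def langevin_descent(grad, theta0, eta=1e-3, beta=10.0, n_steps=10_000, rng=None):
        """Unadjusted overdamped Langevin iteration
            theta_{k+1} = theta_k - eta * grad(theta_k) + sqrt(2 * eta / beta) * xi_k,
        whose invariant measure is proportional to exp(-beta * L(theta)).
        A classical sketch only; the talk concerns a generalized Langevin
        dynamics that this does not implement.
        """
        rng = rng or np.random.default_rng()
        theta = np.asarray(theta0, dtype=float).copy()
        for _ in range(n_steps):
            noise = rng.standard_normal(theta.shape)
            theta += -eta * grad(theta) + np.sqrt(2.0 * eta / beta) * noise
        return theta

    # Example: iterates concentrate near the minimiser of L(theta) = |theta|^2 / 2
    sample = langevin_descent(grad=lambda th: th, theta0=np.ones(5))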

Math-Fi seminar on 26 Nov.

2020.11.26 Thu
  • Date: 26 Nov. (Thu.)
  • Place: On the Web
  • Time: 16:30 – 18:00 
  • Speaker: Atsushi Takeuchi (Tokyo Woman’s Christian University)
  • Title: Lévy processes on Riemannian manifolds

Math-Fi seminar on 20 Nov.

2020.11.17 Tue
  • Date: 20 Nov. (Fri.)
  • Place: On the Web (and the Tokyo Satellite Campus of Ritsumeikan University; if you would like to come to the campus, please contact us by email: ritsumeikanmathfiseminar@gmail.com)
  • Time: 19:00 – 20:00
  • Speaker: Tadashi Hayashi (Mitsubishi UFJ Trust and Banking)
  • Title: The existence and uniqueness of a solution to Double Barrier Backward Doubly Stochastic Differential Equations
  • Abstract:
Double barrier backward doubly stochastic differential equations (DB-BDSDEs, for short) are equations with two different directions of stochastic integrals: they involve both a standard “forward” stochastic integral and a “backward” stochastic integral, driven by two mutually independent standard Brownian motions, together with two reflecting barriers. This kind of equation combines backward doubly stochastic differential equations (BDSDEs, for short) and double barrier backward stochastic differential equations (DB-BSDEs, for short). The former were introduced by Pardoux and Peng, who proved the connection with a class of systems of quasilinear SPDEs and the existence and uniqueness of solutions to such SPDEs. The latter have been tackled by Hamadène et al. In this talk, we outline the proof of the existence and uniqueness of a solution to DB-BDSDEs, under appropriate conditions, using the so-called “penalization method”. At the end of the talk, we introduce some of the studies we are currently tackling.
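
For orientation, a schematic form of such an equation (the notation is illustrative and may differ from the talk): on [0, T], one seeks (Y, Z, K^+, K^-) with

    Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds
              + \int_t^T g(s, Y_s, Z_s)\,d\overleftarrow{B}_s
              + (K_T^+ - K_t^+) - (K_T^- - K_t^-)
              - \int_t^T Z_s\,dW_s,
    \qquad L_t \le Y_t \le U_t,

where W and B are the two independent Brownian motions, the integral in d\overleftarrow{B} is the backward one, and the increasing processes K^± act minimally (Skorokhod conditions) to keep Y between the barriers L and U.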

Math-Fi seminar on 5 Nov.

2020.11.05 Thu
  • Date: 5 Nov. (Thu.)
  • Place: On the Web
  • Time: 18:00 – 19:30
  • Speaker: Johannes Ruf (London School of Economics and Political Science)
  • Title: Hedging with linear regressions and neural networks
  • Abstract: 
We study the use of neural networks as nonparametric estimation tools for the hedging of options. To this end, we design a network, named HedgeNet, that directly outputs a hedging strategy given relevant features as input. This network is trained to minimise the hedging error instead of the pricing error. Applied to end-of-day and tick prices of S&P 500 and Euro Stoxx 50 options, the network is able to reduce the mean squared hedging error of the Black-Scholes benchmark significantly. We illustrate, however, that a similar benefit arises from a simple linear regression model that incorporates the leverage effect. Finally, we argue that the outperformance of neural networks reported in the previous literature is most likely due to a lack of data hygiene. In particular, data leakage is sometimes unnecessarily introduced by a faulty training/test data split, possibly along with an additional ‘tagging’ of data.
(Joint work with Weiguan Wang)
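
For illustration, here is a minimal Python sketch of the simplest linear hedging baseline of this type: an ordinary least-squares estimate of a single minimum-variance hedge ratio. The paper's regression benchmark additionally uses Black-Scholes sensitivities to capture the leverage effect; that specification is not reproduced here, and the variable names are illustrative.

    import numpy as np

    def minimum_variance_hedge_ratio(dV, dS):
        """Estimate a hedge ratio delta by least squares, i.e. the delta
        minimising the empirical mean of (dV - delta * dS)^2, where dV are
        one-period option price changes and dS the corresponding stock
        price changes. A deliberately simple linear baseline, not the
        specification used in the paper.
        """
        dV = np.asarray(dV, dtype=float)
        dS = np.asarray(dS, dtype=float)
        return float(np.dot(dS, dV) / np.dot(dS, dS))

    # Hypothetical usage, given arrays of observed one-period price changes:
    # delta_hat = minimum_variance_hedge_ratio(dV=option_changes, dS=stock_changes)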