The classical contraction principle is one of those basic results in Analysis with many fundamental applications. In this talk, we will examine a variational interpretation of it which turns out to be more flexible. In particular, it can be used to deal with situations where existence and uniqueness of solutions are known or expected. Though many classical situations can be treated, due to time restrictions we will focus on two representative examples: initial-value Cauchy problems for autonomous ODE systems, and non-linear, non-variational monotone PDEs in divergence form. It remains to be seen whether this perspective can help in new situations.
The tone of the talk will be elementary. No specialized background is required.
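For orientation, here is a minimal sketch of one standard variational recasting of the first example; it is an assumption that the talk follows this particular least-squares formulation, and the functional actually used may differ. The contraction principle asserts that a map $T\colon X\to X$ on a complete metric space $(X,d)$ satisfying $d(Tx,Ty)\le k\,d(x,y)$ with $0\le k<1$ has a unique fixed point. Variationally, fixed points are exactly the zeros, hence the global minimizers, of the non-negative functional $E(x)=d(x,Tx)$. For the autonomous Cauchy problem
\[
x'(t)=f(x(t)),\qquad x(0)=x_0,
\]
one may analogously minimize the error functional
\[
E(x)=\frac12\int_0^T \bigl|x'(t)-f(x(t))\bigr|^2\,dt
\]
over absolutely continuous paths with $x(0)=x_0$: a path with $E(x)=0$ is precisely a solution, and minimizing $E$ replaces iterating the Picard map.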
In applied mathematics, effective problem solving begins with precise problem formulation, which is why the initial problem-setting phase matters so much. Without a clearly defined problem, identifying suitable tools and techniques for its resolution becomes arduous and often futile. The transition from problem setting to problem solving is pivotal within the broader framework of knowledge advancement. Despite the remarkable progress of AI tools, they remain reliant on the groundwork laid by human intelligence. Mathematicians, with their ability to discern patterns and relationships among data and variables, play a crucial role in this phase. This lecture will introduce fundamental mathematical concepts encompassing both traditional machine learning and scientific machine learning. The latter offers a natural platform for the fusion of problem setting and problem solving, supported by deep domain expertise.
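To make the contrast concrete, here is a hedged, schematic comparison; the notation is illustrative and not taken from the lecture itself. Traditional supervised learning fits the parameters $\theta$ of a model $u_\theta$ to data alone, whereas a typical scientific machine learning formulation (for instance, physics-informed learning) adds a residual term encoding the governing equations supplied by domain expertise:
\[
\min_\theta\;\frac1N\sum_{i=1}^N \ell\bigl(u_\theta(x_i),y_i\bigr)
\qquad\text{versus}\qquad
\min_\theta\;\frac1N\sum_{i=1}^N \ell\bigl(u_\theta(x_i),y_i\bigr)
+\lambda\,\bigl\|\mathcal N[u_\theta]\bigr\|^2,
\]
where $\ell$ is a loss on the data, $\mathcal N$ is the differential operator of the underlying model, and $\lambda>0$ weights the physics residual. The second term is precisely where problem setting enters problem solving.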