An Applied Mathematician With an Unexpected Toolbox
Source: https://www.quantamagazine.org/an-applied-mathematician-strengthens-ai-with-pure-math-20230301/

Is it fair to describe the relationship between pure and applied math as ever-evolving?

Yes. It’s a bit unfortunate that we’re even discussing the relationship between pure and applied math. The fact that we are means they are treated as separate entities.

Look at the olden days. Look at Gauss, Fermat or Euler. Even people as late as von Neumann or Hilbert. They didn’t seem to make that distinction. To them, everything was pure math, and everything was applied math.

Gauss’s work is not just quadratic reciprocity and Gaussian curvature. It’s also things like least squares problems and trying to find trajectories of planets. Essentially, he invented linear regression. That’s very important in statistics.

Look at Hilbert’s famous list of 23 problems. Some of them have deep roots in applied math and dynamical systems. Some of them have roots in pure math and logic.

Von Neumann was interested in quantum mechanics, mathematical logic, numerical analysis, game theory and operator algebras.

Of course both areas are now so broad that it’s impossible for anyone to know everything. There are certain things that a pure mathematician, I think, ought to know in applied math. And applied mathematicians have a lot to gain, frankly speaking, by increasing their awareness of modern tools in geometry, topology and algebra.

In a 2020 paper, you connected deep neural networks with topology. How?

It used to be that a computer found it very hard to do something that a human can do easily: recognizing, say, that a coffee mug isn’t a cat. Even a young child can do it relatively easily. But a computer didn’t have that kind of capacity.

That started to change in about 2012. Deep neural networks, meaning neural networks with many layers, were key. What happened, I guess, is that the layers mean something. That’s my take.

I studied this with my Ph.D. student Greg Naitzat, who is now at Facebook. The idea was: Let’s take, for instance, the set of all cat images and the set of all images that aren’t cats. We’re going to view them as [topological shapes, or manifolds]. One is the manifold of cats and the other is the manifold of non-cats. These are going to be intertwined in some complicated way. Why? Because there are certain things that look very much like cats but are not a cat. Mountain lions sometimes get mistaken for cats; so do replicas. The big thing is that the two manifolds are intertwined in some very complex manner.
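To make that picture concrete, here is a minimal sketch, not from the interview itself: two linked circles standing in for the cat and non-cat manifolds. The data is purely illustrative and assumes only numpy.

```python
# Illustrative sketch (not the authors' experiment): two linked circles as
# stand-ins for the intertwined "cat" and "non-cat" manifolds.
import numpy as np

rng = np.random.default_rng(1)
n = 500
t = rng.uniform(0.0, 2.0 * np.pi, n)

# Circle A lies in the xy-plane; circle B lies in the xz-plane, shifted so
# that each circle passes through the disk bounded by the other: the two
# are linked like chain links, intertwined yet disjoint.
cats = np.column_stack([np.cos(t), np.sin(t), np.zeros(n)])
non_cats = np.column_stack([1.0 + np.cos(t), np.zeros(n), np.sin(t)])

# No plane can separate two linked circles, so no linear classifier can tell
# these clouds apart; untangling them is what a deep network's layers do.
```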

How do these manifolds elucidate neural networks?

We carried out experiments to show that these manifolds get simplified as the data passes through the network’s layers. Originally, there were two complex shapes intricately intertwined, and layer by layer they get simplified. How do I measure this simplification in the shapes? Well, there’s a tool that’s a backbone of computational topology, and it allows us to measure the shape of these objects.

What is this tool?

It’s persistent homology.

First, homology is essentially a way to classify the holes in different types of geometric objects up to deformation. Holes that look very different geometrically can look identical from the perspective of homology.
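For a concrete illustration, here is a minimal sketch assuming the open-source gudhi library, which is my choice and not mentioned in the interview: the homology of a hollow triangle, the simplest combinatorial stand-in for a circle.

```python
# Illustrative sketch (assuming the gudhi library): homology of a hollow
# triangle. Up to deformation it is a circle, and homology sees them as equal.
import gudhi

st = gudhi.SimplexTree()
for edge in [[0, 1], [1, 2], [0, 2]]:
    st.insert(edge)  # three edges, interior left unfilled

st.compute_persistence()  # required before reading off Betti numbers
# Betti numbers count holes by dimension: b0 = connected components,
# b1 = one-dimensional loops. Expect [1, 1], the same as a perfect circle.
print(st.betti_numbers())
```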

What if I only have points sampled from a manifold rather than knowledge of the entire manifold? Like, for instance, the image of a cat: What’s the difference between the image of a cat you see on a computer screen and the actual cat itself? An image has pixels, so if you zoom in far enough, you’re just going to see discrete dots. In that case, how do I talk about homology?
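That question is exactly what persistent homology answers. Here is a minimal sketch assuming the open-source ripser package (again my choice, not mentioned in the interview): estimating the homology of a circle from nothing but noisy sample points.

```python
# Illustrative sketch (assuming the ripser package): persistent homology of
# discrete points sampled from a circle, a manifold with one 1-dim hole.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
points = np.column_stack([np.cos(theta), np.sin(theta)])
points += rng.normal(scale=0.05, size=points.shape)  # sampling noise

# Grow balls around every point and track the holes that appear and fill in
# as the radius increases; each hole gets a birth radius and a death radius.
diagrams = ripser(points, maxdim=1)["dgms"]

# Long-lived loops are genuine topology; short-lived ones are noise.
h1 = diagrams[1]
lifetimes = h1[:, 1] - h1[:, 0]
print("persistent loops:", int(np.sum(lifetimes > 0.5)))  # expect 1
```

The one long-lived bar that survives across scales is how persistence distinguishes the circle’s real hole from artifacts of the discrete dots.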
