Oct 09, 2018

As for over-fitting, I am not in my 50s, and I knew what over-fitting was from my undergraduate years (or high school?). The idea is commonplace in engineering. Numerous times, when encountering a "new term" in a different field, such as over-fitting in machine learning, I have found myself thinking in overly complicated ways, because I never imagined that something I considered so simple could be the central point of discussion. Perhaps the physicists who could not describe what over-fitting was were in the same boat. The other unfortunate scenario is that the interviewers expected certain canned responses rather than the general ideas one actually knows.
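For what it's worth, the idea really is that simple. Here is a minimal, purely illustrative sketch (my own example, not from the discussion) using NumPy: fit a low-degree and a high-degree polynomial to a few noisy samples of sin(x), and compare the error on the training points with the error on fresh points.

```python
import numpy as np

# Ten noisy training samples of sin(x), and fifty clean test points.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 3, 10)
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(10)
x_test = np.linspace(0.1, 2.9, 50)
y_test = np.sin(x_test)

results = {}
for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    results[degree] = (train_err, test_err)
    print(f"degree {degree}: train MSE {train_err:.5f}, test MSE {test_err:.5f}")
```

The degree-9 polynomial has ten coefficients and can pass through all ten noisy points, so its training error is essentially zero while its error on new points is worse: it has memorized the noise. That mismatch between training and test error is all over-fitting is.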

But regarding other matters, I doubt that the sentiment you are suggesting is lost on everyone.

One point of view is that self-affirmation bias is easy to come by these days. The material on the basics of machine learning is beyond trivial, but the hype makes money. Even traditional engineering does that to some extent; at least that is what I saw in my education. The same goes for basic finance. I see hype attempts in non-"computer science" engineering research, which amount to noise. Some writers are oblivious; some are doing it for the money. The problem is that these days every noise is amplified, and at first I was mistaking noise for signal, particularly during a period when I was practically doing solo research despite having colleagues.

All that being said, and I may be too quick or too brief on this, large-scale computation may automate many aspects of our lives, and funding goes toward automation, whereas mathematical modeling does not automate anything in and of itself.

As for new mathematics, I doubt serious researchers would disagree with looking for more.

I came across this today; you might recognize one of the names. I haven't read it and doubt I will find the time for a while, but perhaps it could be of interest to you. https://arxiv.org/abs/1608.08225