It's hard enough to be on the absolute cutting edge of any field. For me, that field is AI, specifically computer vision. We (Electrical Engineering) were asked to participate in D3EM to bring the D (data-driven) piece to the program. We bring the informatics. We bring the advanced algorithmic methodologies, the models, the nuts and bolts of what most refer to in a hand-wavy fashion as "AI." There are legs to that. We bring the legs.
This field does not move slowly. It advances by light years at almost every conference throughout the year. Things that are relatively new in "normal time" are centuries old in the machine learning world. Take, for instance, Generative Adversarial Networks (GANs). Most people are just now starting to hear about them and the impressive generation capabilities they bring to the fight. In the CV world, a GAN is already a time-tested and true concept; most people who work in CV can't imagine life without them. Hundreds, if not thousands, of flavors of GANs have been proposed since Ian Goodfellow introduced the first GAN at NIPS in 2014. They are capable of generating all sorts of data, approximating all sorts of distributions. For context, MLPs (neural networks) are nonlinear function approximators - that's why they're so attractive to almost every field: you can just pump data in and get a black box that learns the patterns, then classifies or regresses new points with no a priori knowledge of the underlying model. GANs did the same for data generation - producing approximated samples from distributions, images, time-series data like video and audio, all of it. GANs are evolving at light speed; they have come so far it's impossible to fully wrap your head around all of it. And the first one was proposed 6 years ago. 6 years. In that time there have been thousands of publications mentioning GANs, proposing new ones, or improving them in some capacity.
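The adversarial idea itself fits in a few dozen lines. Here's a deliberately tiny sketch - not from any paper, with toy architectures and made-up hyperparameters - where a linear "generator" learns to mimic samples from a 1D Gaussian while a logistic-regression "discriminator" tries to tell real samples from generated ones. The gradients are derived by hand from the standard cross-entropy losses:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# "Real" data: samples from N(3, 1). The generator must learn to mimic this.
def sample_real(n):
    return rng.normal(3.0, 1.0, size=n)

# Generator: G(z) = g_w * z + g_b, with noise z ~ N(0, 1).
g_w, g_b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(d_w * x + d_b), the probability that x is real.
d_w, d_b = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    # --- Discriminator update: push D(real) toward 1 and D(fake) toward 0 ---
    x_real = sample_real(batch)
    z = rng.normal(size=batch)
    x_fake = g_w * z + g_b
    p_real = sigmoid(d_w * x_real + d_b)
    p_fake = sigmoid(d_w * x_fake + d_b)
    # Hand-derived gradients of the binary cross-entropy loss.
    grad_d_w = np.mean((p_real - 1.0) * x_real) + np.mean(p_fake * x_fake)
    grad_d_b = np.mean(p_real - 1.0) + np.mean(p_fake)
    d_w -= lr * grad_d_w
    d_b -= lr * grad_d_b

    # --- Generator update: push D(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(size=batch)
    x_fake = g_w * z + g_b
    p_fake = sigmoid(d_w * x_fake + d_b)
    grad_g_w = np.mean((p_fake - 1.0) * d_w * z)
    grad_g_b = np.mean((p_fake - 1.0) * d_w)
    g_w -= lr * grad_g_w
    g_b -= lr * grad_g_b

samples = g_w * rng.normal(size=1000) + g_b
print(f"generated mean ~ {samples.mean():.2f}, target mean = 3.00")
```

Neither network ever sees the target distribution's parameters directly - the generator improves only through the discriminator's feedback, which is the core trick. Real GANs replace the two linear maps with deep networks and generate images instead of scalars, but the alternating update loop looks just like this.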
Let that sink in: a term coined 6 years ago gets 301m hits on a Google search. For context, MRI has 100m, particle physics has 182m, and relativity has 28m. In 6 years, this one concept has surpassed the volume of information (loosely interpreted, obviously) of fields that have existed for decades and changed the face of science. In 6 years.
Don't blink, you'll miss it.
As described above, Deep Learning is advancing at a breakneck pace. It takes one's full, 100% engagement just to keep up with the field, let alone make contributions to it. If you take your eye off that ball, even for a second, you're going to miss something - probably something important. You won't be thinking about the problems the DL community is facing and how to fix them. You won't see the new optimizer or algorithm when it comes out, the one all your peers are discussing at the "water cooler." You're now late to the party.
So, why would you take your eye off the ball? Why would you stop building and testing models, and keeping ahead of the literature as it comes out?
Maybe to look at another target. One that's completely unrelated.
Materials Science is rapidly adopting deep learning methodologies, but it is a few years behind. This is good and bad. It's good because a lot of DL-based methodologies can be ported to materials science; that would make a great career for a materials science researcher, as the work is very novel in that field. However, it's bad for the ML practitioner operating in that space. A new application of an existing DL framework is great in the materials science community, but the DL community is less interested. For instance, applying a GAN to a materials dataset is very new to the field of materials science, but GANs are not new to the DL community. Someone seeking a Ph.D. in a field like Electrical Engineering isn't going to get kudos for applying an existing framework to a new problem. A materials scientist would.
It is less than productive to split focus. Getting to "research level" understanding in two entirely different fields - here, DL and Materials Science - takes a significant amount of time and energy, and sanity. The concept of an "interdisciplinary team" is useful, but only if each expert understands the nuts and bolts of the others' fields. In some situations it's good for a DL researcher to explain how a CNN works to someone who doesn't understand it. In others, it's more effective for one individual who truly understands both fields to find the applications and parallels that only they can see.
In deep learning, you're either contributing to the common body of DL knowledge or applying it to a subdomain where it has a useful effect. Doing both concurrently is difficult, time-consuming, and sometimes proves ineffective.
At best, interdisciplinary research provides unique insights that advance multiple fields simultaneously. At worst, it leaves an individual spread too thin, looking left when they should be looking right, and eventually doomed to mediocrity in both fields.