I'm not really sure what type of problems you have in mind, but I've been working as a DS/RS in FAANG and your description doesn't seem to match any of the projects I've seen.
There is nothing magical about more data; it just gets conflated with other types of researchers who are interested in other types of questions. If the objective is to classify the content of pages, people don't need to have a robust and interpretable estimate of how each parameter affects the outcome. If the objective is to measure the sales impact of an online campaign, then having a robust and identifiable estimate of a parameter is important.
No amount of data will solve your identification question. And the idea of brute force doesn't really capture how ML problems are solved - if anything, you would spend more time thinking about the model itself than in a causal-inference setting.
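To make the identification point concrete, here's a minimal sketch (a made-up simulation, not from any real project): if an unobserved confounder drives both treatment and outcome, the naive regression estimate of the treatment effect stays biased no matter how big N gets. The specific numbers and variable names are just for illustration.

```python
# Hypothetical simulation: more data does not fix a missing-confounder
# identification problem. The naive slope converges to a biased value.
import numpy as np

rng = np.random.default_rng(0)
true_effect = 1.0

for n in (1_000, 100_000, 10_000_000):
    u = rng.normal(size=n)                               # unobserved confounder
    t = 0.8 * u + rng.normal(size=n)                     # treatment depends on u
    y = true_effect * t + 2.0 * u + rng.normal(size=n)   # outcome depends on t and u
    naive = np.polyfit(t, y, 1)[0]                       # OLS of y on t, ignoring u
    print(f"N={n:>10,}  naive estimate={naive:.3f}  (true effect={true_effect})")
```

Running this, the estimate sits around 1.98 at every sample size - the standard error shrinks, but the bias from the omitted confounder never does.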
The size of your data is completely orthogonal to the causal inference problem. Sometimes you want to measure causality, sometimes you don't.
I never stated it was one or the other. But if you think most big data is set up in a way to do actual causal inference, you are sorely mistaken. It's a bunch of disaggregated data with a lot of noise, largely collected independently of each other. Yes, you can do certain quasi-experimental approaches with it, but it's rarely structured in a manner that allows you to do this cleanly. It's largely just brute statistical work with extremely large Ns to increase confidence.