A popular notion is that data-adaptive machine-learning models are the ne plus ultra of modeling technology, destined to make human involvement in model-making obsolete. However, the scope of data-adaptive models is fundamentally limited by the scope of the data available to train them, which is necessarily restricted to historical and/or categorical data, and by the scope of model meta-structures that can be learned from these data.
The most powerful models in the world are those that incorporate human intuition and creativity in conjunction with algorithmic adaptivity. The widely popular and powerful model subclass of deep neural networks provides a perfect illustration of this point. Deep neural networks perform remarkably well on many pattern classification and analysis problems, but their meta-structure did not emerge autonomously from algorithmic ingestion of voluminous data sets.
Instead, it was created by human experts who observed the meta-structure of the brain and then (crudely) mimicked it in algorithmic implementations. This is generally the case with all approaches to modeling, and the more conceptually complex the domain of interest, the more vital is the incorporation of human expertise into the creation of the model meta-structure.
Once this is done, the algorithms can then be unleashed to perform their computationally intensive inferences within that structure. But to imagine that the algorithms themselves are capable of the teleological feats needed to generate their own meta-structure without human involvement represents the triumph of hype over real-world experience with models.
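The division of labor described above can be made concrete with a minimal sketch (plain NumPy, illustrative only, not any particular production system): the meta-structure of the network below — the number of layers, their widths, and the activation function — is a human design decision fixed before training, while the algorithm's role is confined to fitting the weights within that structure by gradient descent.

```python
# Illustrative sketch: human-chosen meta-structure, algorithmically fitted weights.
import numpy as np

rng = np.random.default_rng(0)

# Meta-structure (human design choice): 2 inputs -> 4 tanh hidden units -> 1 output.
W1 = rng.normal(0.0, 0.5, (2, 4))
b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, (4, 1))
b2 = np.zeros(1)

# XOR: a toy problem that *requires* the hidden layer the human designed in.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def forward(X):
    h = np.tanh(X @ W1 + b1)          # hidden representation
    return h, h @ W2 + b2             # linear output

def mse(pred):
    return float(np.mean((pred - y) ** 2))

_, pred0 = forward(X)
loss_before = mse(pred0)

# The algorithm's job: adapt the parameters inside the fixed structure.
lr = 0.1
for _ in range(3000):
    h, pred = forward(X)
    g = 2.0 * (pred - y) / len(X)     # dLoss/dOutput
    gW2 = h.T @ g
    gb2 = g.sum(axis=0)
    gh = (g @ W2.T) * (1.0 - h ** 2)  # backprop through tanh
    gW1 = X.T @ gh
    gb1 = gh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred1 = forward(X)
loss_after = mse(pred1)
```

Nothing in the loop invents a new architecture; the optimizer only adjusts numbers inside the frame the human supplied, which is the point being made above.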
Dr. Terry Rickard, Chief Data Scientist, Meraglim™