I sadly work with the Cauchy distribution all the time. It is the natural distribution for the data I work with, although it appears as part of a mixture distribution.
People regularly mistake Cauchy-driven data for normally-driven data because they do not realize how common it is in nature. Poisson's comment to Laplace, that the Cauchy distribution only needed a footnote because it would not be encountered in practice, hasn't helped any.
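To see why the confusion is so easy, here is a stdlib-only sketch (the seed and sample sizes are arbitrary choices of mine): the central bulk of a standard Cauchy sample looks perfectly tame and bell-shaped, while the tails and the never-converging sample mean give the game away.

```python
import math
import random
import statistics

random.seed(1)

def cauchy_sample(n):
    # Inverse-CDF sampling: if U ~ Uniform(0,1), then tan(pi*(U - 1/2))
    # is a draw from the standard Cauchy distribution.
    return [math.tan(math.pi * (random.random() - 0.5)) for _ in range(n)]

sample = cauchy_sample(10_000)

# The middle of the sample looks innocuous; the extremes give it away.
inner = sorted(sample)[2_500:7_500]          # central 50% of the sample
print("central half, largest |x|:", max(abs(x) for x in inner))
print("whole sample, largest |x|:", max(abs(x) for x in sample))

# The sample mean never settles down, because the Cauchy mean does not exist.
for n in (100, 10_000, 1_000_000):
    print(n, statistics.fmean(cauchy_sample(n)))
```

A normality test on the trimmed middle of such a sample will often pass, which is exactly how Cauchy data sneaks past casual inspection.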
One thing that Bayesian statistics lacks, and that Frequentists have in abundance, is a nice, pre-built set of diagnostic tools.
I am philosophically neutral on the Bayesian/Fisherian/Neyman-Pearson issue. I am a deep believer in doing what works. I also have a strong preference for things like the t-test (properly used) as it requires no thinking and no work. Cost-benefit should never be ignored.
I do think that part of the genius of Frequentist statistics is its decision-tree, algorithmic nature. If you encounter data like this, then analyze it like that. If a different assumption holds, or a warning statistic flags a problem, then do something else instead as a robustness check.
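A toy version of that decision-tree spirit can be written down directly; this is only an illustrative sketch, and the excess-kurtosis cutoff of 1.0 is my own arbitrary assumption, not a standard rule: compute a tail-heaviness diagnostic, and route to the t-test only when the tails look compatible with normality.

```python
import statistics

def choose_procedure(xs):
    """Toy frequentist robustness check: if the sample's tails look too
    heavy for the normal model, route to a robust (median-based)
    procedure instead of the t-test.  The cutoff of 1.0 on excess
    kurtosis is an illustrative assumption, not a standard threshold."""
    n = len(xs)
    m = statistics.fmean(xs)
    s2 = statistics.pvariance(xs)
    # Sample excess kurtosis (biased estimator, fine for illustration).
    kurt = sum((x - m) ** 4 for x in xs) / (n * s2 ** 2) - 3.0
    return "t-test" if kurt < 1.0 else "robust/median procedure"

print(choose_procedure([0.1, -0.3, 0.2, 0.4, -0.2, 0.0, 0.3, -0.1]))
print(choose_procedure([0.1, -0.3, 0.2, 120.0, -0.2, 0.0, 0.3, -0.1]))
```

The point is not the particular cutoff but the shape of the workflow: a diagnostic fires, and the analysis branches automatically.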
But a Frequentist is presuming to know the true model of the world and so has to be able to do this. A Bayesian is inducing the model from the data. Still, it wouldn't kill us to have a decision tree like: if your problem looks like this, then the following likelihood functions could work to model it; and further, the following posterior simulations do X, Y, Z in practice. Except for the conjugate families, we don't really have this.
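The Cauchy case itself shows why the conjugate families only take you so far: there is no conjugate prior for the Cauchy location parameter, so the posterior has to be computed directly. Here is a minimal stdlib-only grid-approximation sketch (unit scale, flat prior over the grid, and invented data are all my assumptions), showing the kind of "posterior simulation in practice" entry such a decision tree could contain.

```python
import math

def cauchy_location_posterior(data, grid):
    """Grid approximation to the posterior of a Cauchy location
    parameter mu, with known unit scale and a flat prior over the
    grid.  No conjugate family exists here, so we evaluate the
    log-likelihood at each grid point and normalize."""
    logpost = []
    for mu in grid:
        ll = sum(-math.log(math.pi * (1.0 + (x - mu) ** 2)) for x in data)
        logpost.append(ll)
    m = max(logpost)                         # subtract max for stability
    w = [math.exp(lp - m) for lp in logpost]
    z = sum(w)
    return [wi / z for wi in w]

# Invented data: most observations near 1, plus one wild value,
# which is exactly what Cauchy-driven data tends to look like.
data = [1.2, 0.8, 1.1, 35.0, 0.9]
grid = [i / 100 for i in range(-500, 501)]   # mu in [-5, 5], step 0.01
post = cauchy_location_posterior(data, grid)
mode = grid[post.index(max(post))]
print("posterior mode:", mode)   # near 1: the heavy-tailed likelihood discounts the outlier
```

Notice the contrast with, say, a normal likelihood, where that single observation at 35 would drag the estimate far from 1; the Cauchy likelihood's heavy tails absorb it. That behavioral knowledge, catalogued per likelihood, is exactly what the decision tree would record.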