The value of a theoretical model lies in its predictive power over the experimental data, and the power of the data lies in how well it constrains the model. The Bayesian method of inference captures the essence of this relationship through simple and straightforward mathematics, and it complements and generalises the well-known 'frequentist' methods of model evaluation, such as chi-squared minimisation and the standard p-value tests. This talk aims to show how Bayesian inference can be formally implemented for model evaluation while remaining intuitive and relatable. It will be pitched at an introductory level and illustrated with simple examples, so as to serve as a starting point for anyone wishing to apply this technique to their own problems.