Testing without Testing: Offline Model Evaluation and Counterfactual Machine Learning


The exponential growth of the Internet, driven by increasing reach and speeds, has put vast amounts of information and products on individual websites. To get the relevant information in front of the relevant audiences, metrics gathered through A/B tests have become the gold standard for many organizations, increasingly combined with AI.

In the real world, AI-based systems for personalization, search, ranking, and similar tasks rely heavily on A/B testing for improvement. But such testing can be costly and time-consuming, and businesses are often reluctant to use it because, like bandit approaches, it requires changing the live business. We discuss counterfactual machine learning, which enables testing models on offline data.

You’ll hear:

  • Why Counterfactual ML can make a difference
  • A technical introduction to offline policy evaluation
  • Examples of how to instrument existing business processes to enable Counterfactual ML
  • An introduction to Amazon SageMaker Experiments and Amazon SageMaker Model Monitor
  • A/B Testing Machine Learning Models in Production
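To give a flavor of the offline policy evaluation topic above: a standard starting point is the inverse propensity scoring (IPS) estimator, which reweights logged rewards by the ratio of the target policy's action probabilities to the logging policy's. The sketch below is illustrative only, not the presenters' method; the function name and the toy data are invented for the example.

```python
import numpy as np

def ips_estimate(rewards, logging_probs, target_probs):
    """Inverse propensity scoring (IPS) estimate of a target policy's
    expected reward, computed from data logged under a different policy.

    rewards       -- observed reward for each logged interaction
    logging_probs -- probability the logging policy gave to the logged action
    target_probs  -- probability the candidate policy gives to that same action
    """
    weights = target_probs / logging_probs  # importance weights
    return float(np.mean(weights * rewards))

# Hypothetical logged data: four interactions with their rewards and
# the action probabilities under the logging and candidate policies.
rewards = np.array([1.0, 0.0, 1.0, 0.0])
logging_probs = np.array([0.5, 0.5, 0.25, 0.25])
target_probs = np.array([0.9, 0.1, 0.8, 0.2])

print(ips_estimate(rewards, logging_probs, target_probs))  # → 1.25
```

Because the estimator only needs logged actions, rewards, and propensities, a candidate model can be scored this way without ever being exposed to live traffic, which is the motivation for instrumenting existing business processes to record those propensities.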
Presenter