Announcing CoMETS, our low code data science and model experimentation library

Introduction

A Cosmo Tech Simulation Digital Twin is more than a computerized replica of a real-life system. It also makes it possible to simulate the system's behavior over time, and thereby to test the impact of external perturbations on key performance indicators (KPIs), or the impact of modifying some parameter of the system on its future evolution, generally with the objective of maximizing its performance.

We call this process experimentation because it consists in running virtual experiments on a digital twin, much as engineers and scientists do to better understand and improve a physical system before acting on it in the real world. The term also refers to the usual iterative workflow a data scientist follows to test and tune a model before it is deployed to prescribe actions to operators and business decision makers.

General outline of the experimentation process on a Simulation Digital Twin

In the experimentation context, the Cosmo Tech simulator receives two types of input:

Experimentation is the process of launching multiple, well-chosen simulations while varying levers, in order to gain meaningful insights into the system's behavior. Often, manually changing a few decision variables to create a small number of scenarios is not sufficient to answer your questions about the system; additional tools are needed to perform automatic and efficient experimentation. CoMETS (the Cosmo Model Experimentation Toolbox) is designed to meet this need.
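To make the idea concrete, here is a minimal, library-agnostic sketch of such an experiment (a simple parameter scan) written in plain Python. The run_simulation function and the lever names are hypothetical placeholders standing in for a call to a simulator; they are not the CoMETS API.

```python
import itertools

def run_simulation(levers):
    """Hypothetical stand-in for one simulator run: takes a dict of
    decision variables (levers) and returns a KPI value."""
    # Toy example: a KPI that depends on two levers.
    return 100 - (levers["stock_level"] - 40) ** 2 / 10 + 2 * levers["n_trucks"]

# Values to explore for each lever.
lever_space = {
    "stock_level": [20, 40, 60],
    "n_trucks": [1, 2, 3],
}

# Launch one simulation per combination of lever values (a parameter scan).
results = []
for values in itertools.product(*lever_space.values()):
    levers = dict(zip(lever_space.keys(), values))
    results.append((levers, run_simulation(levers)))

# Keep the combination with the best KPI.
best_levers, best_kpi = max(results, key=lambda r: r[1])
print(f"Best levers: {best_levers}, KPI = {best_kpi:.1f}")
```

Tools like CoMETS take over this kind of boilerplate so that the user only has to describe which levers to vary and which outputs to collect.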

What is CoMETS? Advanced automatic experimentation

CoMETS is a Python library, released under the Cosmo Tech license, that aims to simplify automated experimentation, synthetic data generation, and the hybridization of AI with Simulation Digital Twins. It provides a consistent, easy-to-use interface for manipulating the simulator and building advanced experiments.

In this first version, the following types of experimentation are available:

Other experiments, such as sensitivity analysis, machine-learning-based surrogate modeling, model calibration and multi-objective optimization, will be made available in later versions. CoMETS also provides a way to easily define reinforcement learning environments for Cosmo Tech simulators.
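As an illustration of what a reinforcement learning environment around a simulator looks like, here is a minimal sketch that follows the Gymnasium reset/step convention. The toy dynamics and the class below are hypothetical placeholders, not CoMETS code.

```python
import random

class TinySimulatorEnv:
    """Toy environment following the Gymnasium reset/step convention.
    The dynamics stand in for a simulator advancing one time step."""

    def reset(self, seed=None):
        random.seed(seed)
        self.stock = 50.0                      # state: current stock level
        return self.stock, {}                  # observation, info

    def step(self, action):
        # action: quantity ordered at this time step
        demand = random.uniform(5, 15)         # uncertain external perturbation
        self.stock = max(self.stock + action - demand, 0.0)
        reward = -abs(self.stock - 50.0)       # penalize drifting from the target stock
        terminated, truncated = False, False
        return self.stock, reward, terminated, truncated, {}

env = TinySimulatorEnv()
obs, _ = env.reset(seed=0)
for _ in range(3):
    obs, reward, *_ = env.step(action=10.0)
    print(f"stock={obs:.1f}, reward={reward:.1f}")
```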

Why a new library?

The need for CoMETS first arose from our own use of Simulation Digital Twins in customer projects: regardless of the Digital Twin or the customer involved, we regularly wrote experimentation scripts to analyze, validate and optimize simulation results. We needed a consistent way to simplify this iterative process, increase our productivity, and open up access to advanced prescriptive algorithms for non-specialists, especially our customers' and partners' citizen data scientists.

Interesting alternative tools already exist for running experiments with simulators; however, none of them met all our requirements. We wanted the library to be:

One illustration of this modularity is how easily basic building blocks can be combined into more advanced experiments. For instance, combining an uncertainty analysis with an optimization yields a ‘robust optimization’ experiment. This kind of experiment is particularly useful in industry, as it lets you optimize the KPIs of your system under an uncertain context.
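As a rough illustration of the idea (not CoMETS code), here is a minimal sketch of robust optimization in plain Python: the inner loop estimates the expected KPI over an uncertain demand by Monte Carlo sampling, and the outer loop searches over the lever. The simulate function and its parameters are hypothetical.

```python
import random
import statistics

def simulate(stock_level, demand):
    """Hypothetical one-run simulator: cost of holding stock plus a
    penalty for unmet demand (the KPI to minimize)."""
    unmet = max(demand - stock_level, 0.0)
    return 0.5 * stock_level + 10.0 * unmet

def expected_cost(stock_level, n_samples=500):
    """Uncertainty analysis: average the KPI over sampled demand scenarios."""
    rng = random.Random(0)
    costs = [simulate(stock_level, rng.gauss(50, 15)) for _ in range(n_samples)]
    return statistics.mean(costs)

# Optimization layered on top of the uncertainty analysis:
# a simple grid search stands in for a real optimizer.
candidates = range(0, 121, 5)
best_stock = min(candidates, key=expected_cost)
print(f"Robust stock level: {best_stock}, expected cost: {expected_cost(best_stock):.1f}")
```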

Moreover, CoMETS is model-agnostic: it can also be used directly with pure Python models, or with any model exposing a Python interface (not only Cosmo Tech's digital twins), such as a machine learning model trained on simulated synthetic data.
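To show what "a pure Python model" means in practice, here is a hedged sketch in which the model being experimented on is just a Python function mapping named inputs to named outputs; the function and parameter names are illustrative and not part of CoMETS.

```python
def price_model(parameters):
    """A pure Python model: takes a dict of named inputs and returns a
    dict of named outputs."""
    margin = parameters["price"] - parameters["unit_cost"]
    demand = max(1000 - 8 * parameters["price"], 0)
    return {"profit": margin * demand, "demand": demand}

# The same experimentation logic applies whether the model is a Cosmo Tech
# simulator or a plain function: evaluate it on chosen inputs and compare KPIs.
for price in (40, 60, 80):
    print(price, price_model({"price": price, "unit_cost": 25}))
```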

Using CoMETS

A web application running a Simulation Digital Twin and CoMETS under the hood

The next blog post will illustrate, with a simple but concrete example, how CoMETS can be used to improve a supply chain facing uncertainties.