Machine learning models in LityxIQ can be deployed and monitored in multiple ways. The main options available to you are described below.
Model Approvals and Production Versions
LityxIQ lets you ensure that models are approved before being put into a production environment. In addition, LityxIQ's model versioning lets you easily determine which version of the model is to be used for production deployments. See https://support.lityxiq.com/396887-Approving-and-Implementing-a-Model for more information on these concepts.
Deploying a Model in LityxIQ
The simplest way to deploy a LityxIQ model is to use scoring catalogs and scoring jobs to get predicted model scores against a new dataset. LityxIQ automatically handles everything related to the model, algorithm, variable transformations, and other details to ensure valid model scores can be computed. See https://support.lityxiq.com/340391-What-is-a-Scoring-Catalog for more information about scoring catalogs, and https://support.lityxiq.com/906477-Creating-a-Scoring-Job for more information about scoring jobs. You can then deploy to external applications by automating the export of the scoring catalog to an external system such as Snowflake or Redshift, or by pushing a file to a secure FTP site to be picked up and processed by other platforms.
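As one illustration of the downstream automation step, the sketch below filters a scored file and re-serializes it for pickup by another system. This is a minimal Python example under stated assumptions: the column names (customer_id, model_score) and the threshold-based filter are hypothetical, not a LityxIQ export format, and the actual transfer to Snowflake, Redshift, or an SFTP site is left to your integration tooling.

```python
# Hypothetical post-scoring step: filter a CSV exported from a LityxIQ
# scoring job and stage the result for pickup by an external system.
# The column layout (customer_id, model_score) is an assumption for
# illustration only.
import csv
import io

def filter_scores(scored_csv: str, threshold: float) -> list[dict]:
    """Keep only records whose model_score meets the threshold."""
    reader = csv.DictReader(io.StringIO(scored_csv))
    return [row for row in reader if float(row["model_score"]) >= threshold]

def to_staged_csv(rows: list[dict]) -> str:
    """Serialize filtered rows back to CSV for warehouse load or SFTP push."""
    if not rows:
        return ""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

sample = "customer_id,model_score\nA1,0.91\nA2,0.40\nA3,0.77\n"
kept = filter_scores(sample, 0.75)
staged = to_staged_csv(kept)
```

In practice the staged file would be handed to a COPY command in Snowflake/Redshift or an SFTP transfer; that part is environment-specific and omitted here.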
Deploying a Model in Your System
A LityxIQ model can also be deployed directly in your system using LityxIQ's Real-time Scoring API. See https://support.lityxiq.com/169519-Enabling-and-Using-the-Real-time-Scoring-RTS-API-in-LityxIQ for more information.
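To give a feel for what a real-time scoring integration looks like, here is a hedged sketch of building a JSON scoring request in Python. The endpoint URL, header names, authorization scheme, and request body shape are all placeholders; the actual contract is defined in the linked Real-time Scoring API documentation, not here.

```python
# Hypothetical client for a real-time scoring call. The URL, auth header,
# and JSON payload shape below are placeholders -- consult the LityxIQ
# RTS documentation for the actual request contract.
import json
import urllib.request

def build_scoring_request(url: str, api_key: str,
                          record: dict) -> urllib.request.Request:
    """Package one input record as a JSON POST to a scoring endpoint."""
    body = json.dumps({"records": [record]}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # placeholder auth scheme
        },
        method="POST",
    )

req = build_scoring_request(
    "https://example.invalid/rts/score",  # placeholder endpoint
    "YOUR_API_KEY",
    {"age": 41, "tenure_months": 18},     # illustrative model inputs
)
# To send for real: urllib.request.urlopen(req) -- omitted here, since the
# endpoint above is a placeholder.
```

The same request could of course be issued from any language or tool in your stack; the point is that your system calls the API per record (or batch) at decision time instead of scoring files offline.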
Monitoring a Deployed Model's Performance
Once deployed, LityxIQ lets you monitor the model's performance and variable drift using a Model Monitoring Dashboard. These dashboards are simple to set up. More documentation will be available soon.
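For intuition about what "variable drift" means, the sketch below computes the Population Stability Index (PSI), one common drift metric that compares a variable's binned distribution at training time against its distribution in recent scoring data. This is an illustrative calculation only; the source does not specify which metrics the Model Monitoring Dashboard uses internally.

```python
# Illustrative variable-drift check using the Population Stability Index
# (PSI). This is a common drift metric in general, not necessarily the
# calculation the Model Monitoring Dashboard performs.
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """PSI between two binned distributions (lists of bin proportions)."""
    total = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # guard against log(0) for empty bins
        a = max(a, 1e-6)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]  # variable distribution at training time
current  = [0.20, 0.25, 0.25, 0.30]  # distribution in recent scoring data
drift = psi(baseline, current)
# Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate shift,
# > 0.25 major shift warranting investigation or retraining.
```

A monitoring dashboard typically tracks a value like this per input variable over time, flagging variables whose distributions have shifted since the model was trained.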