How-To

Delete Variables in a Scoring Catalog

A scoring catalog can accumulate many scores and variables over time, depending on how many scoring jobs populate it and on the settings of those jobs. You may wish to clean up the catalog by removing variables. To delete variables from a scoring catalog, follow the instructions below.
- Navigate to the Scoring Catalogs section of Predict.
- Select the desired scoring catalog and click Delete Variables from the Selected Catalog dropdown (or right cl...

Copy a Scoring Catalog

A scoring catalog can be copied to use as a starting point for a new, independent catalog. The primary keys are retained, but existing variables and scores will not be copied to the new catalog. To copy a scoring catalog, follow the instructions below.
- Navigate to the Scoring Catalogs section of Predict.
- Select the desired scoring catalog and click Copy Catalog from the Selected Catalog dropdown (or right click and select from that menu). Similar to creatin...

Algorithm Settings - Advanced Settings

Each algorithm in LityxIQ has an Advanced Settings tab available. This tab gives more advanced users detailed control over various parts of the pre-processing that LityxIQ performs when evaluating the data and variables in the model. These settings are initially populated with the most common and useful values, which work well across many situations, so it is not necessary to change these parameters to build good models. The options are explained below. Note that not all option...

Model Performance Analysis for Unsupervised Clustering Models

Unsupervised clustering and segmentation models have some performance analysis results that are unique to these types of models. For general information about Performance Analysis for LityxIQ machine learning models, see https://support.lityxiq.com/378413-Analyzing-Model-Performance. Some analysis types mentioned on that page are not available for clustering models. When analyzing performance for clustering models, the unique analysis types that become available are: Cluster Statist...

Model Performance Analysis for Forecasting / Time-Series Models

Time series forecasting models have some performance analysis results that are unique to these types of models. For general information about Performance Analysis for LityxIQ machine learning models, see https://support.lityxiq.com/378413-Analyzing-Model-Performance. Many analysis types mentioned on that page are not available for time series models. When analyzing performance for time series models, the unique analysis type that becomes available is:

Time Series Predictions

This c...

Model Versions, Iterations, and Management

LityxIQ provides fully automated model version control and model management of all machine learning models in the platform. All models are tracked through their version history, allowing you to review and compare performance, or deploy any prior version of the model. Each model has a Current Version and a Production Version (if it has been put into production). These are easily viewed in the list of models. The Current Version of a model consists of two components. Before the decima...

Engagement Analysis

Performing an Engagement Analysis on a model in LityxIQ lets you interactively explore how a marketing campaign will perform if the model were deployed to support it. The key question being answered is: with what percentage of the population should we engage in order to maximize the campaign's profitability? In addition, you can simulate the effects and tradeoffs of engaging with more or less than the optimal number. The performance of the model itself plays an important role in the results of...
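
The tradeoff described above can be sketched in code. The following is an illustrative example only, not the LityxIQ computation: given records sorted by model score, it finds the engagement depth that maximizes total profit. The revenue and cost figures, and the score/response data, are all made-up assumptions.

```python
# Illustrative only: find the engagement depth that maximizes campaign profit.
# revenue_per_response and cost_per_contact are assumed values, not LityxIQ outputs.
revenue_per_response = 50.0   # assumed revenue when a contacted person responds
cost_per_contact = 2.0        # assumed cost of contacting one person

# (model score, actual responder?) pairs, sorted by score descending
scored = sorted(
    [(0.9, 1), (0.8, 1), (0.7, 0), (0.6, 1), (0.5, 0),
     (0.4, 0), (0.3, 0), (0.2, 1), (0.1, 0), (0.05, 0)],
    key=lambda r: r[0], reverse=True,
)

best_pct, best_profit, profit = 0, float("-inf"), 0.0
for i, (_, responded) in enumerate(scored, start=1):
    profit += responded * revenue_per_response - cost_per_contact
    if profit > best_profit:
        best_profit, best_pct = profit, 100 * i // len(scored)

print(f"Engage top {best_pct}% of population; expected profit {best_profit:.2f}")
```

Engaging deeper than the optimal depth adds contact cost faster than it adds response revenue, which is exactly the tradeoff the Engagement Analysis lets you simulate interactively.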

Threshold Analysis and Error-Cost Analysis

In LityxIQ, you can perform an interactive analysis of decision thresholds and error costs based on a binary classification model's ROC curve. For a given model, the continuous scores (between 0 and 1) that it provides for scored records might be used in external applications to make a binary decision. This analysis area in LityxIQ allows you to understand, simulate, and optimize the result of choosing that binary decision threshold. To get started, select the model and click Evaluate...
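
As a rough sketch of the underlying idea (not the LityxIQ implementation), the snippet below picks the score threshold that minimizes total misclassification cost when false positives and false negatives carry different costs. The costs, scores, and labels are assumptions for illustration.

```python
# Illustrative sketch: choose the decision threshold on a 0-1 score that
# minimizes total error cost. Costs and data below are made-up assumptions.
cost_fp = 1.0   # assumed cost of a false positive
cost_fn = 5.0   # assumed cost of a false negative

scores = [0.95, 0.85, 0.7, 0.6, 0.4, 0.35, 0.2, 0.1]
labels = [1,    1,    0,   1,   0,   1,    0,   0]

def total_cost(threshold):
    # Predict positive when score >= threshold; tally weighted errors.
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp * cost_fp + fn * cost_fn

# Candidate thresholds: each observed score, plus the 0 and 1 endpoints
candidates = sorted(set(scores) | {0.0, 1.0})
best_threshold = min(candidates, key=total_cost)
print(best_threshold, total_cost(best_threshold))
```

With a false negative costing five times a false positive, the cost-minimizing threshold sits lower than 0.5, which is the kind of tradeoff this analysis area lets you explore.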

Deploying and Monitoring Models

Machine learning models in LityxIQ can be deployed and monitored in multiple ways. Some options available to you are explained below.

Model Approvals and Production Versions

LityxIQ lets you ensure that models are approved before being put into a production environment. In addition, LityxIQ's model versioning lets you easily determine which version of the model is to be used for production deployments. See https://support.lityxiq.com/396887-Approving-and-Implementing-a-Model for more inf...

Methods for Analyzing, Evaluating, and Exploring Models

After building a machine learning model in LityxIQ, you will likely want to analyze it. LityxIQ provides a wide variety of methods for evaluating and exploring models. They are briefly explained here, with links to more detailed overviews of each.
- Performance Analysis - analyze the quantitative and qualitative performance of the model, and compare models and versions against each other. See https://support.lityxiq.com/378413-Analyzing-Model-Performance for more.
- Model Explorer - interactively expl...

Analyzing Model Performance

Analyzing the expected performance of any predictive model is a very important part of the model building process. When you build a model in LityxIQ, it will automatically compute a wide variety of performance metrics that are applicable to the model you have built. See https://support.lityxiq.com/289834-Performance-Analysis-Metrics-to-Analyze-Classification-Models and https://support.lityxiq.com/050028-Performance-Analysis-Metrics-to-Analyze-Numeric-Prediction-Models for more information on the...

Using the Model Explorer

The Model Explorer allows you to dig deeply into the inter-relationships of variables in a predictive machine learning model, and their combined effects on model predictions. This opens up greater model interpretability for even the most complex machine learning algorithms like XGBoost and DeepNets. To begin exploring a model, select the model and click Evaluate & Explore -> Model Explorer from the Selected Model menu or the right click menu. When you open the Model Explorer, the inter...

Performance Analysis: Variable Importance

What is Variable Importance?

It is the relative importance of each variable in your model’s prediction. The most impactful variable gets a value of 100, and each subsequent variable is relative to the top one. If the second variable has a value of 88, then that variable is 88% as important as the top variable, or you could say 12% less important.

How is Variable Importance computed?

It is dependent on the algorithm, and is generally related to how the algorithm’s approach estimates its imp...
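
The 0-to-100 scaling described above can be sketched in a few lines. The raw importance values below are made up for illustration; they stand in for whatever algorithm-specific importances a model produces.

```python
# Illustrative sketch of the 0-100 scaling described above.
# Raw importance values are made up, not from a real model.
raw_importance = {"income": 0.42, "age": 0.37, "tenure": 0.10, "region": 0.05}

top = max(raw_importance.values())
scaled = {var: round(100 * val / top) for var, val in raw_importance.items()}
print(scaled)
```

Here "income" becomes 100 and "age" becomes 88, i.e. 88% as important as the top variable, matching the reading described above.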

Enabling and Using the Real-time Scoring (RTS) API in LityxIQ

The LityxIQ RTS (Real-Time Scoring) API allows user applications to request predictions from LityxIQ in real time using an API, without the need to log in or batch large files of data. To use the API, follow these steps:
1) To enable usage of the RTS API on your LityxIQ instance, contact your Lityx representative. You will be provided with an API key specific to your organization. It will be used when making calls to the API to validate and identify your applications. This will be referred to as...
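
As a hypothetical sketch of what an API-key-authenticated request might look like: the endpoint URL, header name, and payload fields below are placeholders, not the documented RTS API contract. Consult your Lityx representative for the actual request format.

```python
# Hypothetical sketch only: the URL, header name, and payload fields are
# placeholders, not the real RTS API contract.
import json
import urllib.request

API_KEY = "your-organization-api-key"          # provided by Lityx (step 1)
URL = "https://example.lityxiq.com/rts/score"  # placeholder endpoint

payload = {"model": "churn_model", "record": {"age": 41, "tenure": 23}}
request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "x-api-key": API_KEY},
)
# response = urllib.request.urlopen(request)   # uncomment with a real endpoint
# print(json.load(response))
```

The key point is simply that the API key travels with every request so LityxIQ can validate and identify the calling application.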

Model Settings: Filter

The Filter tab is used for machine learning models to restrict the data that is used for modeling. For more information on how to use the filter dialog, see the following article: https://support.lityxiq.com/806706-Using-the-Filter-Dialog.

Approving and Implementing a Model

A machine learning model in LityxIQ can be set up to require approvals before being implemented into production. This step is not required, but can help ensure that models are reviewed and approved before production-level data is scored based on them. Once a model has been successfully created in LityxIQ, click on the model to highlight it. Then click or right-click and select Implement & Monitor -> Settings. The first tab, Approvals & Production, is used to set approvers and put...

Editing Scoring Job Settings

To edit the settings of a Scoring Job, select the job and click Edit Settings from the Selected Job menu or from the right click menu.

Settings Tab

In the Settings tab, make the following selections:
- Model - Select the model that will be used to create scores. Each model in the library will also show whether it is in production or not next to the model's name.
- Version - Select the version of the model to use for the scoring job.
  - For models not in production, all mod...

Creating a Scoring Job

A scoring job is the way to run new data through predictive models, attaching the model score to a dataset. Scoring jobs can be run manually in real-time or they may be put on an automated schedule. To create a new scoring job, click on Scoring Jobs in the Predict menu, then click the Create New Scoring Job button. The New Scoring Job dialog box will open. Enter the name you want to give to the scoring job and optionally provide a description of the job. You can also select a library other t...

Creating a Scoring Catalog

To create a new scoring catalog in Predict, first click the Scoring Catalogs option in Predict, then click the Create New Catalog button. This will open the New Scoring Catalog dialog. Provide a name and an optional description for your new catalog. You can also specify into which Dataset Library the scoring catalog will be placed. By default, scoring catalogs become datasets in the "Scoring Catalogs" dataset library. Click OK when ready to continue. The Scoring Catalog Settings dialog wil...

What is a Scoring Catalog?

A scoring catalog is a specialized LityxIQ dataset which holds the output of scoring jobs. That output usually consists of scores and/or groups (such as deciles), along with a unique ID for each scored record. In Predict, scoring catalogs can be created and managed within the Scoring Catalogs area. Because scoring catalogs are simply datasets, once they are created, they can also be viewed and managed in the Data Manager. By default, they will appear in the automatically c...

Scheduling/Automating a Scoring Job

Scoring jobs can be set up to run on an automatic schedule (such as every Saturday at 3:00 am) or to run automatically when the dataset being scored has changed. To schedule the scoring job in one of these two ways, first select the correct scoring job from the list of all scoring jobs. Then, click the Selected Job menu button, click Run Job, and click Schedule It. This will open the Scheduling dialog. See the article https://support.lityxiq.com/882325-Using-the-Scheduling-Dialog for help usi...

Executing a Scoring Job

To execute a scoring job, select the correct Model Library that holds the scoring job, then click on the job you wish to run. In the Selected Job menu or the right click menu, click Run Job -> Run Now. This will immediately run the scoring job. Open the Console to view its progress (https://support.lityxiq.com/953742-Using-the-Console-Window). To put a scoring job on a schedule to run automatically, see https://support.lityxiq.com/126245-SchedulingAutomating-a-Scoring-Job.

Creating a New Predictive Model

To create a new predictive model in LityxIQ, first click the Models option within Predict. Then, follow these steps:
1) Ensure the correct model library is selected. Then, click the Create New Model dropdown and select a model type. The selections available might be different depending on your installation.
2) Provide a name for the new model and optionally provide a description. You can also change the library into which the model is placed. The name provided must not be the same as...

Defining Settings for Building a Model

In Predict, a predictive model is defined based upon a number of options and settings that you provide. To define these settings for a model, first click the Models option within Predict. Then, follow these steps:
1) Click the model you wish to edit, then select Edit Model Build Settings from the Selected Model menu or the right click menu. If a new model was just created, this step is unnecessary, and you can proceed directly to Step 2 below.
2) The Model Build Settings dialog will open. Th...

Model Settings: Data & Variables

The Data & Variables tab is used to specify the dataset and variables which will be the basis for your predictive model. The steps to use this tab are described below.
- Dataset - Select the dataset you wish to use to build the model. The drop-down box will show you a list of all datasets to which you have access, organized by dataset library. The icon allows you to browse the selected dataset (see Step 2 of https://support.lityxiq.com/352376-Browse-a-Dataset for how to use the dataset brows...

Model Settings: Sampling

The Sampling tab allows you to specify how much data will be used to build a model, as well as how that data is to be sampled. Dataset sampling is a good strategy when building a model; it can make the building process faster and more efficient, and often has only a small effect on the final model performance.
- Maximum Rows for Modeling - Specify how many rows will be used to build and validate the model. If the dataset has fewer rows than this value, all of them will be used.
- Samp...

Model Settings: Algorithms

When defining a predictive model, the Algorithms & Settings tab allows you to choose the algorithms that will be used to build the model, and any special options for those algorithms. In LityxIQ, more than one algorithm can be applied to a model, and the same algorithm can be applied multiple times with different settings. The dialog provides a list of each algorithm currently applied to the model. Each algorithm has options associated with it which you can access through the icon set Res...

Changing Algorithm Settings

All algorithms in Predict have several options and settings associated with them. Some are specific to the algorithm itself, while others apply more generally to the modeling process. When editing a model's settings and clicking the Algorithms & Settings tab, you have the option to associate any number of algorithms with the model (see https://support.lityxiq.com/814659-Model-Settings-Algorithms for more information). When you add an algorithm, or edit the settings for an algorithm, the Algori...

Algorithm Settings: Selection & Transformation

The Selection & Transformation tab appears when editing the settings for a modeling algorithm. The options available on the tab will depend upon the algorithm you are editing. The various options are explained below. Many of the options have related advanced settings which can be found in the Advanced Settings tab.
- Transform Dependent Variable - Select the transformation to apply to the dependent variable. The default is no transformation. You can select multiple, in which case each ...

Model Settings: Validation

All predictive models should be validated using specialized statistical techniques. The performance metrics that LityxIQ calculates based on the chosen validation method are a good indication of the results you will get when implementing the model in a true production situation. In LityxIQ, by default, this validation step will always be performed, and the Holdout method will be used. Alternative validation methods can also be set up. These model validation options, available on the Validation ...
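
The Holdout idea mentioned above can be sketched simply: reserve a random fraction of rows for validation and fit on the remainder. The row count and holdout fraction below are assumptions for illustration, not LityxIQ defaults.

```python
# Illustrative sketch of holdout validation: reserve a random fraction of the
# rows for validation, fit on the rest. Numbers are made-up assumptions.
import random

random.seed(7)
rows = list(range(100))          # stand-in for modeling dataset row indices
holdout_fraction = 0.3           # assumed fraction, not a LityxIQ default

shuffled = rows[:]
random.shuffle(shuffled)
cut = int(len(shuffled) * holdout_fraction)
holdout, training = shuffled[:cut], shuffled[cut:]

print(len(training), len(holdout))
```

Because the holdout rows never influence model fitting, metrics computed on them approximate what the model would do on genuinely new production data.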

Model Settings: Missing Value Handling

LityxIQ can be set to automatically handle missing values in the modeling dataset based on the settings on the Missing Value Handling tab. Missing values are often expected in real datasets, so it is important to use appropriate techniques to deal with them during the modeling process.
- Remove Variables with Missing Values - Check this box if you wish to remove variables that have too high a percentage of missing values. See the next option for how to define the cutoff percentage.
- ...
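
A minimal sketch of the two ideas above, under stated assumptions: drop a variable when its missing-value rate exceeds a cutoff, and mean-impute the variables that are kept. The cutoff, column names, and data are made up, and mean imputation is just one common technique, not necessarily what LityxIQ applies.

```python
# Illustrative sketch (assumed cutoff and data): remove columns with too many
# missing values, mean-impute the remaining ones.
missing_cutoff = 0.5   # assumed cutoff, expressed as a fraction of rows

columns = {
    "income": [50.0, None, 60.0, 55.0],   # 25% missing -> keep and impute
    "bonus":  [None, None, None, 10.0],   # 75% missing -> remove variable
}

cleaned = {}
for name, values in columns.items():
    missing_rate = sum(v is None for v in values) / len(values)
    if missing_rate > missing_cutoff:
        continue                          # remove variable entirely
    present = [v for v in values if v is not None]
    mean = sum(present) / len(present)    # impute with the observed mean
    cleaned[name] = [mean if v is None else v for v in values]

print(cleaned)
```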

Model Settings: Output

The Output tab in the Model Build Settings dialog provides additional options for both controlling the model building process and determining how results are stored and reported.
- Prediction Format - Use this option to set the format in which predictions from the model are output. The options available are different depending upon the model type.
- Create Final Models - This drop-down determines whether LityxIQ will build final models for all iterations or just the single "best" itera...

Executing / Running a Model

Executing, or running, a model in Predict will begin the automated iterative process of variable selection, transformation, model building, and validation as defined in your model settings. To execute a model manually in Predict, follow the steps below. If you want to automatically execute the model whenever the dataset changes, or on a date-based schedule, see https://support.lityxiq.com/232618-Scheduling-Execution-of-a-Model.
1) Click Models in the Predict menu.
2) Select the model i...

Scheduling Execution of a Model

In some cases, you may want to run models off-hours, or on a schedule, instead of immediately in real-time. Or you may wish to have a model automatically refreshed anytime the modeling dataset has changed. In these cases, you can put the modeling process on a schedule. Follow these steps:
1) Select the model, then select Execute Model -> Schedule It from the Selected Model menu or from the right click menu.
2) The Scheduling dialog will appear. See https://support.lityxiq.com/882325...

Performance Analysis: Metrics to Analyze: Classification Models

This article describes machine learning performance metrics that are provided for binary classification-style models. In LityxIQ, model types such as Binary Classification, Affinity, Churn, Response, Risk, and others fall into this category.
- Lift - An overall measure of the model’s sorting efficiency for targets; it can range in value from 0 to 100, with 0 being no better than random. If the model were perfect, all of the targets would get assigned higher scores than all of the...
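
One common way to express sorting efficiency on a 0-to-100 scale is a Gini-style statistic, 100 × (2·AUC − 1), where AUC is the probability that a randomly chosen target outscores a randomly chosen non-target. This is offered only as an illustration of the concept; it may not be the exact Lift formula LityxIQ uses, and the scores and labels below are made up.

```python
# Illustrative only: a Gini-style 0-100 "sorting efficiency" measure.
# May differ from LityxIQ's exact Lift formula; data below is made up.
def auc(scores, labels):
    """Probability a random target outscores a random non-target (ties = 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   1,   0,   0,   0]
gini = 100 * (2 * auc(scores, labels) - 1)
print(round(gini, 1))
```

A perfect model (every target outscoring every non-target) gives 100; random scoring gives 0, matching the scale described above.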

Performance Analysis: Metrics to Analyze: Numeric Prediction Models

This article describes machine learning performance metrics that are provided for numeric prediction models. In LityxIQ, model types such as Numeric Prediction, Customer Value, and Number of Visits fall into this category.
Key Terms:
- Absolute value is the magnitude of a number without regard to its sign. So -6 and 6 have the same absolute value.
- Correlation is a number between −1 and +1 representing the linear dependence of two variables, such as the correlation ...
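
The two key terms above combine into familiar metrics for numeric models: mean absolute error (the average absolute difference between actual and predicted values) and the Pearson correlation between actuals and predictions. The sketch below is illustrative, with made-up actual and predicted values.

```python
# Illustrative sketch of two standard numeric-prediction metrics built from
# the key terms above. The actual/predicted values are made up.
import math

actual    = [10.0, 12.0, 9.0, 14.0, 11.0]
predicted = [11.0, 11.5, 8.0, 15.0, 10.5]

# Mean absolute error: average magnitude of the prediction errors
mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def pearson(xs, ys):
    """Pearson correlation: linear dependence on a -1..+1 scale."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(mae, 2), round(pearson(actual, predicted), 3))
```

A good numeric model drives the absolute errors toward 0 while pushing the actual-versus-predicted correlation toward +1.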

Setting Up a Production Model Scoring Process

1. Model Approval & Production

The first step is to approve and implement a production model version for the predictive model. See the article https://support.lityxiq.com/396887-Approving-and-Implementing-a-Model for how to approve and implement the model version you want to use for scoring.

2. Scoring Catalog

The second step is to ensure you have set up an appropriate scoring catalog to hold the scores. The catalog must be set to use primary keys that match the dataset you will be scoring. If...

Viewing Scores in a Scoring Catalog

Once you have set up a scoring catalog and run a scoring job, you will be ready to output data and use your model scores. To view the scores in a scoring catalog, follow the instructions below.
- Navigate to the Scoring Catalogs section of Predict.
- Select the desired scoring catalog and click Browse from the Selected Catalog dropdown (or right click and select Browse from that menu). The dataset browsing window (see https://support.lityxiq.com/352376-Browse-a-D...