How-To

Using the Model Explorer

The Model Explorer allows you to dig deeply into the interrelationships among variables in a predictive machine learning model and their combined effects on model predictions. This opens up greater model interpretability for even the most complex machine learning algorithms, such as XGBoost and DeepNets. To begin exploring a model, select the model from the Available Models list in Predict and click the menu item Model Explorer. When you open the Model Explorer, the interface will look similar...

Performance Analysis: Variable Importance

What is Variable Importance? It is the relative importance of each variable to your model’s predictions. The most impactful variable gets a value of 100, and each subsequent variable is measured relative to that top one. If the second variable has a value of 88, then it is 88% as important as the top variable, or, equivalently, 12% less important. How is Variable Importance computed? It is dependent on the algorithm, and is generally related to how the algorithm’s approach estimates its imp...
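
The rescaling described above can be sketched in a few lines of Python. The raw importance values here are hypothetical, and this is only an illustration of the scaling arithmetic, not LityxIQ's internal computation:

```python
# Hypothetical raw importance values (e.g., as reported by a tree-based algorithm).
raw = {"income": 0.42, "age": 0.37, "tenure": 0.05}

# Rescale so the most impactful variable gets 100
# and every other variable is expressed relative to it.
top = max(raw.values())
relative = {var: round(100 * val / top, 1) for var, val in raw.items()}

print(relative)  # {'income': 100.0, 'age': 88.1, 'tenure': 11.9}
```

Here "age" at 88.1 would read as roughly 88% as important as "income", matching the interpretation above.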

Enabling and Using the Real-time Scoring (RTS) API in LityxIQ

The LityxIQ RTS (Real-Time Scoring) API allows user applications to request predictions from LityxIQ in real time via an API, without the need to log in or batch large files of data. To use the API, follow these steps: - To enable the RTS API on your LityxIQ instance, contact your Lityx representative. You will be provided with an API key specific to your organization. It will be used when making calls to the API to validate and identify your applications. This will be referred...
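
As a rough sketch of what a client application might look like, the Python below builds an authenticated JSON request carrying the organization API key. The endpoint URL, header name, and request fields are hypothetical placeholders, not the documented RTS interface; your Lityx representative will supply the actual details when RTS is enabled:

```python
import json
import urllib.request

# NOTE: The endpoint URL, header name, and request fields below are hypothetical
# placeholders; your Lityx representative will supply the actual specification.
API_KEY = "your-organization-api-key"                # provided when RTS is enabled
ENDPOINT = "https://example.lityxiq.com/rts/score"   # hypothetical URL

def build_request(record: dict) -> urllib.request.Request:
    """Build an authenticated JSON POST request for one record to be scored."""
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json", "x-api-key": API_KEY},
    )

def score_record(record: dict) -> dict:
    """Send the record to the scoring endpoint and return the parsed response."""
    with urllib.request.urlopen(build_request(record)) as resp:
        return json.loads(resp.read())
```

The point of the sketch is the shape of the exchange: one small JSON record in, one prediction back, with the API key identifying your application on every call.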

Model Settings: Filter

The Filter tab is used for machine learning models to restrict the data used for modeling. For more information on how to use the filter dialog, see the following article: https://support.lityxiq.com/806706-Using-the-Filter-Dialog.

Approving and Implementing a Model

Before a model may be used for scoring, it first needs to be put into production. Optionally, approvers can be set up; these may be different individuals from the modeler. Though it is not a requirement that a model be approved before its use in scoring jobs, approval does guarantee that a reviewed and approved version of the model is always the one being used. Once a model has been successfully created in LityxIQ, click on the model to highlight it. Then click on Selected Model and choose Approvals and I...

Editing Scoring Job Settings

To edit the settings of a particular scoring job, first select the job to be edited from the comprehensive list of scoring jobs. Next, click the Selected Job menu, and then click Edit Settings. Settings Tab: In the Settings tab, make the following selections: - Model - Select the model that will be used to create scores. Each model in the list will also show, next to its name, whether or not it is in production. - Version - Select the version of the model to use for t...

Creating a Scoring Job

A scoring job is the way to run new data through predictive models, attaching the model score to a dataset. Scoring jobs can be run manually in real-time or they may be put on an automated schedule. To create a new scoring job, click on Scoring Jobs in the Predict menu, then click the Create New Scoring Job button. The New Scoring Job dialog box will open. Enter the name you want to give to the scoring job and optionally provide a description of the job. These will both appear in the list ...

Creating a Scoring Catalog

To create a new scoring catalog in Predict, first click the Scoring Catalogs link, then click the Create New Catalog button. This will open the New Scoring Catalog dialog. Provide a name and an optional description for your new catalog. You can also specify into which Dataset Library the scoring catalog will be placed. By default, scoring catalogs become datasets in the "Scoring Catalogs" dataset library. Click OK when ready to continue. The Scoring Catalog Settings dialog will open. ...

What is a Scoring Catalog?

A scoring catalog is a specialized LityxIQ dataset that holds the output of scoring jobs. That output usually consists of scores and/or groups (such as deciles), along with a unique ID for each scored record. In Predict, scoring catalogs can be created and managed within the Scoring Catalogs view. Because scoring catalogs are simply datasets, once they are created, they can also be viewed and managed in the Data Manager. By default, they will appear in the automatically c...

Scheduling/Automating a Scoring Job

Scoring jobs can be set up to run on an automatic schedule (such as every Saturday at 3:00 am) or to run automatically when the dataset being scored has changed. To schedule the scoring job in one of these two ways, first select the correct scoring job from the list of all scoring jobs. Then, click the Selected Job menu button, click Run Job, and click Schedule It. This will open the Scheduling dialog. See the article https://support.lityxiq.com/882325-Using-the-Scheduling-Dialog for help usi...

Executing a Scoring Job

To execute (run) a scoring job, select the correct Model Library that holds the scoring job, then click on the job you wish to run. Click the Selected Job menu button, then Run Job, and finally Run Now. This will immediately run the scoring job. Open the Console to view its progress (https://support.lityxiq.com/953742-Using-the-Console-Window).

Creating a New Predictive Model

To create a new predictive model in LityxIQ, first click the Models link within Predict. Then, follow these steps: 1) Ensure the correct model library is selected. Then, click the Create New Model dropdown and select a model type. The selections available might be different depending on your installation. 2) Provide a name for the new model and optionally provide a description. The name provided must not be the same as an existing model already defined in the current project. Click O...

Defining Settings for Building a Model

In Predict, a predictive model is defined based upon a number of options and settings that you provide. To define these settings for a model, first click the Models link within Predict. Then, follow these steps: 1) Select the model in the available models list (it will highlight blue-gray), then select Edit Model Build Settings from the Selected Model menu. If a new model was just created, this step is unnecessary, and you can proceed directly to Step 2 below. 2) The Model Build Settings dialo...

Model Settings: Data & Variables

The Data & Variables tab is used to specify the dataset and variables which will be the basis for your predictive model. The steps to use this tab are described below. Dataset - Select the dataset you wish to use to build the model. The drop-down box will show you a list of all datasets to which you have access, organized by dataset library. The icon allows you to browse the selected dataset (see Step 2 of https://support.lityxiq.com/352376-Browse-a-Dataset)...

Model Settings: Sampling

The Sampling tab, located in the Model Build Settings dialog, allows you to specify how much data will be used to build a model, as well as how that data is to be sampled. Dataset sampling is a good strategy when building a model; it can make the building process faster and more efficient, and it often has only a small effect on final model performance. - Maximum Rows for Modeling - Specify how many rows will be used to build and validate the model. If the dataset has fewer rows ...
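
Conceptually, the Maximum Rows for Modeling cap behaves like a simple random sample of the dataset. The sketch below is only an illustration of the idea, not LityxIQ's sampling code:

```python
import random

def sample_rows(rows, max_rows, seed=42):
    """Return at most max_rows rows, drawn uniformly at random.
    If the dataset already has fewer rows than the cap, keep them all."""
    if len(rows) <= max_rows:
        return list(rows)
    return random.Random(seed).sample(rows, max_rows)

data = list(range(10_000))           # stand-in for a large modeling dataset
subset = sample_rows(data, 2_500)    # cap modeling data at 2,500 rows
print(len(subset))  # 2500
```

Because the sample is drawn uniformly, the subset remains representative of the full dataset, which is why the cap usually has only a small effect on final model performance.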

Model Settings: Algorithms

When defining a predictive model, the Algorithms & Settings tab allows you to choose the algorithms that will be used to build the model, and any special options for those algorithms. In LityxIQ, more than one algorithm can be applied to a model, the same algorithm can be applied multiple times with different settings, or a combination of these. The dialog provides a list of each algorithm currently applied to the model. Each algorithm has options associated with it: - Settings - The Se...

Changing Algorithm Settings

All algorithms in Predict have several options and settings associated with them. Some are specific to the statistical algorithm itself, while others apply more generally to the modeling process. When editing a model's settings and clicking the Algorithms & Settings tab, you have the option to associate any number of algorithms with the model (see https://support.lityxiq.com/814659-Model-Settings-Algorithms for more information). When an algorithm is added, you may delete it from the list usin...

Algorithm Settings: Selection & Transformation

The Selection & Transformation tab appears when editing the settings for a modeling algorithm. The options available on the tab will depend upon the algorithm you are editing. The various options are explained below. Many of the options have related advanced settings which can be found in the Advanced Settings tab. - Bin Categorical Predictors - If this option is turned on, Predict will search for optimal ways to bin (i.e., combine the categories) for categorical predictors. Turn it on ...

Model Settings: Validation

All predictive models should be validated using specialized statistical techniques. In LityxIQ, this validation step is always performed by default. Alternative validation methods can also be set up. These model validation options, available on the Validation tab, are described below. Summary of validation methods (Validation Method | Definition | Common Options | Unique Options): - Holdout - Modeling is performed on the Training Set Pct portion, and the remaining portion is used for validat...
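
As an illustration of the Holdout method described above, the sketch below splits a dataset once into a training portion (the Training Set Pct) and a holdout portion used for validation. It is a conceptual example, not LityxIQ's implementation:

```python
import random

def holdout_split(rows, training_pct=0.7, seed=1):
    """Shuffle the rows, then split them into a training portion
    (training_pct of the data) and a holdout portion for validation."""
    shuffled = list(rows)
    random.Random(seed).shuffle(shuffled)
    cut = round(len(shuffled) * training_pct)  # round to the nearest whole row
    return shuffled[:cut], shuffled[cut:]

train, holdout = holdout_split(range(1_000), training_pct=0.7)
print(len(train), len(holdout))  # 700 300
```

The key property is that the holdout rows never influence model fitting, so performance measured on them estimates how the model will behave on genuinely new data.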

Model Settings: Missing Value Handling

LityxIQ can be set to automatically handle missing values in the modeling dataset based on the settings on the Missing Value Handling tab. Missing values are often expected in real datasets, so it is important to use appropriate techniques to deal with them during the modeling process. - Remove Variables with Missing Values - Check this box if you wish to remove variables that have too high a percentage of missing values. See the next option for how to define the cutoff percentage. ...
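
The Remove Variables with Missing Values option and its cutoff percentage can be pictured as follows. The data and the 40% cutoff are hypothetical, and the sketch is only conceptual, not LityxIQ's implementation:

```python
# Columns represented as lists, with None marking a missing value
# (both the data and the 40% cutoff are hypothetical).
data = {
    "age":    [34, None, 51, 29, None, None],
    "income": [55_000, 62_000, None, 48_000, 71_000, 39_000],
    "state":  ["NY", "CA", "TX", None, "FL", "WA"],
}

CUTOFF_PCT = 40  # drop any variable missing in more than 40% of rows

def missing_pct(values):
    """Percentage of entries in the column that are missing."""
    return 100 * sum(v is None for v in values) / len(values)

kept = {col: vals for col, vals in data.items() if missing_pct(vals) <= CUTOFF_PCT}
print(sorted(kept))  # ['income', 'state'] -- 'age' is 50% missing, so it is dropped
```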

Model Settings: Output

The Output tab in the Model Build Settings dialog provides additional options for both controlling the model building process and determining how results are stored and reported. - Prediction Format - Use this option to set the format in which predictions from the model are output. The options available differ depending upon the model type. - Create Final Models - This drop-down determines whether Predict will build final models for all iterations or just the single "best" i...

Executing / Running a Model

Executing, or running, a model in Predict will begin the automated iterative process of variable selection, transformation, model building, and validation as defined in your model settings. To execute a model in Predict, follow these steps: 1) Click Models in the Predict menu. 2) Select the model in the available models list and click Execute Model -> Execute Now from the Selected Model menu. 3) Click ‘Yes’ to confirm. 4) To watch the loading process and any related mess...

Scheduling Execution of a Model

In some cases, you may want to run models off-hours, or on a schedule, instead of immediately in real-time. Or you may wish to have a model automatically refreshed anytime the modeling dataset has changed. In these cases, you can put the modeling process on a schedule. First make sure you have clicked the Predictive Models link in the Predict Links menu. Then follow these steps: 1) Select the model in the Available Models list, then select Execute Model -> Schedule It from the Selected Mod...

Performance Analysis: Metrics to Analyze: Classification Models

This article describes the machine learning performance metrics provided for binary classification-style models. In LityxIQ, model types such as Binary Classification, Affinity, Churn, Response, Risk, and others fall into this category. Lift - An overall measure of the model’s efficiency at sorting targets; it can range in value from 0 to 100, with 0 being no better than random. If the model were perfect, all of the targets would get assigned higher scores than all of the...
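
One common formulation consistent with the Lift description above is the Gini coefficient, 2 × AUC − 1, scaled to the 0 to 100 range: random sorting scores 0 and perfect sorting (every target outscoring every non-target) scores 100. Whether this matches LityxIQ's exact computation is an assumption; the sketch below is illustrative only:

```python
def auc(scores, targets):
    """Probability that a randomly chosen target outscores a randomly chosen
    non-target (ties count half) -- i.e., the area under the ROC curve."""
    pos = [s for s, t in zip(scores, targets) if t == 1]
    neg = [s for s, t in zip(scores, targets) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores and actual target flags for six records.
scores  = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
targets = [1,   1,   0,   1,   0,   0]

lift = 100 * (2 * auc(scores, targets) - 1)   # 0 = random, 100 = perfect sorting
print(round(lift, 1))  # 77.8
```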

Performance Analysis: Metrics to Analyze: Numeric Prediction Models

This article describes the machine learning performance metrics provided for numeric prediction models. In LityxIQ, model types such as Numeric Prediction, Customer Value, and Number of Visits fall into this category. Key Terms: - Absolute value is the magnitude of a number without regard to its sign, so -6 and 6 have the same absolute value. - Correlation is a number between −1 and +1 representing the linear dependence of two variables, such as the correlation ...
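
The two key terms above can be illustrated directly. The numbers below are made up; the sketch computes a mean absolute error (built from the absolute values of the prediction errors) and the Pearson correlation between actual and predicted values:

```python
import math

# Hypothetical actual vs. predicted values from a numeric prediction model.
actual    = [10.0, 12.0, 9.0, 15.0, 11.0]
predicted = [11.0, 11.5, 8.0, 16.0, 10.0]

# Absolute value: |-6| and |6| are both 6 -- magnitude without regard to sign.
# Mean absolute error averages the absolute prediction errors.
errors = [p - a for p, a in zip(predicted, actual)]
mae = sum(abs(e) for e in errors) / len(errors)

def correlation(x, y):
    """Pearson correlation: a number between -1 and +1 measuring linear dependence."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(round(mae, 2), round(correlation(actual, predicted), 3))  # 0.9 0.954
```

A correlation near +1, as here, indicates the predictions track the actual values closely.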

Setting Up a Production Model Scoring Process

1. Model Approval & Production - The first step is to approve and implement a production model version for the predictive model. See the article https://support.lityxiq.com/396887-Approving-and-Implementing-a-Model for how to approve and implement the model version you want to use for scoring. 2. Scoring Catalog - The second step is to ensure you have set up an appropriate scoring catalog to hold the scores. The catalog must be set to use primary keys that match the dataset you will be scoring. If...

Viewing Scores in a Scoring Catalog

Once you have set up a scoring catalog and run a scoring job, you will be ready to output data and use your model scores. 1. To view the scores in a scoring catalog, follow the instructions below. - Navigate to the Scoring Catalogs section of Predict. - Select the desired scoring catalog, then select Browse from the Selected Catalog dropdown (or right-click and select Browse from that menu). The dataset browsing window (see https://support.lityxiq.com/352376-Browse-a-Data...