Editing Scoring Job Settings

To edit the settings of a Scoring Job, select the job and click Edit Settings from the Selected Job menu or from the right-click menu.


Settings Tab

In the Settings tab, make the following selections:

  • Model - Select the model that will be used to create scores.  Each model in the library is listed with an indication, next to its name, of whether or not it is in production.
  • Version - Select the version of the model to use for the scoring job. 
    • For models not in production, all model versions and iterations that have successfully executed will be shown.
    • For models that are in production, only the production version will be shown; no other version can be selected.  The production version of the model will always be used, even as new versions go into production over time.
    • For models that are in production with the Most Recent and Best Version option implemented, that will be the only option available.  The version and iteration of the model used for the scoring job will be determined dynamically when the job is executed.
  • Dataset to Score - Select the dataset you wish to run through the model.  For example, this may be a fresh prospect database.  The dataset selected must have the same primary key variables that were used to create the scoring catalog into which the scores will be placed.
  • Scoring Catalog - Select the scoring catalog into which the scores will be placed.  See https://support.lityxiq.com/182622-Creating-a-Scoring-Catalog for more information on scoring catalogs.
  • Grouping Method - This will define the method for how scores are grouped into segments.
    • Use Fixed Boundaries from Model Build - this option will use the boundaries that were created when the model was built.  In this case, the boundaries will remain fixed from one scoring run to the next, regardless of the size or distribution of the scoring dataset.  This option will not guarantee that your groups are equally populated, because the scoring dataset distribution of records may be different from the original modeling dataset.
    • Use Boundaries from Scoring Data - this option will use boundaries created on the fly from the scoring dataset itself.  It guarantees that each group is equally populated (except for ties at the boundaries), but you lose the ability to track changes in decile distribution over time, or to track the degradation of the scoring dataset compared to the original modeling dataset.
    • Classification Segment - this option is only available for certain models, such as trees.  It will provide a code for the tree node the record fell into.
    • Cluster - for unsupervised clustering models, this is the only option available.
    • Do Not Assign Scores Into Groups - this option will skip the grouping process.
  • Number Groups - If you selected one of the first two grouping methods above, this option will determine how many grouped segments are created.  Common selections are Deciles (10 groups), Vigintiles (20 groups), or Percentiles (100 groups).
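The practical difference between the two boundary methods can be sketched in a few lines of Python (a conceptual illustration using pandas, not LityxIQ's implementation; the score distributions are made up):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
model_scores = pd.Series(rng.normal(0.5, 0.15, 10_000))   # scores from the original model build
scoring_scores = pd.Series(rng.normal(0.6, 0.15, 1_000))  # scores on a new dataset (distribution has shifted)

# "Use Fixed Boundaries from Model Build": decile edges computed once, at build time.
fixed_edges = np.quantile(model_scores, np.linspace(0, 1, 11))
fixed_edges[0], fixed_edges[-1] = -np.inf, np.inf          # cover out-of-range new scores
fixed_groups = pd.cut(scoring_scores, bins=fixed_edges, labels=range(1, 11))

# "Use Boundaries from Scoring Data": edges recomputed from the scoring run itself.
dynamic_groups = pd.qcut(scoring_scores, q=10, labels=range(1, 11))

# Fixed boundaries: groups are uneven because the scoring distribution shifted upward.
print(fixed_groups.value_counts().sort_index())
# Scoring-data boundaries: groups are equal by construction.
print(dynamic_groups.value_counts().sort_index())
```

Because the scoring distribution in this sketch is shifted relative to the build data, the fixed-boundary groups are unequal (more records land in the top groups), while the scoring-data boundaries always yield equal groups; that is exactly the trade-off described above.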


Advanced Tab

In the Advanced tab, make the following selections:

  • Name - This is where you determine the variable name for the scores as they will appear in the scoring catalog.  You can provide template codes that will be replaced automatically with appropriate information when creating the variable name.  The template codes you can use are:
    • [y] to represent the type of value.  It will be replaced with either the string "Score" or "Decile" (note: currently "Decile" is used even if you selected another grouping type).  If you selected the option to output groups on the Settings tab, you must use this template code to distinguish the two variables that will be placed in the scoring catalog (one representing scores, and one representing the groups).
    • [m] for the model name
    • [d] for a date stamp
    • [v] for the model version
    • [t] for a timestamp
  • Overwrite - two options are available:
    • Overwrite existing score with same name - If you select this option, scoring output variables that have the same name as variables already in the scoring catalog will overwrite those variables.  Use this option if you have no need to store multiple versions of scores over time; it also helps keep your scoring catalog from growing too wide.  Note that if you used the template codes [d] or [t] in the Name, the variable name will likely be unique for each scoring run, making this overwrite option ineffective.
    • Do Not Overwrite - If you select this option, scoring output variables with the same name as variables already in the scoring catalog will not be allowed to overwrite them; instead, the scoring job will produce an error.  This option can serve as a safety net to ensure existing scores are never overwritten.
  • Delete Old Records - Check this box to delete any records in the scoring catalog that do not match ID's of records currently being scored.  This helps to keep the scoring catalog clean and up-to-date with only the latest records (e.g., only current prospects or customers).  However, you may want to keep a history of all prior scores, even for records that are no longer actively being scored.  In this case, leave this box unchecked.
  • Zero Records is Ok - Check this box if an empty scoring dataset should not produce a scoring job error.  If left unchecked, zero records will produce a scoring job error.  Note that if this box is checked and the scoring dataset has no records, the other settings above are still applied (for example, if Delete Old Records is checked, the scoring catalog will wind up empty).
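As an illustration of how the Name template codes expand, consider the following sketch (hypothetical code; the exact date and time formats LityxIQ substitutes for [d] and [t] may differ):

```python
from datetime import datetime

def expand_name(template, model, version, value_type, now=None):
    """Expand a score-name template. The strftime formats for [d] and [t]
    are assumptions for illustration, not LityxIQ's actual formats."""
    now = now or datetime.now()
    return (template
            .replace("[m]", model)          # model name
            .replace("[v]", version)        # model version
            .replace("[y]", value_type)     # "Score" or "Decile"
            .replace("[d]", now.strftime("%Y%m%d"))   # date stamp
            .replace("[t]", now.strftime("%H%M%S")))  # timestamp

print(expand_name("[m]_[y]_v[v]_[d]", "ProspectModel", "3", "Score",
                  now=datetime(2024, 1, 15, 9, 30, 0)))
# → ProspectModel_Score_v3_20240115
```

A template like this produces a unique variable name per day, which is why the Overwrite setting has no effect when [d] or [t] is used.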
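The record-level effect of Delete Old Records, combined with overwriting, can be sketched as follows (a conceptual pandas illustration, not LityxIQ internals; the Model_Score column name is made up):

```python
import pandas as pd

# Existing scoring catalog: IDs 1-3 were scored previously.
catalog = pd.DataFrame({"Model_Score": [0.2, 0.5, 0.9]},
                       index=pd.Index([1, 2, 3], name="id"))

# Current scoring run: ID 1 is no longer in the scoring dataset, ID 4 is new.
new_scores = pd.DataFrame({"Model_Score": [0.6, 0.8, 0.4]},
                          index=pd.Index([2, 3, 4], name="id"))

# Delete Old Records checked: drop catalog rows whose IDs were not scored this run.
catalog = catalog.loc[catalog.index.intersection(new_scores.index)]

# Overwrite existing score with same name: fresh scores replace the old values,
# and new IDs are appended.
catalog = new_scores.combine_first(catalog)

print(list(catalog.index))   # ID 1 was deleted; ID 4 was added
```

With Delete Old Records unchecked, ID 1 would remain in the catalog with its prior score, preserving history for records no longer being scored.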


Variable Mapping Tab

In most cases, you do not need to map variables between the model and the scoring dataset.  But if the variable names in the two do not align, the Variable Mapping tab can be used to manually match model variables to dataset variables.  Click the Map Variables box to get started.  The left column shows the names of the variables in the model.  Each dropdown box on the right lists all variables in the scoring dataset.  Adjust the selections as needed to ensure the variables align correctly.
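Conceptually, the mapping captured on this tab is a rename from dataset variable names to model variable names, as in this hypothetical sketch (all variable names are made up):

```python
import pandas as pd

# Variable names the model was trained with:
model_vars = ["age", "income", "tenure_months"]

# The scoring dataset uses different column names:
scoring_df = pd.DataFrame({
    "cust_age": [34, 51],
    "hh_income": [62000, 88000],
    "months_as_customer": [12, 87],
})

# A manual mapping, analogous to the dropdown selections on the tab:
mapping = {"cust_age": "age",
           "hh_income": "income",
           "months_as_customer": "tenure_months"}
aligned = scoring_df.rename(columns=mapping)

print(list(aligned.columns))   # now matches the model's variable names
```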


Filter Tab

The Filter tab can be used to specify a subset of the records in the dataset to be used for scoring. See https://support.lityxiq.com/806706-Using-the-Filter-Dialog for more information on using the Filter dialog.


When done, click Save to save your scoring job settings, or click Cancel to discard your changes.