Derived Datasets

A Derived Dataset in LityxIQ is a dataset created from one or more other datasets.  It is defined by specifying up to seven steps, which are processed in sequence.  Every step except the first is optional.
  1. Select an incoming dataset or datasets.  These will be the source data that goes through the further processing steps.  Multiple datasets can be selected, in which case they are stacked on top of each other (analogous to a UNION query operation, as would be familiar to SQL users) to start the data processing.
  2. Select a dataset or datasets to join (or merge).  A join is a method for merging two datasets together based on matching the values of a selected field or fields.  Joins are processed in sequence.
  3. Perform a transpose operation if desired.  A transpose operation in LityxIQ translates data situated in multiple columns into data across multiple rows.
  4. Define new fields.  New fields can be defined through mathematical, date, string, or other formulas.  This will add new fields to the dataset.  This also includes New Field Aggregations.
  5. Select a filter.  This is a method for subsetting the dataset based on logical conditions or other rules, and it determines which rows survive as processing continues.
  6. Create aggregation rules.  This allows you to do a final aggregation of the dataset according to one or more fields and create new metrics computed during the aggregation step.  Typically an aggregation decreases the number of rows in the resulting dataset, since many rows are combined into a single row containing summarized values.
  7. Finalization step.  The finalization step allows you to optionally re-order or remove fields, create quality control rules, and assign a row number variable to the dataset.
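LityxIQ itself is configured through its point-and-click interface, but the seven steps above map closely onto familiar data-frame operations.  As an illustration only (the dataset names, fields, and values below are invented for the example), the full pipeline can be sketched in pandas:

```python
import pandas as pd

# Hypothetical source data standing in for the incoming datasets.
sales_q1 = pd.DataFrame({"store": ["A", "B"], "jan": [100, 80], "feb": [120, 90]})
sales_q2 = pd.DataFrame({"store": ["C"], "jan": [60], "feb": [70]})
regions = pd.DataFrame({"store": ["A", "B", "C"],
                        "region": ["East", "East", "West"]})

# Step 1: stack the incoming datasets on top of each other (UNION-style).
df = pd.concat([sales_q1, sales_q2], ignore_index=True)

# Step 2: join another dataset by matching on a selected field.
df = df.merge(regions, on="store", how="left")

# Step 3: transpose -- move data from multiple columns into multiple rows.
df = df.melt(id_vars=["store", "region"], var_name="month", value_name="sales")

# Step 4: define a new field via a formula.
df["sales_k"] = df["sales"] / 1000

# Step 5: filter -- keep only rows meeting a logical condition.
df = df[df["sales"] >= 70]

# Step 6: aggregate by one or more fields, computing summary metrics.
agg = df.groupby("region", as_index=False).agg(total_sales=("sales", "sum"))

# Step 7: finalize -- reorder/remove fields and assign a row number.
agg = agg[["total_sales", "region"]]
agg["row_num"] = range(1, len(agg) + 1)

print(agg)
```

Each line corresponds to one step of the sequence, and skipping a line (just as skipping an optional step in LityxIQ) leaves the rest of the pipeline intact.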

Creating a derived dataset is discussed in more detail in this article.