ComparisonMetrics Type

Comparison metrics that can be derived from a binary confusion matrix

Record fields

Accuracy : float
    accuracy (ACC)

BalancedAccuracy : float
    balanced accuracy (BA)

DiagnosticOddsRatio : float
    Diagnostic odds ratio (DOR)

F1 : float
    F1 score (harmonic mean of precision and sensitivity)

FN : float
    false negatives: condition-positive labels incorrectly predicted as negatives

FP : float
    false positives: condition-negative labels incorrectly predicted as positives

FallOut : float
    fall-out or false positive rate (FPR)

FalseDiscoveryRate : float
    false discovery rate (FDR)

FalseOmissionRate : float
    false omission rate (FOR)

FowlkesMallowsIndex : float
    Fowlkes–Mallows index (FM)

Informedness : float
    informedness or bookmaker informedness (BM)

Markedness : float
    markedness (MK) or deltaP (Δp)

Missrate : float
    miss rate or false negative rate (FNR)

N : float
    condition negatives: all negative labels in the sample

NegativeLikelihoodRatio : float
    negative likelihood ratio (LR-)

NegativePredictiveValue : float
    negative predictive value (NPV)

P : float
    condition positives: all positive labels in the sample

PhiCoefficient : float
    phi coefficient (φ or rφ), also known as the Matthews correlation coefficient (MCC)

PositiveLikelihoodRatio : float
    positive likelihood ratio (LR+)

Precision : float
    precision or positive predictive value (PPV)

Prevalence : float
    prevalence

PrevalenceThreshold : float
    prevalence threshold (PT)

SampleSize : float
    all observations in the sample

Sensitivity : float
    sensitivity, recall, hit rate, or true positive rate (TPR)

Specificity : float
    specificity, selectivity, or true negative rate (TNR)

TN : float
    true negatives: correctly predicted condition-negative labels

TP : float
    true positives: correctly predicted condition-positive labels

ThreatScore : float
    threat score (TS) or critical success index (CSI)

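To illustrate how the count fields and the derived rates hang together, here is a minimal sketch. The data are invented and the open statement assumes the type sits in the FSharp.Stats.Testing namespace; it builds a ComparisonMetrics record from boolean predictions with ComparisonMetrics.ofBinaryPredictions (documented under the static members below) and reads a few fields:

```fsharp
open FSharp.Stats.Testing // assumed namespace

// hypothetical ground truth and hard (boolean) predictions
let actual    = [ true; true; true; false; false; false ]
let predicted = [ true; true; false; false; false; true ]

let cm = ComparisonMetrics.ofBinaryPredictions (actual, predicted)

// the count fields relate as P = TP + FN, N = TN + FP, SampleSize = P + N
printfn "TP %.0f  FN %.0f  FP %.0f  TN %.0f" cm.TP cm.FN cm.FP cm.TN
printfn "ACC %.2f  TPR %.2f  TNR %.2f" cm.Accuracy cm.Sensitivity cm.Specificity
```
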
Static members

ComparisonMetrics.binaryThresholdMap (actual, predictions)

Parameters:
    actual : seq<bool>
    predictions : seq<float>

Returns: (float * ComparisonMetrics)[]

ComparisonMetrics.binaryThresholdMap (actual, predictions, thresholds)

Parameters:
    actual : seq<bool>
    predictions : seq<float>
    thresholds : seq<float>

Returns: (float * ComparisonMetrics)[]

ComparisonMetrics.binaryThresholdMap tm

Parameters:
    tm : (float * BinaryConfusionMatrix)[]

Returns: (float * ComparisonMetrics)[]

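A sketch of scanning thresholds over prediction scores; the labels and scores are invented, and the namespace in the open statement is an assumption:

```fsharp
open FSharp.Stats.Testing // assumed namespace

// hypothetical ground truth and classifier scores
let actual      = [ true; true; false; true; false; false ]
let predictions = [ 0.9;  0.7;  0.6;   0.4;  0.3;   0.1  ]

// one ComparisonMetrics record per evaluated threshold
let metricsPerThreshold : (float * ComparisonMetrics)[] =
    ComparisonMetrics.binaryThresholdMap (actual, predictions)

metricsPerThreshold
|> Array.iter (fun (threshold, cm) ->
    printfn "threshold %.2f -> TPR %.2f, FPR %.2f" threshold cm.Sensitivity cm.FallOut)
```

The overload that additionally takes thresholds restricts the scan to the supplied cut-offs, and the overload taking (float * BinaryConfusionMatrix)[] turns an already computed threshold map of confusion matrices into metrics.
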
ComparisonMetrics.calculateAccuracy tp tn samplesize

Parameters:
    tp : float
    tn : float
    samplesize : float

Returns: float

Calculates the accuracy (ACC).

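The calculate* members are curried helpers over plain float counts, so single metrics can be computed without building a full record. A sketch with made-up counts, using accuracy = (TP + TN) / sample size:

```fsharp
open FSharp.Stats.Testing // assumed namespace

// hypothetical counts: 90 TP and 80 TN out of 200 observations
let acc = ComparisonMetrics.calculateAccuracy 90.0 80.0 200.0
printfn "ACC = %.2f" acc // (90 + 80) / 200 = 0.85
```
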
ComparisonMetrics.calculateBalancedAccuracy tp p tn n

Parameters:
    tp : float
    p : float
    tn : float
    n : float

Returns: float

Calculates the balanced accuracy (BA).

ComparisonMetrics.calculateDiagnosticOddsRatio tp tn fp fn p n

Parameters:
    tp : float
    tn : float
    fp : float
    fn : float
    p : float
    n : float

Returns: float

Calculates the Diagnostic odds ratio (DOR).

ComparisonMetrics.calculateF1 tp fp fn

Parameters:
    tp : float
    fp : float
    fn : float

Returns: float

Calculates the F1 score (harmonic mean of precision and sensitivity).

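For the F1 score, the harmonic mean of precision and sensitivity reduces to 2·TP / (2·TP + FP + FN). A short worked example with invented counts:

```fsharp
open FSharp.Stats.Testing // assumed namespace

// hypothetical counts
let tp, fp, fn = 8.0, 2.0, 4.0

// precision = 8/10 = 0.8, sensitivity = 8/12 ≈ 0.667
// F1 = 2 * 8 / (2 * 8 + 2 + 4) = 16 / 22 ≈ 0.727
let f1 = ComparisonMetrics.calculateF1 tp fp fn
printfn "F1 = %.3f" f1
```
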
ComparisonMetrics.calculateFallOut fp n

Parameters:
    fp : float
    n : float

Returns: float

Calculates the fall-out or false positive rate (FPR).

ComparisonMetrics.calculateFalseDiscoveryRate fp tp

Parameters:
    fp : float
    tp : float

Returns: float

Calculates the false discovery rate (FDR).

ComparisonMetrics.calculateFalseOmissionRate fn tn

Parameters:
    fn : float
    tn : float

Returns: float

Calculates the false omission rate (FOR).

ComparisonMetrics.calculateFowlkesMallowsIndex tp fp p

Parameters:
    tp : float
    fp : float
    p : float

Returns: float

Calculates the Fowlkes–Mallows index (FM).

ComparisonMetrics.calculateInformedness tp p tn n

Parameters:
    tp : float
    p : float
    tn : float
    n : float

Returns: float

Calculates the informedness or bookmaker informedness (BM).

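Informedness (Youden's J) is sensitivity + specificity − 1, which is why the helper takes the positive and negative totals alongside TP and TN. A sketch with assumed counts:

```fsharp
open FSharp.Stats.Testing // assumed namespace

// hypothetical counts: 40 of 50 positives and 45 of 50 negatives recovered
let bm = ComparisonMetrics.calculateInformedness 40.0 50.0 45.0 50.0
printfn "BM = %.2f" bm // TPR 0.8 + TNR 0.9 - 1.0 = 0.7
```
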
ComparisonMetrics.calculateMarkedness tp fp tn fn

Parameters:
    tp : float
    fp : float
    tn : float
    fn : float

Returns: float

Calculates the markedness (MK) or deltaP (Δp).

ComparisonMetrics.calculateMissrate fn p

Parameters:
    fn : float
    p : float

Returns: float

Calculates the miss rate or false negative rate (FNR).

ComparisonMetrics.calculateMultiLabelROC (actual, predictions)

Parameters:
    actual : 'a[]
    predictions : ('a * float[])[]

Returns: Map<string, (float * float)[]>

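A sketch of the multi-label case. The layout of predictions is an assumption here: each entry pairs a label with that label's predicted scores, one score per observation and in the same order as actual; labels and scores are invented:

```fsharp
open FSharp.Stats.Testing // assumed namespace

// hypothetical three-observation, two-class problem
let actual = [| "A"; "B"; "A" |]

// per label: one predicted score per observation (assumed layout)
let predictions =
    [|
        "A", [| 0.8; 0.3; 0.6 |]
        "B", [| 0.2; 0.7; 0.4 |]
    |]

// one ROC curve (as coordinate pairs) per label, keyed by the label's string representation
let rocPerLabel : Map<string, (float * float)[]> =
    ComparisonMetrics.calculateMultiLabelROC (actual, predictions)

rocPerLabel
|> Map.iter (fun label points -> printfn "%s: %i ROC points" label points.Length)
```
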
ComparisonMetrics.calculateNegativeLikelihoodRatio fn p tn n

Parameters:
    fn : float
    p : float
    tn : float
    n : float

Returns: float

Calculates the Negative likelihood ratio (LR-).

ComparisonMetrics.calculateNegativePredictiveValue tn fn

Parameters:
    tn : float
    fn : float

Returns: float

Calculates the negative predictive value (NPV).

ComparisonMetrics.calculatePhiCoefficient tp tn fp fn

Parameters:
    tp : float
    tn : float
    fp : float
    fn : float

Returns: float

Calculates the phi coefficient (φ or rφ), also known as the Matthews correlation coefficient (MCC).

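The Matthews correlation coefficient is conventionally defined as (TP·TN − FP·FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN)). Assuming that definition, a sketch with invented counts:

```fsharp
open FSharp.Stats.Testing // assumed namespace

// hypothetical counts
let tp, tn, fp, fn = 45.0, 40.0, 10.0, 5.0

let mcc = ComparisonMetrics.calculatePhiCoefficient tp tn fp fn
// (45*40 - 10*5) / sqrt(55 * 50 * 50 * 45) ≈ 0.704
printfn "MCC = %.3f" mcc
```
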
ComparisonMetrics.calculatePositiveLikelihoodRatio tp p fp n

Parameters:
    tp : float
    p : float
    fp : float
    n : float

Returns: float

Calculates the Positive likelihood ratio (LR+).

ComparisonMetrics.calculatePrecision tp fp

Parameters:
    tp : float
    fp : float

Returns: float

Calculates the precision or positive predictive value (PPV).

ComparisonMetrics.calculatePrevalence p samplesize

Parameters:
    p : float
    samplesize : float

Returns: float

Calculates the prevalence.

ComparisonMetrics.calculatePrevalenceThreshold fp n tp p

Parameters:
    fp : float
    n : float
    tp : float
    p : float

Returns: float

Calculates the prevalence threshold (PT).

ComparisonMetrics.calculateROC (actual, predictions)

Parameters:
    actual : seq<bool>
    predictions : seq<float>

Returns: (float * float)[]

ComparisonMetrics.calculateROC (actual, predictions, thresholds)

Parameters:
    actual : seq<bool>
    predictions : seq<float>
    thresholds : seq<float>

Returns: (float * float)[]

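A sketch of computing an ROC curve from boolean labels and scores. The data are invented, and which coordinate sits in which tuple slot is an assumption (FPR on the x axis and TPR on the y axis is the usual ROC convention):

```fsharp
open FSharp.Stats.Testing // assumed namespace

// hypothetical ground truth and classifier scores
let actual      = [ true; false; true; true; false ]
let predictions = [ 0.95; 0.80;  0.55; 0.40; 0.10  ]

// coordinate pairs of the ROC curve, one per evaluated threshold
let roc : (float * float)[] = ComparisonMetrics.calculateROC (actual, predictions)

roc |> Array.iter (fun (x, y) -> printfn "%.2f\t%.2f" x y)
```
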
ComparisonMetrics.calculateSensitivity tp p

Parameters:
    tp : float
    p : float

Returns: float

Calculates the sensitivity, recall, hit rate, or true positive rate (TPR).

ComparisonMetrics.calculateSpecificity tn n

Parameters:
    tn : float
    n : float

Returns: float

Calculates the specificity, selectivity, or true negative rate (TNR).

ComparisonMetrics.calculateThreatScore tp fn fp

Parameters:
    tp : float
    fn : float
    fp : float

Returns: float

Calculates the threat score (TS) or critical success index (CSI).

ComparisonMetrics.create bcm

Parameters:
    bcm : BinaryConfusionMatrix

Returns: ComparisonMetrics

ComparisonMetrics.create (tp, tn, fp, fn)

Parameters:
    tp : float
    tn : float
    fp : float
    fn : float

Returns: ComparisonMetrics

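A sketch of building the full metrics record directly from raw confusion-matrix counts (the counts are made up); every derived field listed above is then available on the returned record:

```fsharp
open FSharp.Stats.Testing // assumed namespace

// hypothetical counts: 45 TP, 40 TN, 10 FP, 5 FN
let cm = ComparisonMetrics.create (45.0, 40.0, 10.0, 5.0)

printfn "P = %.0f, N = %.0f, n = %.0f" cm.P cm.N cm.SampleSize
printfn "Precision = %.3f, Recall = %.3f, F1 = %.3f" cm.Precision cm.Sensitivity cm.F1
```
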
ComparisonMetrics.create (p, n, samplesize, tp, tn, fp, fn, sensitivity, specificity, precision, negativepredictivevalue, missrate, fallout, falsediscoveryrate, falseomissionrate, positivelikelihoodratio, negativelikelihoodratio, prevalencethreshold, threatscore, prevalence, accuracy, balancedaccuracy, f1, phicoefficient, fowlkesmallowsindex, informedness, markedness, diagnosticoddsratio)

Parameters:
    p : float
    n : float
    samplesize : float
    tp : float
    tn : float
    fp : float
    fn : float
    sensitivity : float
    specificity : float
    precision : float
    negativepredictivevalue : float
    missrate : float
    fallout : float
    falsediscoveryrate : float
    falseomissionrate : float
    positivelikelihoodratio : float
    negativelikelihoodratio : float
    prevalencethreshold : float
    threatscore : float
    prevalence : float
    accuracy : float
    balancedaccuracy : float
    f1 : float
    phicoefficient : float
    fowlkesmallowsindex : float
    informedness : float
    markedness : float
    diagnosticoddsratio : float

Returns: ComparisonMetrics

ComparisonMetrics.macroAverage bcms

Parameters:
    bcms : BinaryConfusionMatrix[]

Returns: ComparisonMetrics

ComparisonMetrics.macroAverage mlcm

Parameters:
    mlcm : MultiLabelConfusionMatrix

Returns: ComparisonMetrics

ComparisonMetrics.macroAverage metrics

Parameters:
    metrics : seq<ComparisonMetrics>

Returns: ComparisonMetrics

Calculates comparison metrics as the macro average of the given sequence of comparison metrics (each metric is the average of the respective metric across the sequence).

ComparisonMetrics.macroAverageOfMultiLabelPredictions (labels, actual, predictions)

Parameters:
    labels : 'a[]
    actual : 'a[]
    predictions : 'a[]

Returns: ComparisonMetrics

ComparisonMetrics.microAverage mlcm

Parameters:
    mlcm : MultiLabelConfusionMatrix

Returns: ComparisonMetrics

Calculates comparison metrics from the given MultiLabelConfusionMatrix as micro averages (one-vs-rest binary confusion matrices (TP/TN/FP/FN) are calculated for each label and aggregated before the metrics are calculated).

ComparisonMetrics.microAverage cms

Parameters:
    cms : seq<BinaryConfusionMatrix>

Returns: ComparisonMetrics

Calculates comparison metrics from multiple binary confusion matrices as micro averages (all TP/TN/FP/FN are aggregated before the metrics are calculated).

ComparisonMetrics.microAverageOfMultiLabelPredictions (labels, actual, predictions)

Parameters:
    labels : 'a[]
    actual : 'a[]
    predictions : 'a[]

Returns: ComparisonMetrics

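A sketch contrasting the two averaging strategies for a multi-class prediction task (labels and predictions are invented). Per the descriptions above, micro averaging pools the one-vs-rest counts before computing metrics, while macro averaging averages the per-label metrics:

```fsharp
open FSharp.Stats.Testing // assumed namespace

// hypothetical three-class problem
let labels      = [| "A"; "B"; "C" |]
let actual      = [| "A"; "B"; "C"; "A"; "B"; "C" |]
let predictions = [| "A"; "B"; "B"; "A"; "C"; "C" |]

let macro = ComparisonMetrics.macroAverageOfMultiLabelPredictions (labels, actual, predictions)
let micro = ComparisonMetrics.microAverageOfMultiLabelPredictions (labels, actual, predictions)

printfn "macro F1: %.3f" macro.F1
printfn "micro F1: %.3f" micro.F1
```
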
ComparisonMetrics.multiLabelThresholdMap (actual, predictions)

Parameters:
    actual : 'a[]
    predictions : ('a * float[])[]

Returns: Map<string, (float * ComparisonMetrics)[]>

ComparisonMetrics.ofBinaryPredictions (actual, predictions)

Parameters:
    actual : seq<bool>
    predictions : seq<bool>

Returns: ComparisonMetrics

ComparisonMetrics.ofBinaryPredictions (positiveLabel, actual, predictions)

Parameters:
    positiveLabel : 'A
    actual : seq<'A>
    predictions : seq<'A>

Returns: ComparisonMetrics
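
A sketch of the label-based overload, which treats one designated label as the positive class; the labels are invented and the namespace in the open statement is an assumption:

```fsharp
open FSharp.Stats.Testing // assumed namespace

// hypothetical labels; "sick" is treated as the positive class
let actual      = [ "sick"; "sick"; "healthy"; "healthy"; "sick" ]
let predictions = [ "sick"; "healthy"; "healthy"; "sick"; "sick" ]

let cm = ComparisonMetrics.ofBinaryPredictions ("sick", actual, predictions)

printfn "Sensitivity = %.2f, Specificity = %.2f" cm.Sensitivity cm.Specificity
```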