Comparison metrics that can be derived from a binary confusion matrix
Record field | Field type | Description
--- | --- | ---
Accuracy | float | Overall fraction of correct predictions: (TP + TN) / SampleSize.
BalancedAccuracy | float | Mean of Sensitivity and Specificity.
DiagnosticOddsRatio | float | Ratio of the positive to the negative likelihood ratio: (TP * TN) / (FP * FN).
F1 | float | Harmonic mean of Precision and Sensitivity: 2*TP / (2*TP + FP + FN).
FN | float | Number of false negatives.
FP | float | Number of false positives.
FallOut | float | False positive rate: FP / N.
FalseDiscoveryRate | float | FP / (FP + TP).
FalseOmissionRate | float | FN / (FN + TN).
FowlkesMallowsIndex | float | Geometric mean of Precision and Sensitivity.
Informedness | float | Sensitivity + Specificity - 1 (Youden's J).
Markedness | float | Precision + NegativePredictiveValue - 1.
Missrate | float | False negative rate: FN / P.
N | float | Number of actual negatives (TN + FP).
NegativeLikelihoodRatio | float | Missrate / Specificity.
NegativePredictiveValue | float | TN / (TN + FN).
P | float | Number of actual positives (TP + FN).
PhiCoefficient | float | Phi coefficient (Matthews correlation coefficient).
PositiveLikelihoodRatio | float | Sensitivity / FallOut.
Precision | float | Positive predictive value: TP / (TP + FP).
Prevalence | float | P / SampleSize.
PrevalenceThreshold | float | sqrt(FallOut) / (sqrt(Sensitivity) + sqrt(FallOut)).
SampleSize | float | Total number of samples (P + N).
Sensitivity | float | True positive rate (recall): TP / P.
Specificity | float | True negative rate: TN / N.
TN | float | Number of true negatives.
TP | float | Number of true positives.
ThreatScore | float | Critical success index: TP / (TP + FP + FN).
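For orientation, a minimal sketch of computing the record from hard boolean predictions and reading a few of its fields. The `open FSharp.Stats` line is an assumption about the containing namespace; adjust it to wherever `ComparisonMetrics` lives in your installed library.

```fsharp
open FSharp.Stats // assumption: namespace that exposes ComparisonMetrics

// ground-truth labels and hard (boolean) predictions of a binary classifier
let actual      = [ true; true; false; true; false; false; true; false ]
let predictions = [ true; false; false; true; true; false; true; false ]

// derive all comparison metrics at once from the implied confusion matrix
let cm = ComparisonMetrics.ofBinaryPredictions (actual, predictions)

// individual metrics are exposed as record fields
printfn "TP=%.0f FP=%.0f TN=%.0f FN=%.0f" cm.TP cm.FP cm.TN cm.FN
printfn "Accuracy:    %.3f" cm.Accuracy
printfn "Sensitivity: %.3f" cm.Sensitivity
printfn "Specificity: %.3f" cm.Specificity
printfn "F1:          %.3f" cm.F1
```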
Static members
ComparisonMetrics.binaryThresholdMap (actual, predictions)
    Parameters: actual : seq<bool>, predictions : seq<float>
    Returns: (float * ComparisonMetrics)[]
    Description: Computes comparison metrics per decision threshold from boolean labels and prediction scores.

ComparisonMetrics.binaryThresholdMap (actual, predictions, thresholds)
    Parameters: actual : seq<bool>, predictions : seq<float>, thresholds : seq<float>
    Returns: (float * ComparisonMetrics)[]
    Description: Computes comparison metrics for each of the given decision thresholds from boolean labels and prediction scores.

ComparisonMetrics.binaryThresholdMap tm
    Parameters: tm : (float * BinaryConfusionMatrix)[]
    Returns: (float * ComparisonMetrics)[]
    Description: Computes comparison metrics for each given (threshold, BinaryConfusionMatrix) pair.
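A sketch of mapping an explicit grid of decision thresholds to the metrics obtained at each threshold, using the overload that takes scores and thresholds (same namespace assumption as above; the exact binarization rule applied at each threshold is not documented here, so treat the output interpretation accordingly):

```fsharp
open FSharp.Stats // assumption: namespace that exposes ComparisonMetrics

let actual = [ true; true; false; true; false; false ]
let scores = [ 0.9 ; 0.7 ; 0.6  ; 0.4 ; 0.3  ; 0.1  ] // classifier scores for the positive class

// comparison metrics for an explicit grid of thresholds
let byThreshold =
    ComparisonMetrics.binaryThresholdMap (actual, scores, [ 0.2; 0.5; 0.8 ])

byThreshold
|> Array.iter (fun (threshold, cm) ->
    printfn "t=%.1f  accuracy=%.3f  F1=%.3f" threshold cm.Accuracy cm.F1)
```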
ComparisonMetrics.calculateAccuracy tp tn samplesize
    Parameters: tp : float, tn : float, samplesize : float
    Returns: float
    Description: Accuracy: (tp + tn) / samplesize.

ComparisonMetrics.calculateBalancedAccuracy tp p tn n
    Parameters: tp : float, p : float, tn : float, n : float
    Returns: float
    Description: Balanced accuracy: (tp/p + tn/n) / 2.

ComparisonMetrics.calculateDiagnosticOddsRatio tp tn fp fn p n
    Parameters: tp : float, tn : float, fp : float, fn : float, p : float, n : float
    Returns: float
    Description: Diagnostic odds ratio: ratio of the positive to the negative likelihood ratio.

ComparisonMetrics.calculateF1 tp fp fn
    Parameters: tp : float, fp : float, fn : float
    Returns: float
    Description: F1 score: 2*tp / (2*tp + fp + fn).

ComparisonMetrics.calculateFallOut fp n
    Parameters: fp : float, n : float
    Returns: float
    Description: Fall-out (false positive rate): fp / n.

ComparisonMetrics.calculateFalseDiscoveryRate fp tp
    Parameters: fp : float, tp : float
    Returns: float
    Description: False discovery rate: fp / (fp + tp).

ComparisonMetrics.calculateFalseOmissionRate fn tn
    Parameters: fn : float, tn : float
    Returns: float
    Description: False omission rate: fn / (fn + tn).

ComparisonMetrics.calculateFowlkesMallowsIndex tp fp p
    Parameters: tp : float, fp : float, p : float
    Returns: float
    Description: Fowlkes-Mallows index: geometric mean of precision and sensitivity.

ComparisonMetrics.calculateInformedness tp p tn n
    Parameters: tp : float, p : float, tn : float, n : float
    Returns: float
    Description: Informedness: sensitivity + specificity - 1.

ComparisonMetrics.calculateMarkedness tp fp tn fn
    Parameters: tp : float, fp : float, tn : float, fn : float
    Returns: float
    Description: Markedness: precision + negative predictive value - 1.

ComparisonMetrics.calculateMissrate fn p
    Parameters: fn : float, p : float
    Returns: float
    Description: Miss rate (false negative rate): fn / p.

ComparisonMetrics.calculateMultiLabelROC (actual, predictions)
    Parameters: actual : 'a[], predictions : ('a * float[])[]
    Returns: Map<string, (float * float)[]>
    Description: Computes one-vs-rest ROC coordinates per label from multi-label prediction scores.

ComparisonMetrics.calculateNegativeLikelihoodRatio fn p tn n
    Parameters: fn : float, p : float, tn : float, n : float
    Returns: float
    Description: Negative likelihood ratio: miss rate / specificity.

ComparisonMetrics.calculateNegativePredictiveValue tn fn
    Parameters: tn : float, fn : float
    Returns: float
    Description: Negative predictive value: tn / (tn + fn).

ComparisonMetrics.calculatePhiCoefficient tp tn fp fn
    Parameters: tp : float, tn : float, fp : float, fn : float
    Returns: float
    Description: Phi coefficient (Matthews correlation coefficient).

ComparisonMetrics.calculatePositiveLikelihoodRatio tp p fp n
    Parameters: tp : float, p : float, fp : float, n : float
    Returns: float
    Description: Positive likelihood ratio: sensitivity / fall-out.

ComparisonMetrics.calculatePrecision tp fp
    Parameters: tp : float, fp : float
    Returns: float
    Description: Precision (positive predictive value): tp / (tp + fp).

ComparisonMetrics.calculatePrevalence p samplesize
    Parameters: p : float, samplesize : float
    Returns: float
    Description: Prevalence: p / samplesize.

ComparisonMetrics.calculatePrevalenceThreshold fp n tp p
    Parameters: fp : float, n : float, tp : float, p : float
    Returns: float
    Description: Prevalence threshold: sqrt(fall-out) / (sqrt(sensitivity) + sqrt(fall-out)).
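Where only raw counts are at hand, the scalar calculate* helpers can be used directly. A small worked example for two of them, with illustrative counts and the standard formulas noted in the comments (same namespace assumption as above):

```fsharp
open FSharp.Stats // assumption: namespace that exposes ComparisonMetrics

// counts of a hypothetical binary confusion matrix
let tp, tn, fp, fn = 40., 45., 5., 10.
let samplesize = tp + tn + fp + fn          // 100 samples in total

// accuracy = (tp + tn) / samplesize = 85 / 100 = 0.85
let accuracy = ComparisonMetrics.calculateAccuracy tp tn samplesize

// F1 = 2*tp / (2*tp + fp + fn) = 80 / 95 ≈ 0.842
let f1 = ComparisonMetrics.calculateF1 tp fp fn

printfn "accuracy=%.3f F1=%.3f" accuracy f1
```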
ComparisonMetrics.calculateROC (actual, predictions)
    Parameters: actual : seq<bool>, predictions : seq<float>
    Returns: (float * float)[]
    Description: Computes ROC curve coordinates from boolean labels and prediction scores.

ComparisonMetrics.calculateROC (actual, predictions, thresholds)
    Parameters: actual : seq<bool>, predictions : seq<float>, thresholds : seq<float>
    Returns: (float * float)[]
    Description: Computes ROC curve coordinates for the given decision thresholds from boolean labels and prediction scores.
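A sketch of obtaining raw ROC coordinates from scores (same namespace assumption as above). The returned pairs are assumed here to be (false positive rate, true positive rate) per evaluated threshold; check the implementation if the axis order matters for your plot.

```fsharp
open FSharp.Stats // assumption: namespace that exposes ComparisonMetrics

let actual = [ true; true; false; true; false; false ]
let scores = [ 0.9 ; 0.7 ; 0.6  ; 0.4 ; 0.3  ; 0.1  ]

// one coordinate pair per evaluated threshold
let rocPoints : (float * float)[] = ComparisonMetrics.calculateROC (actual, scores)

rocPoints |> Array.iter (fun (x, y) -> printfn "%.3f\t%.3f" x y)
```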
ComparisonMetrics.calculateSensitivity tp p
    Parameters: tp : float, p : float
    Returns: float
    Description: Sensitivity (true positive rate, recall): tp / p.

ComparisonMetrics.calculateSpecificity tn n
    Parameters: tn : float, n : float
    Returns: float
    Description: Specificity (true negative rate): tn / n.

ComparisonMetrics.calculateThreatScore tp fn fp
    Parameters: tp : float, fn : float, fp : float
    Returns: float
    Description: Threat score (critical success index): tp / (tp + fn + fp).
ComparisonMetrics.create bcm
    Parameters: bcm : BinaryConfusionMatrix
    Returns: ComparisonMetrics
    Description: Derives all comparison metrics from the given binary confusion matrix.

ComparisonMetrics.create (tp, tn, fp, fn)
    Parameters: tp : float, tn : float, fp : float, fn : float
    Returns: ComparisonMetrics
    Description: Derives all comparison metrics from the given TP/TN/FP/FN counts.

ComparisonMetrics.create (p, n, samplesize, tp, tn, fp, fn, sensitivity, specificity, precision, negativepredictivevalue, missrate, fallout, falsediscoveryrate, falseomissionrate, positivelikelihoodratio, negativelikelihoodratio, prevalencethreshold, threatscore, prevalence, accuracy, balancedaccuracy, f1, phicoefficient, fowlkesmallowsindex, informedness, markedness, diagnosticoddsratio)
    Parameters: 28 float values, one for each record field, in the order given in the signature
    Returns: ComparisonMetrics
    Description: Constructs a ComparisonMetrics record directly from precomputed values.
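When the four cell counts of a binary confusion matrix are already known, the tupled create overload derives the full metric record from them. A minimal sketch (same namespace assumption as above):

```fsharp
open FSharp.Stats // assumption: namespace that exposes ComparisonMetrics

// TP, TN, FP, FN of an already-evaluated classifier
let cm = ComparisonMetrics.create (40., 45., 5., 10.)

printfn "Prevalence:       %.2f" cm.Prevalence        // P / SampleSize
printfn "BalancedAccuracy: %.2f" cm.BalancedAccuracy
printfn "PhiCoefficient:   %.2f" cm.PhiCoefficient
```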
ComparisonMetrics.macroAverage bcms
    Parameters: bcms : BinaryConfusionMatrix[]
    Returns: ComparisonMetrics
    Description: Calculates macro-averaged comparison metrics from the given binary confusion matrices (metrics are computed per matrix and then averaged).

ComparisonMetrics.macroAverage mlcm
    Parameters: mlcm : MultiLabelConfusionMatrix
    Returns: ComparisonMetrics
    Description: Calculates macro-averaged comparison metrics from the given MultiLabelConfusionMatrix (one-vs-rest metrics are computed per label and then averaged).

ComparisonMetrics.macroAverage metrics
    Parameters: metrics : seq<ComparisonMetrics>
    Returns: ComparisonMetrics
    Description: Calculates comparison metrics as the macro average of the given sequence of comparison metrics (each metric of the result is the average of the respective metrics).
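A sketch of macro-averaging metrics computed for several separate evaluations (for example cross-validation folds), using the seq<ComparisonMetrics> overload (same namespace assumption as above):

```fsharp
open FSharp.Stats // assumption: namespace that exposes ComparisonMetrics

// metrics of two separate evaluations (e.g. two cross-validation folds)
let fold1 = ComparisonMetrics.create (40., 45., 5., 10.)
let fold2 = ComparisonMetrics.create (35., 50., 8.,  7.)

// every metric of the result is the average of the corresponding metrics
let macro = ComparisonMetrics.macroAverage [ fold1; fold2 ]

printfn "macro-averaged F1: %.3f" macro.F1
```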
ComparisonMetrics.macroAverageOfMultiLabelPredictions (labels, actual, predictions)
    Parameters: labels : 'a[], actual : 'a[], predictions : 'a[]
    Returns: ComparisonMetrics
    Description: Calculates macro-averaged comparison metrics from actual and predicted labels, treating each of the given labels one-vs-rest.
ComparisonMetrics.microAverage mlcm
    Parameters: mlcm : MultiLabelConfusionMatrix
    Returns: ComparisonMetrics
    Description: Calculates comparison metrics from the given MultiLabelConfusionMatrix as micro averages (one-vs-rest binary confusion matrices (TP/TN/FP/FN) are calculated for each label and aggregated before the metrics are calculated).

ComparisonMetrics.microAverage cms
    Parameters: cms : seq<BinaryConfusionMatrix>
    Returns: ComparisonMetrics
    Description: Calculates comparison metrics from multiple binary confusion matrices as micro averages (all TP/TN/FP/FN are aggregated before the metrics are calculated).
ComparisonMetrics.microAverageOfMultiLabelPredictions (labels, actual, predictions)
    Parameters: labels : 'a[], actual : 'a[], predictions : 'a[]
    Returns: ComparisonMetrics
    Description: Calculates micro-averaged comparison metrics from actual and predicted labels, treating each of the given labels one-vs-rest and pooling the counts before the metrics are calculated.
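For multi-class evaluation from plain label vectors, a sketch using the *OfMultiLabelPredictions helpers (same namespace assumption as above; the example labels are made up):

```fsharp
open FSharp.Stats // assumption: namespace that exposes ComparisonMetrics

let labels      = [| "A"; "B"; "C" |]
let actual      = [| "A"; "B"; "C"; "A"; "B"; "C"; "A" |]
let predictions = [| "A"; "B"; "B"; "A"; "C"; "C"; "B" |]

// macro: one-vs-rest metrics per label are averaged
// micro: one-vs-rest counts are pooled before the metrics are calculated
let macro = ComparisonMetrics.macroAverageOfMultiLabelPredictions (labels, actual, predictions)
let micro = ComparisonMetrics.microAverageOfMultiLabelPredictions (labels, actual, predictions)

printfn "macro F1: %.3f  micro F1: %.3f" macro.F1 micro.F1
```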
ComparisonMetrics.multiLabelThresholdMap (actual, predictions)
    Parameters: actual : 'a[], predictions : ('a * float[])[]
    Returns: Map<string, (float * ComparisonMetrics)[]>
    Description: Computes per-label comparison metrics per decision threshold from multi-label prediction scores (one-vs-rest).
ComparisonMetrics.ofBinaryPredictions (actual, predictions)
    Parameters: actual : seq<bool>, predictions : seq<bool>
    Returns: ComparisonMetrics
    Description: Calculates comparison metrics from actual and predicted boolean labels.

ComparisonMetrics.ofBinaryPredictions (positiveLabel, actual, predictions)
    Parameters: positiveLabel : 'A, actual : seq<'A>, predictions : seq<'A>
    Returns: ComparisonMetrics
    Description: Calculates comparison metrics from actual and predicted labels, treating positiveLabel as the positive class.
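When labels are not booleans, the overload with an explicit positive label can be used. A sketch with string labels (same namespace assumption as above):

```fsharp
open FSharp.Stats // assumption: namespace that exposes ComparisonMetrics

let actual      = [ "sick"; "healthy"; "sick"; "healthy"; "sick" ]
let predictions = [ "sick"; "sick"   ; "sick"; "healthy"; "healthy" ]

// "sick" is treated as the positive class; every other label counts as negative
let cm = ComparisonMetrics.ofBinaryPredictions ("sick", actual, predictions)

printfn "Sensitivity: %.2f  Specificity: %.2f" cm.Sensitivity cm.Specificity
```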