
Series and time series features

This page covers features for working with ordered series and time series — alignment, windowing, chunking, resampling, and arithmetic operators. For basic series and frame operations, see the quick start. For frame-specific features, see data frames.

Sample data

We generate random stock-like prices to demonstrate time series operations. The function below uses geometric Brownian motion:

// Requires references to the Deedle and MathNet.Numerics packages
open System
open Deedle
open MathNet.Numerics.Distributions

/// Generate random prices with geometric Brownian motion
let randomPrice seed drift volatility initial (start: DateTimeOffset) (span: TimeSpan) count = 
  let dist = Normal(0.0, 1.0, RandomSource = Random(seed))
  let dt = span.TotalDays / 250.0
  let driftExp = (drift - 0.5 * pown volatility 2) * dt
  let randExp = volatility * (sqrt dt)
  (start, initial) |> Seq.unfold (fun (time, price) ->
    let price = price * exp (driftExp + randExp * dist.Sample())
    Some((time, price), (time + span, price)))
  |> Seq.take count

let today = DateTimeOffset(DateTime.Today)
let stock1 = randomPrice 1 0.1 3.0 20.0 today 
let stock2 = randomPrice 2 0.2 1.5 22.0 today
val randomPrice:
  seed: int ->
    drift: float ->
    volatility: float ->
    initial: float ->
    start: DateTimeOffset ->
    span: TimeSpan -> count: int -> (DateTimeOffset * float) seq
val today: DateTimeOffset = 05/09/2026 00:00:00 +00:00
val stock1: (TimeSpan -> int -> (DateTimeOffset * float) seq)
val stock2: (TimeSpan -> int -> (DateTimeOffset * float) seq)

Call stock1 or stock2 with a TimeSpan and count to get prices at different intervals.

Alignment and zipping

A key time series feature is automatic alignment: series with different keys can be combined, matching values either by exact key or by the nearest available key.

// Hourly, half-hourly, and 65-minute series
let s1 = stock1 (TimeSpan(1, 0, 0)) 6 |> series
let s2 = stock2 (TimeSpan(0, 30, 0)) 12 |> series
let s3 = stock1 (TimeSpan(1, 5, 0)) 6 |> series
val s1: Series<DateTimeOffset,float> =
  
05/09/2026 00:00:00 +00:00 -> 19.754973736875044 
05/09/2026 01:00:00 +00:00 -> 19.597848006247823 
05/09/2026 02:00:00 +00:00 -> 21.082236720927117 
05/09/2026 03:00:00 +00:00 -> 20.934518683841578 
05/09/2026 04:00:00 +00:00 -> 20.306154004026492 
05/09/2026 05:00:00 +00:00 -> 21.34621921893587  

val s2: Series<DateTimeOffset,float> =
  
05/09/2026 00:00:00 +00:00 -> 22.424874209559682 
05/09/2026 00:30:00 +00:00 -> 22.22064239460307  
05/09/2026 01:00:00 +00:00 -> 22.641109750713557 
05/09/2026 01:30:00 +00:00 -> 22.56349501269532  
05/09/2026 02:00:00 +00:00 -> 22.320650673323108 
05/09/2026 02:30:00 +00:00 -> 22.55690209487088  
05/09/2026 03:00:00 +00:00 -> 22.62851382820692  
05/09/2026 03:30:00 +00:00 -> 22.43493432725331  
05/09/2026 04:00:00 +00:00 -> 22.163775506094186 
05/09/2026 04:30:00 +00:00 -> 22.226264582999793 
05/09/2026 05:00:00 +00:00 -> 22.036685215111863 
05/09/2026 05:30:00 +00:00 -> 21.907097475404807 

val s3: Series<DateTimeOffset,float> =
  
05/09/2026 00:00:00 +00:00 -> 19.744417239564633 
05/09/2026 01:05:00 +00:00 -> 19.580379327241104 
05/09/2026 02:10:00 +00:00 -> 21.12567580554974  
05/09/2026 03:15:00 +00:00 -> 20.970977681224515 
05/09/2026 04:20:00 +00:00 -> 20.315588178436663 
05/09/2026 05:25:00 +00:00 -> 21.39907281999011

Zipping series

Zip combines two series into a series of pairs, with configurable alignment. Because the keys of the two series may not match exactly, the resulting values are pairs of potentially missing values (wrapped in Deedle's `opt` type):

// Match values from right series to keys of the left one
s1.Zip(s2, JoinKind.Left)

// Match values from the left series to keys of the right one
s1.Zip(s2, JoinKind.Right)

// Use left series key and find the nearest previous value from the right series
s1.Zip(s2, JoinKind.Left, Lookup.ExactOrSmaller)
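When only matching keys are needed and the `opt` wrapping is not, `Series.zipInner` pairs plain values for keys present in both series (a sketch, assuming the `s1` and `s2` series defined above):

```fsharp
// Inner zip: keep only keys present in both series; values are
// plain pairs rather than potentially-missing `opt` values
Series.zipInner s1 s2
```

Keys appearing in only one of the series (here, the half-hour marks of `s2`) are dropped.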

Joining data frames

The same alignment works at frame level via Join:

// Contains a value for each hour
let f1 = Frame.ofColumns ["S1" => s1]
// Contains a value every 30 minutes
let f2 = Frame.ofColumns ["S2" => s2]
// Contains values at 65-minute offsets
let f3 = Frame.ofColumns ["S3" => s3]
val f1: Frame<DateTimeOffset,string> =
  
                              S1                 
05/09/2026 00:00:00 +00:00 -> 19.754973736875044 
05/09/2026 01:00:00 +00:00 -> 19.597848006247823 
05/09/2026 02:00:00 +00:00 -> 21.082236720927117 
05/09/2026 03:00:00 +00:00 -> 20.934518683841578 
05/09/2026 04:00:00 +00:00 -> 20.306154004026492 
05/09/2026 05:00:00 +00:00 -> 21.34621921893587  

val f2: Frame<DateTimeOffset,string> =
  
                              S2                 
05/09/2026 00:00:00 +00:00 -> 22.424874209559682 
05/09/2026 00:30:00 +00:00 -> 22.22064239460307  
05/09/2026 01:00:00 +00:00 -> 22.641109750713557 
05/09/2026 01:30:00 +00:00 -> 22.56349501269532  
05/09/2026 02:00:00 +00:00 -> 22.320650673323108 
05/09/2026 02:30:00 +00:00 -> 22.55690209487088  
05/09/2026 03:00:00 +00:00 -> 22.62851382820692  
05/09/2026 03:30:00 +00:00 -> 22.43493432725331  
05/09/2026 04:00:00 +00:00 -> 22.163775506094186 
05/09/2026 04:30:00 +00:00 -> 22.226264582999793 
05/09/2026 05:00:00 +00:00 -> 22.036685215111863 
05/09/2026 05:30:00 +00:00 -> 21.907097475404807 

val f3: Frame<DateTimeOffset,string> =
  
                              S3                 
05/09/2026 00:00:00 +00:00 -> 19.744417239564633 
05/09/2026 01:05:00 +00:00 -> 19.580379327241104 
05/09/2026 02:10:00 +00:00 -> 21.12567580554974  
05/09/2026 03:15:00 +00:00 -> 20.970977681224515 
05/09/2026 04:20:00 +00:00 -> 20.315588178436663 
05/09/2026 05:25:00 +00:00 -> 21.39907281999011

// Union the keys from both frames and align corresponding values
f1.Join(f2, JoinKind.Outer)
val it: Frame<DateTimeOffset,string> =
  
                              S1                 S2                 
05/09/2026 00:00:00 +00:00 -> 19.754973736875044 22.424874209559682 
05/09/2026 00:30:00 +00:00 -> <missing>          22.22064239460307  
05/09/2026 01:00:00 +00:00 -> 19.597848006247823 22.641109750713557 
05/09/2026 01:30:00 +00:00 -> <missing>          22.56349501269532  
05/09/2026 02:00:00 +00:00 -> 21.082236720927117 22.320650673323108 
05/09/2026 02:30:00 +00:00 -> <missing>          22.55690209487088  
05/09/2026 03:00:00 +00:00 -> 20.934518683841578 22.62851382820692  
05/09/2026 03:30:00 +00:00 -> <missing>          22.43493432725331  
05/09/2026 04:00:00 +00:00 -> 20.306154004026492 22.163775506094186 
05/09/2026 04:30:00 +00:00 -> <missing>          22.226264582999793 
05/09/2026 05:00:00 +00:00 -> 21.34621921893587  22.036685215111863 
05/09/2026 05:30:00 +00:00 -> <missing>          21.907097475404807

// Take only keys where both frames contain all values
f2.Join(f3, JoinKind.Inner)

// Take keys from the left frame and find the nearest smaller value from the right frame
f2.Join(f3, JoinKind.Left, Lookup.ExactOrSmaller)
val it: Frame<DateTimeOffset,string> =
  
                              S2                 S3                 
05/09/2026 00:00:00 +00:00 -> 22.424874209559682 19.744417239564633 
05/09/2026 00:30:00 +00:00 -> 22.22064239460307  19.744417239564633 
05/09/2026 01:00:00 +00:00 -> 22.641109750713557 19.744417239564633 
05/09/2026 01:30:00 +00:00 -> 22.56349501269532  19.580379327241104 
05/09/2026 02:00:00 +00:00 -> 22.320650673323108 19.580379327241104 
05/09/2026 02:30:00 +00:00 -> 22.55690209487088  21.12567580554974  
05/09/2026 03:00:00 +00:00 -> 22.62851382820692  21.12567580554974  
05/09/2026 03:30:00 +00:00 -> 22.43493432725331  20.970977681224515 
05/09/2026 04:00:00 +00:00 -> 22.163775506094186 20.970977681224515 
05/09/2026 04:30:00 +00:00 -> 22.226264582999793 20.315588178436663 
05/09/2026 05:00:00 +00:00 -> 22.036685215111863 20.315588178436663 
05/09/2026 05:30:00 +00:00 -> 21.907097475404807 21.39907281999011
// Function syntax equivalents
Frame.join JoinKind.Outer f1 f2
Frame.joinAlign JoinKind.Left Lookup.ExactOrSmaller f1 f2

For more on joins, see Joining and merging.

Windowing, chunking, and pairwise

These operations aggregate consecutive elements — unlike grouping, they rely on ordering.

Sliding windows

// Create input series with 6 observations
let lf = stock1 (TimeSpan(0, 1, 0)) 6 |> series

// Sliding windows of size 4
lf |> Series.window 4
val lf: Series<DateTimeOffset,float> =
  
05/09/2026 00:00:00 +00:00 -> 19.969843648835873 
05/09/2026 00:01:00 +00:00 -> 19.950911695878382 
05/09/2026 00:02:00 +00:00 -> 20.1415124102494   
05/09/2026 00:03:00 +00:00 -> 20.12489645313076  
05/09/2026 00:04:00 +00:00 -> 20.04752630081334  
05/09/2026 00:05:00 +00:00 -> 20.17888621521063  

val it: Series<DateTimeOffset,Series<DateTimeOffset,float>> =
  
05/09/2026 00:03:00 +00:00 -> series [ 05/09/2026 00:00:00 +00:00 => 19.96984... 
05/09/2026 00:04:00 +00:00 -> series [ 05/09/2026 00:01:00 +00:00 => 19.95091... 
05/09/2026 00:05:00 +00:00 -> series [ 05/09/2026 00:02:00 +00:00 => 20.14151...
// Aggregate each window
lf |> Series.windowInto 4 Stats.mean
val it: Series<DateTimeOffset,float> =
  
05/09/2026 00:03:00 +00:00 -> 20.046791052023604 
05/09/2026 00:04:00 +00:00 -> 20.06621171501797  
05/09/2026 00:05:00 +00:00 -> 20.123205344851034
// First value of each window
lf |> Series.windowInto 4 Series.firstValue
val it: Series<DateTimeOffset,float> =
  
05/09/2026 00:03:00 +00:00 -> 19.969843648835873 
05/09/2026 00:04:00 +00:00 -> 19.950911695878382 
05/09/2026 00:05:00 +00:00 -> 20.1415124102494

Given input [1,2,3,4,5,6], windows of size 4 produce: [1,2,3,4], [2,3,4,5], [3,4,5,6].
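This window semantics can be checked on a tiny ordinal series (a sketch; `nums` is a name introduced here, and Deedle is assumed to be opened):

```fsharp
let nums = series [ for i in 1 .. 6 -> i, i ]

// Reduce each window of size 4 to a string of its values; as shown above,
// the result is keyed by the last key of each window
nums |> Series.windowInto 4 (fun w ->
  w |> Series.values |> Seq.map string |> String.concat ",")
// 4 -> "1,2,3,4", 5 -> "2,3,4,5", 6 -> "3,4,5,6"
```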

Performance tip: For rolling statistics, prefer the dedicated Stats.moving* functions which use O(n) online algorithms instead of materialising each window:

lf |> Stats.movingMean 4     // fast
lf |> Series.windowInto 4 Stats.mean  // slow

See Statistics for the full list.

To avoid missing values at the start of the series, allow incomplete windows at the beginning:

let lfm2 = 
  // Create sliding windows with incomplete windows at the beginning
  lf |> Series.windowSizeInto (4, Boundary.AtBeginning) (fun ds ->
    Stats.mean ds.Data)

Frame.ofColumns [ "Orig" => lf; "Means" => lfm2 ]
val lfm2: Series<DateTimeOffset,float> =
  
05/09/2026 00:00:00 +00:00 -> 19.969843648835873 
05/09/2026 00:01:00 +00:00 -> 19.960377672357126 
05/09/2026 00:02:00 +00:00 -> 20.020755918321218 
05/09/2026 00:03:00 +00:00 -> 20.046791052023604 
05/09/2026 00:04:00 +00:00 -> 20.06621171501797  
05/09/2026 00:05:00 +00:00 -> 20.123205344851034 

val it: Frame<DateTimeOffset,string> =
  
                              Orig               Means              
05/09/2026 00:00:00 +00:00 -> 19.969843648835873 19.969843648835873 
05/09/2026 00:01:00 +00:00 -> 19.950911695878382 19.960377672357126 
05/09/2026 00:02:00 +00:00 -> 20.1415124102494   20.020755918321218 
05/09/2026 00:03:00 +00:00 -> 20.12489645313076  20.046791052023604 
05/09/2026 00:04:00 +00:00 -> 20.04752630081334  20.06621171501797  
05/09/2026 00:05:00 +00:00 -> 20.17888621521063  20.123205344851034

The DataSegment<T> type tells you whether a window is Complete or Incomplete:

// Simple series with characters
let st = Series.ofValues [ 'a' .. 'e' ]
val st: Series<int,char> = 
0 -> a 
1 -> b 
2 -> c 
3 -> d 
4 -> e
st |> Series.windowSizeInto (3, Boundary.AtEnding) (function
  | DataSegment.Complete(ser) -> 
      // Return complete windows as uppercase strings
      String(ser |> Series.values |> Array.ofSeq).ToUpper()
  | DataSegment.Incomplete(ser) -> 
      // Return incomplete windows as padded lowercase strings
      String(ser |> Series.values |> Array.ofSeq).PadRight(3, '-') )  
val it: Series<int,string> =
  
0 -> ABC 
1 -> BCD 
2 -> CDE 
3 -> de- 
4 -> e--

Window size conditions

Windows can also end based on key distance or a predicate:

// Generate prices for each hour over 30 days
let hourly = stock1 (TimeSpan(1, 0, 0)) (30*24) |> series
val hourly: Series<DateTimeOffset,float> =
  
05/09/2026 00:00:00 +00:00 -> 19.754973736875044 
05/09/2026 01:00:00 +00:00 -> 19.597848006247823 
05/09/2026 02:00:00 +00:00 -> 21.082236720927117 
05/09/2026 03:00:00 +00:00 -> 20.934518683841578 
05/09/2026 04:00:00 +00:00 -> 20.306154004026492 
05/09/2026 05:00:00 +00:00 -> 21.34621921893587  
05/09/2026 06:00:00 +00:00 -> 20.13623022607118  
05/09/2026 07:00:00 +00:00 -> 20.964411338110917 
05/09/2026 08:00:00 +00:00 -> 20.085441882388306 
05/09/2026 09:00:00 +00:00 -> 20.42922351154321  
05/09/2026 10:00:00 +00:00 -> 21.214741111696384 
05/09/2026 11:00:00 +00:00 -> 21.572477527580084 
05/09/2026 12:00:00 +00:00 -> 22.99716413822758  
05/09/2026 13:00:00 +00:00 -> 24.401127449884047 
05/09/2026 14:00:00 +00:00 -> 23.735809981259326 
...                        -> ...                
06/07/2026 09:00:00 +00:00 -> 16.960827204255835 
06/07/2026 10:00:00 +00:00 -> 16.301755083271065 
06/07/2026 11:00:00 +00:00 -> 16.12107447346734  
06/07/2026 12:00:00 +00:00 -> 15.755531867476078 
06/07/2026 13:00:00 +00:00 -> 15.915184711166045 
06/07/2026 14:00:00 +00:00 -> 15.469260998634388 
06/07/2026 15:00:00 +00:00 -> 15.453008413249506 
06/07/2026 16:00:00 +00:00 -> 14.272612513596362 
06/07/2026 17:00:00 +00:00 -> 13.66800293915071  
06/07/2026 18:00:00 +00:00 -> 12.832238020275195 
06/07/2026 19:00:00 +00:00 -> 11.686555381834928 
06/07/2026 20:00:00 +00:00 -> 11.54977517938337  
06/07/2026 21:00:00 +00:00 -> 11.357197331201002 
06/07/2026 22:00:00 +00:00 -> 10.930088874293963 
06/07/2026 23:00:00 +00:00 -> 10.880499730141674
// Generate windows of size 1 day
hourly |> Series.windowDist (TimeSpan(24, 0, 0))
val it: Series<DateTimeOffset,Series<DateTimeOffset,float>> =
  
05/09/2026 00:00:00 +00:00 -> series [ 05/09/2026 00:00:00 +00:00 => 19.75497... 
05/09/2026 01:00:00 +00:00 -> series [ 05/09/2026 01:00:00 +00:00 => 19.59784... 
05/09/2026 02:00:00 +00:00 -> series [ 05/09/2026 02:00:00 +00:00 => 21.08223... 
05/09/2026 03:00:00 +00:00 -> series [ 05/09/2026 03:00:00 +00:00 => 20.93451... 
05/09/2026 04:00:00 +00:00 -> series [ 05/09/2026 04:00:00 +00:00 => 20.30615... 
05/09/2026 05:00:00 +00:00 -> series [ 05/09/2026 05:00:00 +00:00 => 21.34621... 
05/09/2026 06:00:00 +00:00 -> series [ 05/09/2026 06:00:00 +00:00 => 20.13623... 
05/09/2026 07:00:00 +00:00 -> series [ 05/09/2026 07:00:00 +00:00 => 20.96441... 
05/09/2026 08:00:00 +00:00 -> series [ 05/09/2026 08:00:00 +00:00 => 20.08544... 
05/09/2026 09:00:00 +00:00 -> series [ 05/09/2026 09:00:00 +00:00 => 20.42922... 
05/09/2026 10:00:00 +00:00 -> series [ 05/09/2026 10:00:00 +00:00 => 21.21474... 
05/09/2026 11:00:00 +00:00 -> series [ 05/09/2026 11:00:00 +00:00 => 21.57247... 
05/09/2026 12:00:00 +00:00 -> series [ 05/09/2026 12:00:00 +00:00 => 22.99716... 
05/09/2026 13:00:00 +00:00 -> series [ 05/09/2026 13:00:00 +00:00 => 24.40112... 
05/09/2026 14:00:00 +00:00 -> series [ 05/09/2026 14:00:00 +00:00 => 23.73580... 
...                        -> ...                                                
06/07/2026 09:00:00 +00:00 -> series [ 06/07/2026 09:00:00 +00:00 => 16.96082... 
06/07/2026 10:00:00 +00:00 -> series [ 06/07/2026 10:00:00 +00:00 => 16.30175... 
06/07/2026 11:00:00 +00:00 -> series [ 06/07/2026 11:00:00 +00:00 => 16.12107... 
06/07/2026 12:00:00 +00:00 -> series [ 06/07/2026 12:00:00 +00:00 => 15.75553... 
06/07/2026 13:00:00 +00:00 -> series [ 06/07/2026 13:00:00 +00:00 => 15.91518... 
06/07/2026 14:00:00 +00:00 -> series [ 06/07/2026 14:00:00 +00:00 => 15.46926... 
06/07/2026 15:00:00 +00:00 -> series [ 06/07/2026 15:00:00 +00:00 => 15.45300... 
06/07/2026 16:00:00 +00:00 -> series [ 06/07/2026 16:00:00 +00:00 => 14.27261... 
06/07/2026 17:00:00 +00:00 -> series [ 06/07/2026 17:00:00 +00:00 => 13.66800... 
06/07/2026 18:00:00 +00:00 -> series [ 06/07/2026 18:00:00 +00:00 => 12.83223... 
06/07/2026 19:00:00 +00:00 -> series [ 06/07/2026 19:00:00 +00:00 => 11.68655... 
06/07/2026 20:00:00 +00:00 -> series [ 06/07/2026 20:00:00 +00:00 => 11.54977... 
06/07/2026 21:00:00 +00:00 -> series [ 06/07/2026 21:00:00 +00:00 => 11.35719... 
06/07/2026 22:00:00 +00:00 -> series [ 06/07/2026 22:00:00 +00:00 => 10.93008... 
06/07/2026 23:00:00 +00:00 -> series [ 06/07/2026 23:00:00 +00:00 => 10.88049...
// Generate windows such that date in each window is the same
hourly |> Series.windowWhile (fun d1 d2 -> d1.Date = d2.Date)
val it: Series<DateTimeOffset,Series<DateTimeOffset,float>> =
  
05/09/2026 00:00:00 +00:00 -> series [ 05/09/2026 00:00:00 +00:00 => 19.75497... 
05/09/2026 01:00:00 +00:00 -> series [ 05/09/2026 01:00:00 +00:00 => 19.59784... 
05/09/2026 02:00:00 +00:00 -> series [ 05/09/2026 02:00:00 +00:00 => 21.08223... 
05/09/2026 03:00:00 +00:00 -> series [ 05/09/2026 03:00:00 +00:00 => 20.93451... 
05/09/2026 04:00:00 +00:00 -> series [ 05/09/2026 04:00:00 +00:00 => 20.30615... 
05/09/2026 05:00:00 +00:00 -> series [ 05/09/2026 05:00:00 +00:00 => 21.34621... 
05/09/2026 06:00:00 +00:00 -> series [ 05/09/2026 06:00:00 +00:00 => 20.13623... 
05/09/2026 07:00:00 +00:00 -> series [ 05/09/2026 07:00:00 +00:00 => 20.96441... 
05/09/2026 08:00:00 +00:00 -> series [ 05/09/2026 08:00:00 +00:00 => 20.08544... 
05/09/2026 09:00:00 +00:00 -> series [ 05/09/2026 09:00:00 +00:00 => 20.42922... 
05/09/2026 10:00:00 +00:00 -> series [ 05/09/2026 10:00:00 +00:00 => 21.21474... 
05/09/2026 11:00:00 +00:00 -> series [ 05/09/2026 11:00:00 +00:00 => 21.57247... 
05/09/2026 12:00:00 +00:00 -> series [ 05/09/2026 12:00:00 +00:00 => 22.99716... 
05/09/2026 13:00:00 +00:00 -> series [ 05/09/2026 13:00:00 +00:00 => 24.40112... 
05/09/2026 14:00:00 +00:00 -> series [ 05/09/2026 14:00:00 +00:00 => 23.73580... 
...                        -> ...                                                
06/07/2026 09:00:00 +00:00 -> series [ 06/07/2026 09:00:00 +00:00 => 16.96082... 
06/07/2026 10:00:00 +00:00 -> series [ 06/07/2026 10:00:00 +00:00 => 16.30175... 
06/07/2026 11:00:00 +00:00 -> series [ 06/07/2026 11:00:00 +00:00 => 16.12107... 
06/07/2026 12:00:00 +00:00 -> series [ 06/07/2026 12:00:00 +00:00 => 15.75553... 
06/07/2026 13:00:00 +00:00 -> series [ 06/07/2026 13:00:00 +00:00 => 15.91518... 
06/07/2026 14:00:00 +00:00 -> series [ 06/07/2026 14:00:00 +00:00 => 15.46926... 
06/07/2026 15:00:00 +00:00 -> series [ 06/07/2026 15:00:00 +00:00 => 15.45300... 
06/07/2026 16:00:00 +00:00 -> series [ 06/07/2026 16:00:00 +00:00 => 14.27261... 
06/07/2026 17:00:00 +00:00 -> series [ 06/07/2026 17:00:00 +00:00 => 13.66800... 
06/07/2026 18:00:00 +00:00 -> series [ 06/07/2026 18:00:00 +00:00 => 12.83223... 
06/07/2026 19:00:00 +00:00 -> series [ 06/07/2026 19:00:00 +00:00 => 11.68655... 
06/07/2026 20:00:00 +00:00 -> series [ 06/07/2026 20:00:00 +00:00 => 11.54977... 
06/07/2026 21:00:00 +00:00 -> series [ 06/07/2026 21:00:00 +00:00 => 11.35719... 
06/07/2026 22:00:00 +00:00 -> series [ 06/07/2026 22:00:00 +00:00 => 10.93008... 
06/07/2026 23:00:00 +00:00 -> series [ 06/07/2026 23:00:00 +00:00 => 10.88049...
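Predicate windows can also be aggregated in a single step. As a sketch (assuming the `hourly` series above), `Series.windowWhileInto` pairs the condition with an aggregation; each window runs from an observation to the end of its day, so each key maps to the mean of that day's remaining observations:

```fsharp
// Aggregate each same-day window directly into its mean
hourly
|> Series.windowWhileInto
     (fun d1 d2 -> d1.Date = d2.Date)
     Stats.mean
```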

Chunking

Chunking creates non-overlapping groups (unlike overlapping windows):

// Generate per-second observations over 10 minutes
let hf = stock1 (TimeSpan(0, 0, 1)) 600 |> series
val hf: Series<DateTimeOffset,float> =
  
05/09/2026 00:00:00 +00:00 -> 19.996131751457803 
05/09/2026 00:00:01 +00:00 -> 19.993710893454576 
05/09/2026 00:00:02 +00:00 -> 20.01829565788732  
05/09/2026 00:00:03 +00:00 -> 20.016190409405294 
05/09/2026 00:00:04 +00:00 -> 20.00626673993418  
05/09/2026 00:00:05 +00:00 -> 20.02316975436215  
05/09/2026 00:00:06 +00:00 -> 20.003945787695436 
05/09/2026 00:00:07 +00:00 -> 20.01762873716283  
05/09/2026 00:00:08 +00:00 -> 20.00358465274902  
05/09/2026 00:00:09 +00:00 -> 20.009483994075556 
05/09/2026 00:00:10 +00:00 -> 20.022311176857812 
05/09/2026 00:00:11 +00:00 -> 20.028132892443132 
05/09/2026 00:00:12 +00:00 -> 20.04973275832668  
05/09/2026 00:00:13 +00:00 -> 20.06978567526692  
05/09/2026 00:00:14 +00:00 -> 20.06078193122878  
...                        -> ...                
05/09/2026 00:09:45 +00:00 -> 20.16045114780235  
05/09/2026 00:09:46 +00:00 -> 20.165283063793627 
05/09/2026 00:09:47 +00:00 -> 20.153068805371984 
05/09/2026 00:09:48 +00:00 -> 20.15348225510314  
05/09/2026 00:09:49 +00:00 -> 20.15835165255014  
05/09/2026 00:09:50 +00:00 -> 20.159557667970457 
05/09/2026 00:09:51 +00:00 -> 20.162019745850852 
05/09/2026 00:09:52 +00:00 -> 20.15060968604537  
05/09/2026 00:09:53 +00:00 -> 20.162484145461548 
05/09/2026 00:09:54 +00:00 -> 20.145898520345263 
05/09/2026 00:09:55 +00:00 -> 20.13480888388234  
05/09/2026 00:09:56 +00:00 -> 20.147579106369    
05/09/2026 00:09:57 +00:00 -> 20.14443009043118  
05/09/2026 00:09:58 +00:00 -> 20.16318320755618  
05/09/2026 00:09:59 +00:00 -> 20.145258702522874
// Create 10-second chunks, allowing a possibly incomplete chunk at the end
hf |> Series.chunkSize (10, Boundary.AtEnding) 
val it: Series<DateTimeOffset,Series<DateTimeOffset,float>> =
  
05/09/2026 00:00:00 +00:00 -> series [ 05/09/2026 00:00:00 +00:00 => 19.99613... 
05/09/2026 00:00:10 +00:00 -> series [ 05/09/2026 00:00:10 +00:00 => 20.02231... 
05/09/2026 00:00:20 +00:00 -> series [ 05/09/2026 00:00:20 +00:00 => 20.05968... 
05/09/2026 00:00:30 +00:00 -> series [ 05/09/2026 00:00:30 +00:00 => 20.02559... 
05/09/2026 00:00:40 +00:00 -> series [ 05/09/2026 00:00:40 +00:00 => 20.02668... 
05/09/2026 00:00:50 +00:00 -> series [ 05/09/2026 00:00:50 +00:00 => 20.05368... 
05/09/2026 00:01:00 +00:00 -> series [ 05/09/2026 00:01:00 +00:00 => 20.04482... 
05/09/2026 00:01:10 +00:00 -> series [ 05/09/2026 00:01:10 +00:00 => 20.00499... 
05/09/2026 00:01:20 +00:00 -> series [ 05/09/2026 00:01:20 +00:00 => 19.99957... 
05/09/2026 00:01:30 +00:00 -> series [ 05/09/2026 00:01:30 +00:00 => 19.96103... 
05/09/2026 00:01:40 +00:00 -> series [ 05/09/2026 00:01:40 +00:00 => 19.96884... 
05/09/2026 00:01:50 +00:00 -> series [ 05/09/2026 00:01:50 +00:00 => 20.00778... 
05/09/2026 00:02:00 +00:00 -> series [ 05/09/2026 00:02:00 +00:00 => 19.98829... 
05/09/2026 00:02:10 +00:00 -> series [ 05/09/2026 00:02:10 +00:00 => 19.96594... 
05/09/2026 00:02:20 +00:00 -> series [ 05/09/2026 00:02:20 +00:00 => 19.89602... 
...                        -> ...                                                
05/09/2026 00:07:30 +00:00 -> series [ 05/09/2026 00:07:30 +00:00 => 20.19948... 
05/09/2026 00:07:40 +00:00 -> series [ 05/09/2026 00:07:40 +00:00 => 20.15853... 
05/09/2026 00:07:50 +00:00 -> series [ 05/09/2026 00:07:50 +00:00 => 20.11420... 
05/09/2026 00:08:00 +00:00 -> series [ 05/09/2026 00:08:00 +00:00 => 20.08796... 
05/09/2026 00:08:10 +00:00 -> series [ 05/09/2026 00:08:10 +00:00 => 20.02012... 
05/09/2026 00:08:20 +00:00 -> series [ 05/09/2026 00:08:20 +00:00 => 20.07362... 
05/09/2026 00:08:30 +00:00 -> series [ 05/09/2026 00:08:30 +00:00 => 20.06151... 
05/09/2026 00:08:40 +00:00 -> series [ 05/09/2026 00:08:40 +00:00 => 20.09667... 
05/09/2026 00:08:50 +00:00 -> series [ 05/09/2026 00:08:50 +00:00 => 20.12169... 
05/09/2026 00:09:00 +00:00 -> series [ 05/09/2026 00:09:00 +00:00 => 20.12236... 
05/09/2026 00:09:10 +00:00 -> series [ 05/09/2026 00:09:10 +00:00 => 20.16369... 
05/09/2026 00:09:20 +00:00 -> series [ 05/09/2026 00:09:20 +00:00 => 20.20980... 
05/09/2026 00:09:30 +00:00 -> series [ 05/09/2026 00:09:30 +00:00 => 20.21533... 
05/09/2026 00:09:40 +00:00 -> series [ 05/09/2026 00:09:40 +00:00 => 20.15532... 
05/09/2026 00:09:50 +00:00 -> series [ 05/09/2026 00:09:50 +00:00 => 20.15955...
// Create 10-second chunks and take the first observation of each (downsampling)
hf |> Series.chunkDistInto (TimeSpan(0, 0, 10)) Series.firstValue
val it: Series<DateTimeOffset,float> =
  
05/09/2026 00:00:00 +00:00 -> 19.996131751457803 
05/09/2026 00:00:10 +00:00 -> 20.022311176857812 
05/09/2026 00:00:20 +00:00 -> 20.059686941436766 
05/09/2026 00:00:30 +00:00 -> 20.02559178179729  
05/09/2026 00:00:40 +00:00 -> 20.02668692627948  
05/09/2026 00:00:50 +00:00 -> 20.05368938339191  
05/09/2026 00:01:00 +00:00 -> 20.044829024898025 
05/09/2026 00:01:10 +00:00 -> 20.004996757436338 
05/09/2026 00:01:20 +00:00 -> 19.99957043464383  
05/09/2026 00:01:30 +00:00 -> 19.961039960188018 
05/09/2026 00:01:40 +00:00 -> 19.968849122406482 
05/09/2026 00:01:50 +00:00 -> 20.007784628281033 
05/09/2026 00:02:00 +00:00 -> 19.98829121741114  
05/09/2026 00:02:10 +00:00 -> 19.965949451321062 
05/09/2026 00:02:20 +00:00 -> 19.896028233111405 
...                        -> ...                
05/09/2026 00:07:30 +00:00 -> 20.199480997431245 
05/09/2026 00:07:40 +00:00 -> 20.158538419225742 
05/09/2026 00:07:50 +00:00 -> 20.11420707050759  
05/09/2026 00:08:00 +00:00 -> 20.08796000817225  
05/09/2026 00:08:10 +00:00 -> 20.020124487888907 
05/09/2026 00:08:20 +00:00 -> 20.07362791974513  
05/09/2026 00:08:30 +00:00 -> 20.061518901217816 
05/09/2026 00:08:40 +00:00 -> 20.09667229172206  
05/09/2026 00:08:50 +00:00 -> 20.121699006002025 
05/09/2026 00:09:00 +00:00 -> 20.122366578651494 
05/09/2026 00:09:10 +00:00 -> 20.163696085816166 
05/09/2026 00:09:20 +00:00 -> 20.20980887957206  
05/09/2026 00:09:30 +00:00 -> 20.215331121038176 
05/09/2026 00:09:40 +00:00 -> 20.155326188007578 
05/09/2026 00:09:50 +00:00 -> 20.159557667970457
// Create chunks where hh:mm component is the same
hf |> Series.chunkWhile (fun k1 k2 -> 
  (k1.Hour, k1.Minute) = (k2.Hour, k2.Minute))
val it: Series<DateTimeOffset,Series<DateTimeOffset,float>> =
  
05/09/2026 00:00:00 +00:00 -> series [ 05/09/2026 00:00:00 +00:00 => 19.99613... 
05/09/2026 00:01:00 +00:00 -> series [ 05/09/2026 00:01:00 +00:00 => 20.04482... 
05/09/2026 00:02:00 +00:00 -> series [ 05/09/2026 00:02:00 +00:00 => 19.98829... 
05/09/2026 00:03:00 +00:00 -> series [ 05/09/2026 00:03:00 +00:00 => 19.92228... 
05/09/2026 00:04:00 +00:00 -> series [ 05/09/2026 00:04:00 +00:00 => 19.78909... 
05/09/2026 00:05:00 +00:00 -> series [ 05/09/2026 00:05:00 +00:00 => 19.96644... 
05/09/2026 00:06:00 +00:00 -> series [ 05/09/2026 00:06:00 +00:00 => 19.97280... 
05/09/2026 00:07:00 +00:00 -> series [ 05/09/2026 00:07:00 +00:00 => 20.16527... 
05/09/2026 00:08:00 +00:00 -> series [ 05/09/2026 00:08:00 +00:00 => 20.08796... 
05/09/2026 00:09:00 +00:00 -> series [ 05/09/2026 00:09:00 +00:00 => 20.12236...
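For actual downsampling, each chunk can be reduced in one step. A sketch using `Series.chunkWhileInto` on the `hf` series above, producing one value per minute:

```fsharp
// One value per hh:mm chunk: the mean of the ~60 per-second observations
hf
|> Series.chunkWhileInto
     (fun k1 k2 -> (k1.Hour, k1.Minute) = (k2.Hour, k2.Minute))
     Stats.mean
```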

Pairwise

Build pairs of consecutive values — useful for computing returns or differences:

// Create a series of pairs from earlier 'hf' input
hf |> Series.pairwise 
val it: Series<DateTimeOffset,(float * float)> =
  
05/09/2026 00:00:01 +00:00 -> (19.996131751457803, 19.993710893454576) 
05/09/2026 00:00:02 +00:00 -> (19.993710893454576, 20.01829565788732)  
05/09/2026 00:00:03 +00:00 -> (20.01829565788732, 20.016190409405294)  
05/09/2026 00:00:04 +00:00 -> (20.016190409405294, 20.00626673993418)  
05/09/2026 00:00:05 +00:00 -> (20.00626673993418, 20.02316975436215)   
05/09/2026 00:00:06 +00:00 -> (20.02316975436215, 20.003945787695436)  
05/09/2026 00:00:07 +00:00 -> (20.003945787695436, 20.01762873716283)  
05/09/2026 00:00:08 +00:00 -> (20.01762873716283, 20.00358465274902)   
05/09/2026 00:00:09 +00:00 -> (20.00358465274902, 20.009483994075556)  
05/09/2026 00:00:10 +00:00 -> (20.009483994075556, 20.022311176857812) 
05/09/2026 00:00:11 +00:00 -> (20.022311176857812, 20.028132892443132) 
05/09/2026 00:00:12 +00:00 -> (20.028132892443132, 20.04973275832668)  
05/09/2026 00:00:13 +00:00 -> (20.04973275832668, 20.06978567526692)   
05/09/2026 00:00:14 +00:00 -> (20.06978567526692, 20.06078193122878)   
05/09/2026 00:00:15 +00:00 -> (20.06078193122878, 20.086710170863764)  
...                        -> ...                                      
05/09/2026 00:09:45 +00:00 -> (20.178540507112952, 20.16045114780235)  
05/09/2026 00:09:46 +00:00 -> (20.16045114780235, 20.165283063793627)  
05/09/2026 00:09:47 +00:00 -> (20.165283063793627, 20.153068805371984) 
05/09/2026 00:09:48 +00:00 -> (20.153068805371984, 20.15348225510314)  
05/09/2026 00:09:49 +00:00 -> (20.15348225510314, 20.15835165255014)   
05/09/2026 00:09:50 +00:00 -> (20.15835165255014, 20.159557667970457)  
05/09/2026 00:09:51 +00:00 -> (20.159557667970457, 20.162019745850852) 
05/09/2026 00:09:52 +00:00 -> (20.162019745850852, 20.15060968604537)  
05/09/2026 00:09:53 +00:00 -> (20.15060968604537, 20.162484145461548)  
05/09/2026 00:09:54 +00:00 -> (20.162484145461548, 20.145898520345263) 
05/09/2026 00:09:55 +00:00 -> (20.145898520345263, 20.13480888388234)  
05/09/2026 00:09:56 +00:00 -> (20.13480888388234, 20.147579106369)     
05/09/2026 00:09:57 +00:00 -> (20.147579106369, 20.14443009043118)     
05/09/2026 00:09:58 +00:00 -> (20.14443009043118, 20.16318320755618)   
05/09/2026 00:09:59 +00:00 -> (20.16318320755618, 20.145258702522874)
// Calculate differences between the current and previous values
hf |> Series.pairwiseWith (fun k (v1, v2) -> v2 - v1)
val it: Series<DateTimeOffset,float> =
  
05/09/2026 00:00:01 +00:00 -> -0.002420858003226556  
05/09/2026 00:00:02 +00:00 -> 0.024584764432745487   
05/09/2026 00:00:03 +00:00 -> -0.002105248482028088  
05/09/2026 00:00:04 +00:00 -> -0.009923669471113783  
05/09/2026 00:00:05 +00:00 -> 0.01690301442797093    
05/09/2026 00:00:06 +00:00 -> -0.019223966666714887  
05/09/2026 00:00:07 +00:00 -> 0.013682949467394678   
05/09/2026 00:00:08 +00:00 -> -0.01404408441381122   
05/09/2026 00:00:09 +00:00 -> 0.005899341326536245   
05/09/2026 00:00:10 +00:00 -> 0.012827182782256585   
05/09/2026 00:00:11 +00:00 -> 0.005821715585319964   
05/09/2026 00:00:12 +00:00 -> 0.02159986588354812    
05/09/2026 00:00:13 +00:00 -> 0.020052916940240806   
05/09/2026 00:00:14 +00:00 -> -0.009003744038139416  
05/09/2026 00:00:15 +00:00 -> 0.025928239634982475   
...                        -> ...                    
05/09/2026 00:09:45 +00:00 -> -0.01808935931060418   
05/09/2026 00:09:46 +00:00 -> 0.004831915991278635   
05/09/2026 00:09:47 +00:00 -> -0.012214258421643365  
05/09/2026 00:09:48 +00:00 -> 0.00041344973115542416 
05/09/2026 00:09:49 +00:00 -> 0.004869397447002655   
05/09/2026 00:09:50 +00:00 -> 0.0012060154203155093  
05/09/2026 00:09:51 +00:00 -> 0.002462077880394986   
05/09/2026 00:09:52 +00:00 -> -0.011410059805481154  
05/09/2026 00:09:53 +00:00 -> 0.011874459416176819   
05/09/2026 00:09:54 +00:00 -> -0.01658562511628503   
05/09/2026 00:09:55 +00:00 -> -0.01108963646292338   
05/09/2026 00:09:56 +00:00 -> 0.012770222486661709   
05/09/2026 00:09:57 +00:00 -> -0.0031490159378222415 
05/09/2026 00:09:58 +00:00 -> 0.018753117125001495   
05/09/2026 00:09:59 +00:00 -> -0.01792450503330656
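A common use of pairwise operations is computing returns. As a sketch (assuming the `hf` series above), per-step log returns can be derived with `Series.pairwiseWith` and then summed over a rolling window, since log returns are additive:

```fsharp
// Log return between each pair of consecutive prices
let logReturns =
  hf |> Series.pairwiseWith (fun _ (prev, curr) -> log (curr / prev))

// Log returns add up: a moving sum gives the return over 10 steps
logReturns |> Stats.movingSum 10
```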

Sampling and resampling

Lookup

Series.lookupAll retrieves values for many keys at once with flexible matching:

// Generate a bit less than 24 hours of data at 13.7-second intervals
let mf = stock1 (TimeSpan.FromSeconds(13.7)) 6300 |> series
// Generate keys for all minutes in 24 hours
let keys = [ for m in 0.0 .. 24.0*60.0-1.0 -> today.AddMinutes(m) ]
val mf: Series<DateTimeOffset,float> =
  
05/09/2026 00:00:00 +00:00 -> 19.98564528786568  
05/09/2026 00:00:13 +00:00 -> 19.976650318134233 
05/09/2026 00:00:27 +00:00 -> 20.067679748788635 
05/09/2026 00:00:41 +00:00 -> 20.059828514271526 
05/09/2026 00:00:54 +00:00 -> 20.023001311742895 
05/09/2026 00:01:08 +00:00 -> 20.085648215743294 
05/09/2026 00:01:22 +00:00 -> 20.014323284965258 
05/09/2026 00:01:35 +00:00 -> 20.065000923520888 
05/09/2026 00:01:49 +00:00 -> 20.012904413701197 
05/09/2026 00:02:03 +00:00 -> 20.03471801785147  
05/09/2026 00:02:17 +00:00 -> 20.082256072649454 
05/09/2026 00:02:30 +00:00 -> 20.103836341839862 
05/09/2026 00:02:44 +00:00 -> 20.184163084090407 
05/09/2026 00:02:58 +00:00 -> 20.258943357028233 
05/09/2026 00:03:11 +00:00 -> 20.225282426114138 
...                        -> ...                
05/09/2026 23:55:04 +00:00 -> 20.317409339147435 
05/09/2026 23:55:18 +00:00 -> 20.26394378374931  
05/09/2026 23:55:31 +00:00 -> 20.24478140858524  
05/09/2026 23:55:45 +00:00 -> 20.22866721746447  
05/09/2026 23:55:59 +00:00 -> 20.170339937457236 
05/09/2026 23:56:13 +00:00 -> 20.222365615892304 
05/09/2026 23:56:26 +00:00 -> 20.21378296520048  
05/09/2026 23:56:40 +00:00 -> 20.30626117976601  
05/09/2026 23:56:54 +00:00 -> 20.29395811597593  
05/09/2026 23:57:07 +00:00 -> 20.359520715709905 
05/09/2026 23:57:21 +00:00 -> 20.407002731611453 
05/09/2026 23:57:35 +00:00 -> 20.385331709925698 
05/09/2026 23:57:48 +00:00 -> 20.422720254253925 
05/09/2026 23:58:02 +00:00 -> 20.520811618985814 
05/09/2026 23:58:16 +00:00 -> 20.471816821625165 

val keys: DateTimeOffset list =
  [05/09/2026 00:00:00 +00:00; 05/09/2026 00:01:00 +00:00;
   05/09/2026 00:02:00 +00:00; 05/09/2026 00:03:00 +00:00;
   05/09/2026 00:04:00 +00:00; 05/09/2026 00:05:00 +00:00;
   05/09/2026 00:06:00 +00:00; 05/09/2026 00:07:00 +00:00;
   05/09/2026 00:08:00 +00:00; 05/09/2026 00:09:00 +00:00;
   05/09/2026 00:10:00 +00:00; 05/09/2026 00:11:00 +00:00;
   05/09/2026 00:12:00 +00:00; 05/09/2026 00:13:00 +00:00;
   05/09/2026 00:14:00 +00:00; 05/09/2026 00:15:00 +00:00;
   05/09/2026 00:16:00 +00:00; 05/09/2026 00:17:00 +00:00;
   05/09/2026 00:18:00 +00:00; 05/09/2026 00:19:00 +00:00;
   05/09/2026 00:20:00 +00:00; 05/09/2026 00:21:00 +00:00;
   05/09/2026 00:22:00 +00:00; 05/09/2026 00:23:00 +00:00;
   05/09/2026 00:24:00 +00:00; 05/09/2026 00:25:00 +00:00;
   05/09/2026 00:26:00 +00:00; 05/09/2026 00:27:00 +00:00;
   05/09/2026 00:28:00 +00:00; 05/09/2026 00:29:00 +00:00;
   05/09/2026 00:30:00 +00:00; 05/09/2026 00:31:00 +00:00;
   05/09/2026 00:32:00 +00:00; 05/09/2026 00:33:00 +00:00;
   05/09/2026 00:34:00 +00:00; 05/09/2026 00:35:00 +00:00;
   05/09/2026 00:36:00 +00:00; 05/09/2026 00:37:00 +00:00;
   05/09/2026 00:38:00 +00:00; 05/09/2026 00:39:00 +00:00;
   05/09/2026 00:40:00 +00:00; 05/09/2026 00:41:00 +00:00;
   05/09/2026 00:42:00 +00:00; 05/09/2026 00:43:00 +00:00;
   05/09/2026 00:44:00 +00:00; 05/09/2026 00:45:00 +00:00;
   05/09/2026 00:46:00 +00:00; 05/09/2026 00:47:00 +00:00;
   05/09/2026 00:48:00 +00:00; 05/09/2026 00:49:00 +00:00;
   05/09/2026 00:50:00 +00:00; 05/09/2026 00:51:00 +00:00;
   05/09/2026 00:52:00 +00:00; 05/09/2026 00:53:00 +00:00;
   05/09/2026 00:54:00 +00:00; 05/09/2026 00:55:00 +00:00;
   05/09/2026 00:56:00 +00:00; 05/09/2026 00:57:00 +00:00;
   05/09/2026 00:58:00 +00:00; 05/09/2026 00:59:00 +00:00;
   05/09/2026 01:00:00 +00:00; 05/09/2026 01:01:00 +00:00;
   05/09/2026 01:02:00 +00:00; 05/09/2026 01:03:00 +00:00;
   05/09/2026 01:04:00 +00:00; 05/09/2026 01:05:00 +00:00;
   05/09/2026 01:06:00 +00:00; 05/09/2026 01:07:00 +00:00;
   05/09/2026 01:08:00 +00:00; 05/09/2026 01:09:00 +00:00;
   05/09/2026 01:10:00 +00:00; 05/09/2026 01:11:00 +00:00;
   05/09/2026 01:12:00 +00:00; 05/09/2026 01:13:00 +00:00;
   05/09/2026 01:14:00 +00:00; 05/09/2026 01:15:00 +00:00;
   05/09/2026 01:16:00 +00:00; 05/09/2026 01:17:00 +00:00;
   05/09/2026 01:18:00 +00:00; 05/09/2026 01:19:00 +00:00;
   05/09/2026 01:20:00 +00:00; 05/09/2026 01:21:00 +00:00;
   05/09/2026 01:22:00 +00:00; 05/09/2026 01:23:00 +00:00;
   05/09/2026 01:24:00 +00:00; 05/09/2026 01:25:00 +00:00;
   05/09/2026 01:26:00 +00:00; 05/09/2026 01:27:00 +00:00;
   05/09/2026 01:28:00 +00:00; 05/09/2026 01:29:00 +00:00;
   05/09/2026 01:30:00 +00:00; 05/09/2026 01:31:00 +00:00;
   05/09/2026 01:32:00 +00:00; 05/09/2026 01:33:00 +00:00;
   05/09/2026 01:34:00 +00:00; 05/09/2026 01:35:00 +00:00;
   05/09/2026 01:36:00 +00:00; 05/09/2026 01:37:00 +00:00;
   05/09/2026 01:38:00 +00:00; 05/09/2026 01:39:00 +00:00; ...]
// Find value for a given key, or the nearest greater key with a value
mf |> Series.lookupAll keys Lookup.ExactOrGreater
val it: Series<DateTimeOffset,float> =
  
05/09/2026 00:00:00 +00:00 -> 19.98564528786568  
05/09/2026 00:01:00 +00:00 -> 20.085648215743294 
05/09/2026 00:02:00 +00:00 -> 20.03471801785147  
05/09/2026 00:03:00 +00:00 -> 20.225282426114138 
05/09/2026 00:04:00 +00:00 -> 20.21475198885141  
05/09/2026 00:05:00 +00:00 -> 20.209706817787808 
05/09/2026 00:06:00 +00:00 -> 20.208630861687492 
05/09/2026 00:07:00 +00:00 -> 20.112086263831518 
05/09/2026 00:08:00 +00:00 -> 20.144808906492866 
05/09/2026 00:09:00 +00:00 -> 20.097277504627783 
05/09/2026 00:10:00 +00:00 -> 20.160582927648843 
05/09/2026 00:11:00 +00:00 -> 20.165159777523122 
05/09/2026 00:12:00 +00:00 -> 20.220511686589532 
05/09/2026 00:13:00 +00:00 -> 20.0907073279108   
05/09/2026 00:14:00 +00:00 -> 20.19949449757874  
...                        -> ...                
05/09/2026 23:45:00 +00:00 -> 20.32078854912474  
05/09/2026 23:46:00 +00:00 -> 20.15886877591529  
05/09/2026 23:47:00 +00:00 -> 20.146829374217738 
05/09/2026 23:48:00 +00:00 -> 20.131902031044532 
05/09/2026 23:49:00 +00:00 -> 20.28215509897709  
05/09/2026 23:50:00 +00:00 -> 20.27999261722754  
05/09/2026 23:51:00 +00:00 -> 20.140280963639285 
05/09/2026 23:52:00 +00:00 -> 20.10510785745363  
05/09/2026 23:53:00 +00:00 -> 20.135779449328425 
05/09/2026 23:54:00 +00:00 -> 20.17130497980362  
05/09/2026 23:55:00 +00:00 -> 20.317409339147435 
05/09/2026 23:56:00 +00:00 -> 20.222365615892304 
05/09/2026 23:57:00 +00:00 -> 20.359520715709905 
05/09/2026 23:58:00 +00:00 -> 20.520811618985814 
05/09/2026 23:59:00 +00:00 -> <missing>
// Find value for a given key, or the nearest smaller key with a value
mf |> Series.lookupAll keys Lookup.ExactOrSmaller
val it: Series<DateTimeOffset,float> =
  
05/09/2026 00:00:00 +00:00 -> 19.98564528786568  
05/09/2026 00:01:00 +00:00 -> 20.023001311742895 
05/09/2026 00:02:00 +00:00 -> 20.012904413701197 
05/09/2026 00:03:00 +00:00 -> 20.258943357028233 
05/09/2026 00:04:00 +00:00 -> 20.265772657152613 
05/09/2026 00:05:00 +00:00 -> 20.227444137149117 
05/09/2026 00:06:00 +00:00 -> 20.23252991260147  
05/09/2026 00:07:00 +00:00 -> 20.093619257895458 
05/09/2026 00:08:00 +00:00 -> 20.18907669023861  
05/09/2026 00:09:00 +00:00 -> 20.071276441670705 
05/09/2026 00:10:00 +00:00 -> 20.19197226536892  
05/09/2026 00:11:00 +00:00 -> 20.192688292086487 
05/09/2026 00:12:00 +00:00 -> 20.141036527063264 
05/09/2026 00:13:00 +00:00 -> 20.11318815044474  
05/09/2026 00:14:00 +00:00 -> 20.129900900272776 
...                        -> ...                
05/09/2026 23:45:00 +00:00 -> 20.334928293450396 
05/09/2026 23:46:00 +00:00 -> 20.164359155523385 
05/09/2026 23:47:00 +00:00 -> 20.225270007381447 
05/09/2026 23:48:00 +00:00 -> 20.114904202586597 
05/09/2026 23:49:00 +00:00 -> 20.287384387089595 
05/09/2026 23:50:00 +00:00 -> 20.25925165732347  
05/09/2026 23:51:00 +00:00 -> 20.135516558691215 
05/09/2026 23:52:00 +00:00 -> 20.098346771096097 
05/09/2026 23:53:00 +00:00 -> 20.119900080071286 
05/09/2026 23:54:00 +00:00 -> 20.151293120558336 
05/09/2026 23:55:00 +00:00 -> 20.311684129310493 
05/09/2026 23:56:00 +00:00 -> 20.170339937457236 
05/09/2026 23:57:00 +00:00 -> 20.29395811597593  
05/09/2026 23:58:00 +00:00 -> 20.422720254253925 
05/09/2026 23:59:00 +00:00 -> 20.471816821625165
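
The matching rules are easiest to see on a tiny hand-built series (a minimal sketch; tiny is a hypothetical example, not part of the data above):

```fsharp
open Deedle

// Ordered series with gaps between the keys
let tiny = series [ 1 => 10.0; 3 => 30.0; 5 => 50.0 ]

// ExactOrGreater: key 2 maps to the value at key 3;
// key 6 has no greater key, so it becomes missing
tiny |> Series.lookupAll [ 2; 4; 6 ] Lookup.ExactOrGreater

// ExactOrSmaller: key 2 maps back to the value at key 1
tiny |> Series.lookupAll [ 2; 4; 6 ] Lookup.ExactOrSmaller
```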

Resampling

Resample by collecting values between specified keys:

// For each key, collect the values with greater keys, up to the next key in the list
mf |> Series.resample keys Direction.Forward
val it: Series<DateTimeOffset,Series<DateTimeOffset,float>> =
  
05/09/2026 00:00:00 +00:00 -> series [ 05/09/2026 00:00:00 +00:00 => 19.98564... 
05/09/2026 00:01:00 +00:00 -> series [ 05/09/2026 00:01:08 +00:00 => 20.08564... 
05/09/2026 00:02:00 +00:00 -> series [ 05/09/2026 00:02:03 +00:00 => 20.03471... 
05/09/2026 00:03:00 +00:00 -> series [ 05/09/2026 00:03:11 +00:00 => 20.22528... 
05/09/2026 00:04:00 +00:00 -> series [ 05/09/2026 00:04:06 +00:00 => 20.21475... 
05/09/2026 00:05:00 +00:00 -> series [ 05/09/2026 00:05:01 +00:00 => 20.20970... 
05/09/2026 00:06:00 +00:00 -> series [ 05/09/2026 00:06:09 +00:00 => 20.20863... 
05/09/2026 00:07:00 +00:00 -> series [ 05/09/2026 00:07:04 +00:00 => 20.11208... 
05/09/2026 00:08:00 +00:00 -> series [ 05/09/2026 00:08:13 +00:00 => 20.14480... 
05/09/2026 00:09:00 +00:00 -> series [ 05/09/2026 00:09:08 +00:00 => 20.09727... 
05/09/2026 00:10:00 +00:00 -> series [ 05/09/2026 00:10:02 +00:00 => 20.16058... 
05/09/2026 00:11:00 +00:00 -> series [ 05/09/2026 00:11:11 +00:00 => 20.16515... 
05/09/2026 00:12:00 +00:00 -> series [ 05/09/2026 00:12:06 +00:00 => 20.22051... 
05/09/2026 00:13:00 +00:00 -> series [ 05/09/2026 00:13:00 +00:00 => 20.09070... 
05/09/2026 00:14:00 +00:00 -> series [ 05/09/2026 00:14:09 +00:00 => 20.19949... 
...                        -> ...                                                
05/09/2026 23:45:00 +00:00 -> series [ 05/09/2026 23:45:01 +00:00 => 20.32078... 
05/09/2026 23:46:00 +00:00 -> series [ 05/09/2026 23:46:10 +00:00 => 20.15886... 
05/09/2026 23:47:00 +00:00 -> series [ 05/09/2026 23:47:05 +00:00 => 20.14682... 
05/09/2026 23:48:00 +00:00 -> series [ 05/09/2026 23:48:13 +00:00 => 20.13190... 
05/09/2026 23:49:00 +00:00 -> series [ 05/09/2026 23:49:08 +00:00 => 20.28215... 
05/09/2026 23:50:00 +00:00 -> series [ 05/09/2026 23:50:03 +00:00 => 20.27999... 
05/09/2026 23:51:00 +00:00 -> series [ 05/09/2026 23:51:11 +00:00 => 20.14028... 
05/09/2026 23:52:00 +00:00 -> series [ 05/09/2026 23:52:06 +00:00 => 20.10510... 
05/09/2026 23:53:00 +00:00 -> series [ 05/09/2026 23:53:01 +00:00 => 20.13577... 
05/09/2026 23:54:00 +00:00 -> series [ 05/09/2026 23:54:09 +00:00 => 20.17130... 
05/09/2026 23:55:00 +00:00 -> series [ 05/09/2026 23:55:04 +00:00 => 20.31740... 
05/09/2026 23:56:00 +00:00 -> series [ 05/09/2026 23:56:13 +00:00 => 20.22236... 
05/09/2026 23:57:00 +00:00 -> series [ 05/09/2026 23:57:07 +00:00 => 20.35952... 
05/09/2026 23:58:00 +00:00 -> series [ 05/09/2026 23:58:02 +00:00 => 20.52081... 
05/09/2026 23:59:00 +00:00 -> series [ ]
// Aggregate each chunk of preceding values using mean
mf |> Series.resampleInto keys Direction.Backward 
  (fun k s -> Stats.mean s)
val it: Series<DateTimeOffset,float> =
  
05/09/2026 00:00:00 +00:00 -> 19.98564528786568  
05/09/2026 00:01:00 +00:00 -> 20.031789973234325 
05/09/2026 00:02:00 +00:00 -> 20.044469209482656 
05/09/2026 00:03:00 +00:00 -> 20.132783374691886 
05/09/2026 00:04:00 +00:00 -> 20.268534687196485 
05/09/2026 00:05:00 +00:00 -> 20.215439362239422 
05/09/2026 00:06:00 +00:00 -> 20.21332933592882  
05/09/2026 00:07:00 +00:00 -> 20.16895022011858  
05/09/2026 00:08:00 +00:00 -> 20.16006277338978  
05/09/2026 00:09:00 +00:00 -> 20.12252983160636  
05/09/2026 00:10:00 +00:00 -> 20.148232646833637 
05/09/2026 00:11:00 +00:00 -> 20.174057181464466 
05/09/2026 00:12:00 +00:00 -> 20.169753367272047 
05/09/2026 00:13:00 +00:00 -> 20.177284222982284 
05/09/2026 00:14:00 +00:00 -> 20.132530185137433 
...                        -> ...                
05/09/2026 23:45:00 +00:00 -> 20.347790342795214 
05/09/2026 23:46:00 +00:00 -> 20.231478050784688 
05/09/2026 23:47:00 +00:00 -> 20.19456815642126  
05/09/2026 23:48:00 +00:00 -> 20.1591225131781   
05/09/2026 23:49:00 +00:00 -> 20.18048315783345  
05/09/2026 23:50:00 +00:00 -> 20.295593243777184 
05/09/2026 23:51:00 +00:00 -> 20.176964990881594 
05/09/2026 23:52:00 +00:00 -> 20.11659018015395  
05/09/2026 23:53:00 +00:00 -> 20.103752607457363 
05/09/2026 23:54:00 +00:00 -> 20.13999877351398  
05/09/2026 23:55:00 +00:00 -> 20.241804686427074 
05/09/2026 23:56:00 +00:00 -> 20.24502833728074  
05/09/2026 23:57:00 +00:00 -> 20.25909196920868  
05/09/2026 23:58:00 +00:00 -> 20.393643852875243 
05/09/2026 23:59:00 +00:00 -> 20.49631422030549
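
Any aggregation can be used in place of the mean. For example, counting the observations that fall into each chunk (a sketch reusing mf and keys from above):

```fsharp
// Number of raw observations attributed to each minute
mf |> Series.resampleInto keys Direction.Backward
  (fun _ chunk -> Series.countValues chunk)
```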

Resample by projecting existing keys (e.g. group by date):

// Generate 2.5 months of data in 1.7 hour offsets
let ds = stock1 (TimeSpan.FromHours(1.7)) 1000 |> series
val ds: Series<DateTimeOffset,float> =
  
05/09/2026 00:00:00 +00:00 -> 19.675404283433707 
05/09/2026 01:42:00 +00:00 -> 19.465953557185557 
05/09/2026 03:24:00 +00:00 -> 21.40385886213388  
05/09/2026 05:06:00 +00:00 -> 21.20236780744744  
05/09/2026 06:48:00 +00:00 -> 20.370489644073643 
05/09/2026 08:30:00 +00:00 -> 21.735012156905242 
05/09/2026 10:12:00 +00:00 -> 20.136813727232624 
05/09/2026 11:54:00 +00:00 -> 21.217180157875    
05/09/2026 13:36:00 +00:00 -> 20.058958976538722 
05/09/2026 15:18:00 +00:00 -> 20.501806792583377 
05/09/2026 17:00:00 +00:00 -> 21.529331031416294 
05/09/2026 18:42:00 +00:00 -> 21.99749520939284  
05/09/2026 20:24:00 +00:00 -> 23.90343230485794  
05/09/2026 22:06:00 +00:00 -> 25.81601614828132  
05/09/2026 23:48:00 +00:00 -> 24.89484107561622  
...                        -> ...                
07/17/2026 18:30:00 +00:00 -> 3.641530457272595  
07/17/2026 20:12:00 +00:00 -> 3.8917833903403025 
07/17/2026 21:54:00 +00:00 -> 3.97654708108731   
07/17/2026 23:36:00 +00:00 -> 3.85169092816856   
07/18/2026 01:18:00 +00:00 -> 3.722356929579045  
07/18/2026 03:00:00 +00:00 -> 3.5985684265763895 
07/18/2026 04:42:00 +00:00 -> 3.6407303637780433 
07/18/2026 06:24:00 +00:00 -> 3.8519325661460124 
07/18/2026 08:06:00 +00:00 -> 3.5604275340570846 
07/18/2026 09:48:00 +00:00 -> 3.775406122831678  
07/18/2026 11:30:00 +00:00 -> 3.7663152368678405 
07/18/2026 13:12:00 +00:00 -> 3.9752657144885246 
07/18/2026 14:54:00 +00:00 -> 3.973455066726479  
07/18/2026 16:36:00 +00:00 -> 4.272491398316074  
07/18/2026 18:18:00 +00:00 -> 4.357527721518215
// Resample by day (project each key to its date)
ds |> Series.resampleEquiv (fun d -> d.Date)
val it: Series<DateTime,Series<DateTimeOffset,float>> =
  
05/09/2026 -> series [ 05/09/2026 00:00:00 +00:00 => 19.67540... 
05/10/2026 -> series [ 05/10/2026 01:30:00 +00:00 => 27.50798... 
05/11/2026 -> series [ 05/11/2026 01:18:00 +00:00 => 23.06359... 
05/12/2026 -> series [ 05/12/2026 01:06:00 +00:00 => 23.22849... 
05/13/2026 -> series [ 05/13/2026 00:54:00 +00:00 => 20.54233... 
05/14/2026 -> series [ 05/14/2026 00:42:00 +00:00 => 19.37080... 
05/15/2026 -> series [ 05/15/2026 00:30:00 +00:00 => 18.59819... 
05/16/2026 -> series [ 05/16/2026 00:18:00 +00:00 => 15.33970... 
05/17/2026 -> series [ 05/17/2026 00:06:00 +00:00 => 18.06696... 
05/18/2026 -> series [ 05/18/2026 01:36:00 +00:00 => 15.18133... 
05/19/2026 -> series [ 05/19/2026 01:24:00 +00:00 => 12.65494... 
05/20/2026 -> series [ 05/20/2026 01:12:00 +00:00 => 11.35456... 
05/21/2026 -> series [ 05/21/2026 01:00:00 +00:00 => 10.59699... 
05/22/2026 -> series [ 05/22/2026 00:48:00 +00:00 => 10.50841... 
05/23/2026 -> series [ 05/23/2026 00:36:00 +00:00 => 7.848639... 
...        -> ...                                                
07/04/2026 -> series [ 07/04/2026 00:42:00 +00:00 => 5.169623... 
07/05/2026 -> series [ 07/05/2026 00:30:00 +00:00 => 5.535700... 
07/06/2026 -> series [ 07/06/2026 00:18:00 +00:00 => 5.950062... 
07/07/2026 -> series [ 07/07/2026 00:06:00 +00:00 => 7.799024... 
07/08/2026 -> series [ 07/08/2026 01:36:00 +00:00 => 6.763316... 
07/09/2026 -> series [ 07/09/2026 01:24:00 +00:00 => 5.892557... 
07/10/2026 -> series [ 07/10/2026 01:12:00 +00:00 => 4.925169... 
07/11/2026 -> series [ 07/11/2026 01:00:00 +00:00 => 5.391912... 
07/12/2026 -> series [ 07/12/2026 00:48:00 +00:00 => 5.143872... 
07/13/2026 -> series [ 07/13/2026 00:36:00 +00:00 => 4.581436... 
07/14/2026 -> series [ 07/14/2026 00:24:00 +00:00 => 4.711421... 
07/15/2026 -> series [ 07/15/2026 00:12:00 +00:00 => 3.970841... 
07/16/2026 -> series [ 07/16/2026 00:00:00 +00:00 => 5.426565... 
07/17/2026 -> series [ 07/17/2026 01:30:00 +00:00 => 4.762776... 
07/18/2026 -> series [ 07/18/2026 01:18:00 +00:00 => 3.722356...
// The same operation using the C#-friendly member syntax
ds.ResampleEquivalence(fun d -> d.Date)
val it: Series<DateTime,Series<DateTimeOffset,float>> =
  
05/09/2026 -> series [ 05/09/2026 00:00:00 +00:00 => 19.67540... 
05/10/2026 -> series [ 05/10/2026 01:30:00 +00:00 => 27.50798... 
05/11/2026 -> series [ 05/11/2026 01:18:00 +00:00 => 23.06359... 
05/12/2026 -> series [ 05/12/2026 01:06:00 +00:00 => 23.22849... 
05/13/2026 -> series [ 05/13/2026 00:54:00 +00:00 => 20.54233... 
05/14/2026 -> series [ 05/14/2026 00:42:00 +00:00 => 19.37080... 
05/15/2026 -> series [ 05/15/2026 00:30:00 +00:00 => 18.59819... 
05/16/2026 -> series [ 05/16/2026 00:18:00 +00:00 => 15.33970... 
05/17/2026 -> series [ 05/17/2026 00:06:00 +00:00 => 18.06696... 
05/18/2026 -> series [ 05/18/2026 01:36:00 +00:00 => 15.18133... 
05/19/2026 -> series [ 05/19/2026 01:24:00 +00:00 => 12.65494... 
05/20/2026 -> series [ 05/20/2026 01:12:00 +00:00 => 11.35456... 
05/21/2026 -> series [ 05/21/2026 01:00:00 +00:00 => 10.59699... 
05/22/2026 -> series [ 05/22/2026 00:48:00 +00:00 => 10.50841... 
05/23/2026 -> series [ 05/23/2026 00:36:00 +00:00 => 7.848639... 
...        -> ...                                                
07/04/2026 -> series [ 07/04/2026 00:42:00 +00:00 => 5.169623... 
07/05/2026 -> series [ 07/05/2026 00:30:00 +00:00 => 5.535700... 
07/06/2026 -> series [ 07/06/2026 00:18:00 +00:00 => 5.950062... 
07/07/2026 -> series [ 07/07/2026 00:06:00 +00:00 => 7.799024... 
07/08/2026 -> series [ 07/08/2026 01:36:00 +00:00 => 6.763316... 
07/09/2026 -> series [ 07/09/2026 01:24:00 +00:00 => 5.892557... 
07/10/2026 -> series [ 07/10/2026 01:12:00 +00:00 => 4.925169... 
07/11/2026 -> series [ 07/11/2026 01:00:00 +00:00 => 5.391912... 
07/12/2026 -> series [ 07/12/2026 00:48:00 +00:00 => 5.143872... 
07/13/2026 -> series [ 07/13/2026 00:36:00 +00:00 => 4.581436... 
07/14/2026 -> series [ 07/14/2026 00:24:00 +00:00 => 4.711421... 
07/15/2026 -> series [ 07/15/2026 00:12:00 +00:00 => 3.970841... 
07/16/2026 -> series [ 07/16/2026 00:00:00 +00:00 => 5.426565... 
07/17/2026 -> series [ 07/17/2026 01:30:00 +00:00 => 4.762776... 
07/18/2026 -> series [ 07/18/2026 01:18:00 +00:00 => 3.722356...
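
When only an aggregate per group is needed, Series.resampleEquivInto projects the keys and aggregates in a single step (a sketch reusing ds from above):

```fsharp
// Mean price for each calendar day
ds |> Series.resampleEquivInto (fun d -> d.Date) Stats.mean
```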

Uniform resampling

Assign values to every key in a range, filling gaps for days with no observations:

// Create input data with non-uniformly distributed keys
let days =
  [ "10/3/2013 12:00:00"; "10/4/2013 15:00:00" 
    "10/4/2013 18:00:00"; "10/4/2013 19:00:00"
    "10/6/2013 15:00:00"; "10/6/2013 21:00:00" ]
let nu = 
  stock1 (TimeSpan(24,0,0)) 10 |> series
  |> Series.indexWith days |> Series.mapKeys DateTimeOffset.Parse
val days: string list =
  ["10/3/2013 12:00:00"; "10/4/2013 15:00:00"; "10/4/2013 18:00:00";
   "10/4/2013 19:00:00"; "10/6/2013 15:00:00"; "10/6/2013 21:00:00"]
val nu: Series<DateTimeOffset,float> =
  
10/03/2013 12:00:00 +00:00 -> 18.566061075531533 
10/04/2013 15:00:00 +00:00 -> 17.605421226436587 
10/04/2013 18:00:00 +00:00 -> 24.82569356458878  
10/04/2013 19:00:00 +00:00 -> 23.65146290106875  
10/06/2013 15:00:00 +00:00 -> 20.087929661261004 
10/06/2013 21:00:00 +00:00 -> 25.30036674490578
// Generate uniform resampling based on dates, fill missing with nearest smaller
let sampled =
  nu |> Series.resampleUniform Lookup.ExactOrSmaller 
    (fun dt -> dt.Date) (fun dt -> dt.AddDays(1.0))
val sampled: Series<DateTime,Series<DateTimeOffset,float>> =
  
10/03/2013 -> series [ 10/03/2013 12:00:00 +00:00 => 18.56606... 
10/04/2013 -> series [ 10/04/2013 15:00:00 +00:00 => 17.60542... 
10/05/2013 -> series [ 10/04/2013 19:00:00 +00:00 => 23.65146... 
10/06/2013 -> series [ 10/06/2013 15:00:00 +00:00 => 20.08792...
// Turn into a frame with one row per day; observations become ordinally-indexed columns
sampled 
|> Series.mapValues Series.indexOrdinally
|> Frame.ofRows
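
Alternatively, each day's chunk can be reduced to a single representative value, such as the last observation of the day (a sketch reusing sampled from above):

```fsharp
// One value per calendar day: the day's last observation
sampled |> Series.mapValues Series.lastValue
```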

Sampling at fixed intervals

When observations are dense or irregularly spaced, Series.sampleTime collects them into chunks at regular time intervals:

// Generate 1k observations with 1.7 hour offsets
let pr = stock1 (TimeSpan.FromHours(1.7)) 1000 |> series
val pr: Series<DateTimeOffset,float> =
  
05/09/2026 00:00:00 +00:00 -> 19.675404283433707 
05/09/2026 01:42:00 +00:00 -> 19.465953557185557 
05/09/2026 03:24:00 +00:00 -> 21.40385886213388  
05/09/2026 05:06:00 +00:00 -> 21.20236780744744  
05/09/2026 06:48:00 +00:00 -> 20.370489644073643 
05/09/2026 08:30:00 +00:00 -> 21.735012156905242 
05/09/2026 10:12:00 +00:00 -> 20.136813727232624 
05/09/2026 11:54:00 +00:00 -> 21.217180157875    
05/09/2026 13:36:00 +00:00 -> 20.058958976538722 
05/09/2026 15:18:00 +00:00 -> 20.501806792583377 
05/09/2026 17:00:00 +00:00 -> 21.529331031416294 
05/09/2026 18:42:00 +00:00 -> 21.99749520939284  
05/09/2026 20:24:00 +00:00 -> 23.90343230485794  
05/09/2026 22:06:00 +00:00 -> 25.81601614828132  
05/09/2026 23:48:00 +00:00 -> 24.89484107561622  
...                        -> ...                
07/17/2026 18:30:00 +00:00 -> 3.641530457272595  
07/17/2026 20:12:00 +00:00 -> 3.8917833903403025 
07/17/2026 21:54:00 +00:00 -> 3.97654708108731   
07/17/2026 23:36:00 +00:00 -> 3.85169092816856   
07/18/2026 01:18:00 +00:00 -> 3.722356929579045  
07/18/2026 03:00:00 +00:00 -> 3.5985684265763895 
07/18/2026 04:42:00 +00:00 -> 3.6407303637780433 
07/18/2026 06:24:00 +00:00 -> 3.8519325661460124 
07/18/2026 08:06:00 +00:00 -> 3.5604275340570846 
07/18/2026 09:48:00 +00:00 -> 3.775406122831678  
07/18/2026 11:30:00 +00:00 -> 3.7663152368678405 
07/18/2026 13:12:00 +00:00 -> 3.9752657144885246 
07/18/2026 14:54:00 +00:00 -> 3.973455066726479  
07/18/2026 16:36:00 +00:00 -> 4.272491398316074  
07/18/2026 18:18:00 +00:00 -> 4.357527721518215
// Sample at 2 hour intervals; Direction.Backward collects the values since the previous sampling point
pr |> Series.sampleTime (TimeSpan(2, 0, 0)) Direction.Backward
val it: Series<DateTimeOffset,Series<DateTimeOffset,float>> =
  
05/09/2026 00:00:00 +00:00 -> series [ 05/09/2026 00:00:00 +00:00 => 19.67540... 
05/09/2026 02:00:00 +00:00 -> series [ 05/09/2026 01:42:00 +00:00 => 19.46595... 
05/09/2026 04:00:00 +00:00 -> series [ 05/09/2026 03:24:00 +00:00 => 21.40385... 
05/09/2026 06:00:00 +00:00 -> series [ 05/09/2026 05:06:00 +00:00 => 21.20236... 
05/09/2026 08:00:00 +00:00 -> series [ 05/09/2026 06:48:00 +00:00 => 20.37048... 
05/09/2026 10:00:00 +00:00 -> series [ 05/09/2026 08:30:00 +00:00 => 21.73501... 
05/09/2026 12:00:00 +00:00 -> series [ 05/09/2026 10:12:00 +00:00 => 20.13681... 
05/09/2026 14:00:00 +00:00 -> series [ 05/09/2026 13:36:00 +00:00 => 20.05895... 
05/09/2026 16:00:00 +00:00 -> series [ 05/09/2026 15:18:00 +00:00 => 20.50180... 
05/09/2026 18:00:00 +00:00 -> series [ 05/09/2026 17:00:00 +00:00 => 21.52933... 
05/09/2026 20:00:00 +00:00 -> series [ 05/09/2026 18:42:00 +00:00 => 21.99749... 
05/09/2026 22:00:00 +00:00 -> series [ 05/09/2026 20:24:00 +00:00 => 23.90343... 
05/10/2026 00:00:00 +00:00 -> series [ 05/09/2026 22:06:00 +00:00 => 25.81601... 
05/10/2026 02:00:00 +00:00 -> series [ 05/10/2026 01:30:00 +00:00 => 27.50798... 
05/10/2026 04:00:00 +00:00 -> series [ 05/10/2026 03:12:00 +00:00 => 25.77721... 
...                        -> ...                                                
07/17/2026 16:00:00 +00:00 -> series [ 07/17/2026 15:06:00 +00:00 => 4.075069... 
07/17/2026 18:00:00 +00:00 -> series [ 07/17/2026 16:48:00 +00:00 => 3.746247... 
07/17/2026 20:00:00 +00:00 -> series [ 07/17/2026 18:30:00 +00:00 => 3.641530... 
07/17/2026 22:00:00 +00:00 -> series [ 07/17/2026 20:12:00 +00:00 => 3.891783... 
07/18/2026 00:00:00 +00:00 -> series [ 07/17/2026 23:36:00 +00:00 => 3.851690... 
07/18/2026 02:00:00 +00:00 -> series [ 07/18/2026 01:18:00 +00:00 => 3.722356... 
07/18/2026 04:00:00 +00:00 -> series [ 07/18/2026 03:00:00 +00:00 => 3.598568... 
07/18/2026 06:00:00 +00:00 -> series [ 07/18/2026 04:42:00 +00:00 => 3.640730... 
07/18/2026 08:00:00 +00:00 -> series [ 07/18/2026 06:24:00 +00:00 => 3.851932... 
07/18/2026 10:00:00 +00:00 -> series [ 07/18/2026 08:06:00 +00:00 => 3.560427... 
07/18/2026 12:00:00 +00:00 -> series [ 07/18/2026 11:30:00 +00:00 => 3.766315... 
07/18/2026 14:00:00 +00:00 -> series [ 07/18/2026 13:12:00 +00:00 => 3.975265... 
07/18/2026 16:00:00 +00:00 -> series [ 07/18/2026 14:54:00 +00:00 => 3.973455... 
07/18/2026 18:00:00 +00:00 -> series [ 07/18/2026 16:36:00 +00:00 => 4.272491... 
07/18/2026 20:00:00 +00:00 -> series [ 07/18/2026 18:18:00 +00:00 => 4.357527...
// Get the most recent value, sampled at 2 hour intervals
pr |> Series.sampleTimeInto
  (TimeSpan(2, 0, 0)) Direction.Backward Series.lastValue
val it: Series<DateTimeOffset,float> =
  
05/09/2026 00:00:00 +00:00 -> 19.675404283433707 
05/09/2026 02:00:00 +00:00 -> 19.465953557185557 
05/09/2026 04:00:00 +00:00 -> 21.40385886213388  
05/09/2026 06:00:00 +00:00 -> 21.20236780744744  
05/09/2026 08:00:00 +00:00 -> 20.370489644073643 
05/09/2026 10:00:00 +00:00 -> 21.735012156905242 
05/09/2026 12:00:00 +00:00 -> 21.217180157875    
05/09/2026 14:00:00 +00:00 -> 20.058958976538722 
05/09/2026 16:00:00 +00:00 -> 20.501806792583377 
05/09/2026 18:00:00 +00:00 -> 21.529331031416294 
05/09/2026 20:00:00 +00:00 -> 21.99749520939284  
05/09/2026 22:00:00 +00:00 -> 23.90343230485794  
05/10/2026 00:00:00 +00:00 -> 24.89484107561622  
05/10/2026 02:00:00 +00:00 -> 27.507980336130817 
05/10/2026 04:00:00 +00:00 -> 25.77721873780822  
...                        -> ...                
07/17/2026 16:00:00 +00:00 -> 4.075069343422606  
07/17/2026 18:00:00 +00:00 -> 3.7462474852298686 
07/17/2026 20:00:00 +00:00 -> 3.641530457272595  
07/17/2026 22:00:00 +00:00 -> 3.97654708108731   
07/18/2026 00:00:00 +00:00 -> 3.85169092816856   
07/18/2026 02:00:00 +00:00 -> 3.722356929579045  
07/18/2026 04:00:00 +00:00 -> 3.5985684265763895 
07/18/2026 06:00:00 +00:00 -> 3.6407303637780433 
07/18/2026 08:00:00 +00:00 -> 3.8519325661460124 
07/18/2026 10:00:00 +00:00 -> 3.775406122831678  
07/18/2026 12:00:00 +00:00 -> 3.7663152368678405 
07/18/2026 14:00:00 +00:00 -> 3.9752657144885246 
07/18/2026 16:00:00 +00:00 -> 3.973455066726479  
07/18/2026 18:00:00 +00:00 -> 4.272491398316074  
07/18/2026 20:00:00 +00:00 -> 4.357527721518215
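
The aggregation passed to Series.sampleTimeInto can build any summary of a chunk. For example, an open/close pair per day (a sketch reusing pr from above; chunks are assumed non-empty, which holds for this 1.7 hour data):

```fsharp
// Daily first and last prices
pr |> Series.sampleTimeInto (TimeSpan(24, 0, 0)) Direction.Backward (fun chunk ->
  series [ "open" => Series.firstValue chunk; "close" => Series.lastValue chunk ])
```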

Arithmetic and statistics

Shifting and differences

Series.diff subtracts values that are a given distance apart; Series.shift moves all values by an offset while keeping the original keys:

// Generate sample data with 1.7 hour offsets
let sample = stock1 (TimeSpan.FromHours(1.7)) 6 |> series
val sample: Series<DateTimeOffset,float> =
  
05/09/2026 00:00:00 +00:00 -> 19.675404283433707 
05/09/2026 01:42:00 +00:00 -> 19.465953557185557 
05/09/2026 03:24:00 +00:00 -> 21.40385886213388  
05/09/2026 05:06:00 +00:00 -> 21.20236780744744  
05/09/2026 06:48:00 +00:00 -> 20.370489644073643 
05/09/2026 08:30:00 +00:00 -> 21.735012156905242
// Calculates: new[i] = s[i] - s[i-1]
let diff1 = sample |> Series.diff 1
val diff1: Series<DateTimeOffset,float> =
  
05/09/2026 01:42:00 +00:00 -> -0.20945072624814998 
05/09/2026 03:24:00 +00:00 -> 1.9379053049483232   
05/09/2026 05:06:00 +00:00 -> -0.20149105468643924 
05/09/2026 06:48:00 +00:00 -> -0.8318781633737977  
05/09/2026 08:30:00 +00:00 -> 1.3645225128315985
// Shift series values by 1
let shift1 = sample |> Series.shift 1
val shift1: Series<DateTimeOffset,float> =
  
05/09/2026 01:42:00 +00:00 -> 19.675404283433707 
05/09/2026 03:24:00 +00:00 -> 19.465953557185557 
05/09/2026 05:06:00 +00:00 -> 21.40385886213388  
05/09/2026 06:48:00 +00:00 -> 21.20236780744744  
05/09/2026 08:30:00 +00:00 -> 20.370489644073643
// Align everything in a frame to compare the results
let alignedDf = 
  [ "Shift +1" => shift1 
    "Diff +1" => diff1 
    "Diff" => sample - shift1 
    "Orig" => sample ] |> Frame.ofColumns 
val alignedDf: Frame<DateTimeOffset,string> =
  
                              Shift +1           Diff +1              Diff                 Orig               
05/09/2026 00:00:00 +00:00 -> <missing>          <missing>            <missing>            19.675404283433707 
05/09/2026 01:42:00 +00:00 -> 19.675404283433707 -0.20945072624814998 -0.20945072624814998 19.465953557185557 
05/09/2026 03:24:00 +00:00 -> 19.465953557185557 1.9379053049483232   1.9379053049483232   21.40385886213388  
05/09/2026 05:06:00 +00:00 -> 21.40385886213388  -0.20149105468643924 -0.20149105468643924 21.20236780744744  
05/09/2026 06:48:00 +00:00 -> 21.20236780744744  -0.8318781633737977  -0.8318781633737977  20.370489644073643 
05/09/2026 08:30:00 +00:00 -> 20.370489644073643 1.3645225128315985   1.3645225128315985   21.735012156905242

Operators and functions

Series supports standard F# math functions (log, abs, sqrt, ...) and numerical operators (+, -, *, /, **). Binary operators automatically align the two series by key before applying the operation; keys present in only one of the series produce missing values:

// Subtract previous value from the current value
sample - sample.Shift(1)
val it: Series<DateTimeOffset,float> =
  
05/09/2026 00:00:00 +00:00 -> <missing>            
05/09/2026 01:42:00 +00:00 -> -0.20945072624814998 
05/09/2026 03:24:00 +00:00 -> 1.9379053049483232   
05/09/2026 05:06:00 +00:00 -> -0.20149105468643924 
05/09/2026 06:48:00 +00:00 -> -0.8318781633737977  
05/09/2026 08:30:00 +00:00 -> 1.3645225128315985
// Logarithm of the differences (negative values produce missing results)
log (sample - sample.Shift(1))
val it: Series<DateTimeOffset,float> =
  
05/09/2026 00:00:00 +00:00 -> <missing>           
05/09/2026 01:42:00 +00:00 -> <missing>           
05/09/2026 03:24:00 +00:00 -> 0.6616076500190085  
05/09/2026 05:06:00 +00:00 -> <missing>           
05/09/2026 06:48:00 +00:00 -> <missing>           
05/09/2026 08:30:00 +00:00 -> 0.31080455999064704
// Calculate square of differences
sample.Diff(1) ** 2.0
val it: Series<DateTimeOffset,float> =
  
05/09/2026 01:42:00 +00:00 -> 0.04386960672587746 
05/09/2026 03:24:00 +00:00 -> 3.755476970946854   
05/09/2026 05:06:00 +00:00 -> 0.04059864511865365 
05/09/2026 06:48:00 +00:00 -> 0.6920212786981629  
05/09/2026 08:30:00 +00:00 -> 1.86192168802426
// Get absolute value of differences
abs (sample - sample.Shift(1))
val it: Series<DateTimeOffset,float> =
  
05/09/2026 00:00:00 +00:00 -> <missing>           
05/09/2026 01:42:00 +00:00 -> 0.20945072624814998 
05/09/2026 03:24:00 +00:00 -> 1.9379053049483232  
05/09/2026 05:06:00 +00:00 -> 0.20149105468643924 
05/09/2026 06:48:00 +00:00 -> 0.8318781633737977  
05/09/2026 08:30:00 +00:00 -> 1.3645225128315985
// Get absolute value of distance from the mean
abs (sample - (Stats.mean sample))
val it: Series<DateTimeOffset,float> =
  
05/09/2026 00:00:00 +00:00 -> 0.9667767684295363 
05/09/2026 01:42:00 +00:00 -> 1.1762274946776863 
05/09/2026 03:24:00 +00:00 -> 0.761677810270637  
05/09/2026 05:06:00 +00:00 -> 0.5601867555841977 
05/09/2026 06:48:00 +00:00 -> 0.2716914077896    
05/09/2026 08:30:00 +00:00 -> 1.0928311050419985
// Apply a custom function to all elements using the $ operator
let adjust v = min 1.0 (max -1.0 v)
adjust $ sample.Diff(1)
val adjust: v: float -> float
val it: Series<DateTimeOffset,float> =
  
05/09/2026 01:42:00 +00:00 -> -0.20945072624814998 
05/09/2026 03:24:00 +00:00 -> 1                    
05/09/2026 05:06:00 +00:00 -> -0.20149105468643924 
05/09/2026 06:48:00 +00:00 -> -0.8318781633737977  
05/09/2026 08:30:00 +00:00 -> 1
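
The automatic alignment applies even when the two series have different keys; unmatched keys yield missing values (a minimal sketch with hypothetical series a and b):

```fsharp
let a = series [ 1 => 10.0; 2 => 20.0 ]
let b = series [ 2 => 1.0; 3 => 2.0 ]
// Key 2 is present in both series (value 21.0); keys 1 and 3 become missing
a + b
```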

Frame-level operations

Many time-series operations apply to entire frames:

// Multiply all numeric columns by a given constant
alignedDf * 0.65
val it: Frame<DateTimeOffset,string> =
  
                              Shift +1           Diff +1             Diff                Orig               
05/09/2026 00:00:00 +00:00 -> <missing>          <missing>           <missing>           12.78901278423191  
05/09/2026 01:42:00 +00:00 -> 12.78901278423191  -0.1361429720612975 -0.1361429720612975 12.652869812170612 
05/09/2026 03:24:00 +00:00 -> 12.652869812170612 1.2596384482164102  1.2596384482164102  13.912508260387023 
05/09/2026 05:06:00 +00:00 -> 13.912508260387023 -0.1309691855461855 -0.1309691855461855 13.781539074840838 
05/09/2026 06:48:00 +00:00 -> 13.781539074840838 -0.5407208061929686 -0.5407208061929686 13.24081826864787  
05/09/2026 08:30:00 +00:00 -> 13.24081826864787  0.886939633340539   0.886939633340539   14.127757901988408
// Sum each column and divide results by a constant
Stats.sum alignedDf / 6.0
val it: Series<string,float> =
  
Shift +1 -> 17.01967902571237   
Diff +1  -> 0.34326797891192246 
Diff     -> 0.34326797891192246 
Orig     -> 20.642181051863243
// Divide sum by mean of each column (the result is the number of available values)
Stats.sum alignedDf / Stats.mean alignedDf
val it: Series<string,float> =
  
Shift +1 -> 5 
Diff +1  -> 5 
Diff     -> 5 
Orig     -> 6
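
Other Stats functions follow the same per-column pattern, returning one value per numeric column (a sketch reusing alignedDf from above):

```fsharp
// Standard deviation of each numeric column
Stats.stdDev alignedDf
```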
<summary>Represents a point in time, typically expressed as a date and time of day, relative to Coordinated Universal Time (UTC).</summary>

--------------------
DateTimeOffset ()
DateTimeOffset(dateTime: DateTime) : DateTimeOffset
DateTimeOffset(dateTime: DateTime, offset: TimeSpan) : DateTimeOffset
DateTimeOffset(ticks: int64, offset: TimeSpan) : DateTimeOffset
DateTimeOffset(date: DateOnly, time: TimeOnly, offset: TimeSpan) : DateTimeOffset
DateTimeOffset(year: int, month: int, day: int, hour: int, minute: int, second: int, offset: TimeSpan) : DateTimeOffset
DateTimeOffset(year: int, month: int, day: int, hour: int, minute: int, second: int, millisecond: int, offset: TimeSpan) : DateTimeOffset
DateTimeOffset(year: int, month: int, day: int, hour: int, minute: int, second: int, millisecond: int, calendar: Globalization.Calendar, offset: TimeSpan) : DateTimeOffset
DateTimeOffset(year: int, month: int, day: int, hour: int, minute: int, second: int, millisecond: int, microsecond: int, offset: TimeSpan) : DateTimeOffset
DateTimeOffset(year: int, month: int, day: int, hour: int, minute: int, second: int, millisecond: int, microsecond: int, calendar: Globalization.Calendar, offset: TimeSpan) : DateTimeOffset
val span: TimeSpan
Multiple items
type TimeSpan = new: hours: int * minutes: int * seconds: int -> unit + 4 overloads member Add: ts: TimeSpan -> TimeSpan member CompareTo: value: obj -> int + 1 overload member Divide: divisor: float -> TimeSpan + 1 overload member Duration: unit -> TimeSpan member Equals: value: obj -> bool + 2 overloads member GetHashCode: unit -> int member Multiply: factor: float -> TimeSpan member Negate: unit -> TimeSpan member Subtract: ts: TimeSpan -> TimeSpan ...
<summary>Represents a time interval.</summary>

--------------------
TimeSpan ()
TimeSpan(ticks: int64) : TimeSpan
TimeSpan(hours: int, minutes: int, seconds: int) : TimeSpan
TimeSpan(days: int, hours: int, minutes: int, seconds: int) : TimeSpan
TimeSpan(days: int, hours: int, minutes: int, seconds: int, milliseconds: int) : TimeSpan
TimeSpan(days: int, hours: int, minutes: int, seconds: int, milliseconds: int, microseconds: int) : TimeSpan
val count: int
val dist: Normal
Multiple items
type Normal = interface IContinuousDistribution interface IUnivariateDistribution interface IDistribution new: unit -> unit + 3 overloads member CumulativeDistribution: x: float -> float member Density: x: float -> float member DensityLn: x: float -> float member InverseCumulativeDistribution: p: float -> float member Sample: unit -> float + 2 overloads member Samples: values: float array -> unit + 5 overloads ...
<summary> Continuous Univariate Normal distribution, also known as Gaussian distribution. For details about this distribution, see <a href="http://en.wikipedia.org/wiki/Normal_distribution">Wikipedia - Normal distribution</a>. </summary>

--------------------
Normal() : Normal
Normal(randomSource: Random) : Normal
Normal(mean: float, stddev: float) : Normal
Normal(mean: float, stddev: float, randomSource: Random) : Normal
Multiple items
type Random = new: unit -> unit + 1 overload member GetHexString: stringLength: int * ?lowercase: bool -> string + 1 overload member GetItems<'T> : choices: ReadOnlySpan<'T> * length: int -> 'T array + 2 overloads member GetString: choices: ReadOnlySpan<char> * length: int -> string member Next: unit -> int + 2 overloads member NextBytes: buffer: byte array -> unit + 1 overload member NextDouble: unit -> float member NextInt64: unit -> int64 + 2 overloads member NextSingle: unit -> float32 member Shuffle<'T> : values: Span<'T> -> unit + 1 overload ...
<summary>Represents a pseudo-random number generator, which is an algorithm that produces a sequence of numbers that meet certain statistical requirements for randomness.</summary>

--------------------
Random() : Random
Random(Seed: int) : Random
val dt: float
property TimeSpan.TotalDays: float with get
<summary>Gets the value of the current <see cref="T:System.TimeSpan" /> structure expressed in whole and fractional days.</summary>
<returns>The total number of days represented by this instance.</returns>
val driftExp: float
val pown: x: 'T -> n: int -> 'T (requires member One and member ( * ) and member (/))
val randExp: float
val sqrt: value: 'T -> 'U (requires member Sqrt)
module Seq from Microsoft.FSharp.Collections
val unfold: generator: ('State -> ('T * 'State) option) -> state: 'State -> 'T seq
val dt: DateTimeOffset
val price: float
val exp: value: 'T -> 'T (requires member Exp)
Normal.Sample() : float
union case Option.Some: Value: 'T -> Option<'T>
val take: count: int -> source: 'T seq -> 'T seq
val today: DateTimeOffset
Multiple items
type DateTime = new: date: DateOnly * time: TimeOnly -> unit + 16 overloads member Add: value: TimeSpan -> DateTime member AddDays: value: float -> DateTime member AddHours: value: float -> DateTime member AddMicroseconds: value: float -> DateTime member AddMilliseconds: value: float -> DateTime member AddMinutes: value: float -> DateTime member AddMonths: months: int -> DateTime member AddSeconds: value: float -> DateTime member AddTicks: value: int64 -> DateTime ...
<summary>Represents an instant in time, typically expressed as a date and time of day.</summary>

--------------------
DateTime ()
   (+0 other overloads)
DateTime(ticks: int64) : DateTime
   (+0 other overloads)
DateTime(date: DateOnly, time: TimeOnly) : DateTime
   (+0 other overloads)
DateTime(ticks: int64, kind: DateTimeKind) : DateTime
   (+0 other overloads)
DateTime(date: DateOnly, time: TimeOnly, kind: DateTimeKind) : DateTime
   (+0 other overloads)
DateTime(year: int, month: int, day: int) : DateTime
   (+0 other overloads)
DateTime(year: int, month: int, day: int, calendar: Globalization.Calendar) : DateTime
   (+0 other overloads)
DateTime(year: int, month: int, day: int, hour: int, minute: int, second: int) : DateTime
   (+0 other overloads)
DateTime(year: int, month: int, day: int, hour: int, minute: int, second: int, kind: DateTimeKind) : DateTime
   (+0 other overloads)
DateTime(year: int, month: int, day: int, hour: int, minute: int, second: int, calendar: Globalization.Calendar) : DateTime
   (+0 other overloads)
property DateTime.Today: DateTime with get
<summary>Gets the current date.</summary>
<returns>An object that is set to today's date, with the time component set to 00:00:00.</returns>
val stock1: (TimeSpan -> int -> (DateTimeOffset * float) seq)
val stock2: (TimeSpan -> int -> (DateTimeOffset * float) seq)
val s1: Series<DateTimeOffset,float>
val series: observations: ('a * 'b) seq -> Series<'a,'b> (requires equality)
<summary> Create a series from a sequence of key-value pairs that represent the observations of the series. This function can be used together with the `=&gt;` operator to create key-value pairs. </summary>
<example> // Creates a series with squares of numbers let sqs = series [ 1 =&gt; 1.0; 2 =&gt; 4.0; 3 =&gt; 9.0 ] </example>
val s2: Series<DateTimeOffset,float>
val s3: Series<DateTimeOffset,float>
member Series.Zip: otherSeries: Series<'K,'V2> -> Series<'K,('V opt * 'V2 opt)>
member Series.Zip: otherSeries: Series<'K,'V2> * kind: JoinKind -> Series<'K,('V opt * 'V2 opt)>
member Series.Zip: otherSeries: Series<'K,'V2> * kind: JoinKind * lookup: Lookup -> Series<'K,('V opt * 'V2 opt)>
type JoinKind = | Outer = 0 | Inner = 1 | Left = 2 | Right = 3
<summary> This enumeration specifies joining behavior for `Join` method provided by `Series` and `Frame`. Outer join unions the keys (and may introduce missing values), inner join takes the intersection of keys; left and right joins take the keys of the first or the second series/frame. </summary>
<category>Parameters and results of various operations</category>
JoinKind.Left: JoinKind = 2
<summary> Take the keys of the left (first) structure and align values from the right (second) structure with the keys of the first one. Values for keys not available in the second structure will be missing. </summary>
JoinKind.Right: JoinKind = 3
<summary> Take the keys of the right (second) structure and align values from the left (first) structure with the keys of the second one. Values for keys not available in the first structure will be missing. </summary>
type Lookup = | Exact = 1 | ExactOrGreater = 3 | ExactOrSmaller = 5 | Greater = 2 | Smaller = 4
<summary> Represents different behaviors of key lookup in series. For unordered series, the only available option is `Lookup.Exact` which finds the exact key - methods fail or return missing value if the key is not available in the index. For ordered series `Lookup.Greater` finds the first greater key (e.g. later date) with a value. `Lookup.Smaller` searches for the first smaller key. The options `Lookup.ExactOrGreater` and `Lookup.ExactOrSmaller` finds the exact key (if it is present) and otherwise search for the nearest larger or smaller key, respectively. </summary>
<category>Parameters and results of various operations</category>
Lookup.ExactOrSmaller: Lookup = 5
<summary> Lookup a value associated with the specified key or with the nearest smaller key that has a value available. Fails (or returns missing value) only when the specified key is smaller than all available keys. </summary>
val f1: Frame<DateTimeOffset,string>
Multiple items
module Frame from Deedle
<summary> The `Frame` module provides an F#-friendly API for working with data frames. The module follows the usual desing for collection-processing in F#, so the functions work well with the pipelining operator (`|&gt;`). For example, given a frame with two columns representing prices, we can use `Frame.pctChange` to calculate daily returns like this: let df = frame [ "MSFT" =&gt; prices1; "AAPL" =&gt; prices2 ] let rets = df |&gt; Frame.pctChange 1 rets |&gt; Stats.mean Note that the `Stats.mean` operation is overloaded and works both on series (returning a number) and on frames (returning a series). You can also use `Frame.diff` if you need absolute differences rather than relative changes. The functions in this module are designed to be used from F#. For a C#-friendly API, see the `FrameExtensions` type. For working with individual series, see the `Series` module. The functions in the `Frame` module are grouped in a number of categories and documented below. Accessing frame data and lookup ------------------------------- Functions in this category provide access to the values in the fame. You can also add and remove columns from a frame (which both return a new value). - `addCol`, `replaceCol` and `dropCol` can be used to create a new data frame with a new column, by replacing an existing column with a new one, or by dropping an existing column - `cols` and `rows` return the columns or rows of a frame as a series containing objects; `getCols` and `getRows` return a generic series and cast the values to the type inferred from the context (columns or rows of incompatible types are skipped); `getNumericCols` returns columns of a type convertible to `float` for convenience. - You can get a specific row or column using `get[Col|Row]` or `lookup[Col|Row]` functions. The `lookup` variant lets you specify lookup behavior for key matching (e.g. find the nearest smaller key than the specified value). 
There are also `[try]get` and `[try]Lookup` functions that return optional values and functions returning entire observations (key together with the series). - `sliceCols` and `sliceRows` return a sub-frame containing only the specified columns or rows. Finally, `toArray2D` returns the frame data as a 2D array. Grouping, windowing and chunking -------------------------------- The basic grouping functions in this category can be used to group the rows of a data frame by a specified projection or column to create a frame with hierarchical index such as <c>Frame&lt;'K1 * 'K2, 'C&gt;</c>. The functions always aggregate rows, so if you want to group columns, you need to use `Frame.transpose` first. The function `groupRowsBy` groups rows by the value of a specified column. Use `groupRowsBy[Int|Float|String...]` if you want to specify the type of the column in an easier way than using type inference; `groupRowsUsing` groups rows using the specified _projection function_ and `groupRowsByIndex` projects the grouping key just from the row index. More advanced functions include: `aggregateRowsBy` which groups the rows by a specified sequence of columns and aggregates each group into a single value; `pivotTable` implements the pivoting operation [as documented in the tutorials](../frame.html#pivot). The `melt` and `unmelt` functions turn the data frame into a single data frame containing columns `Row`, `Column` and `Value` containing the data of the original frame; `unmelt` can be used to turn this representation back into an original frame. The `stack` and `unstack` functions implement pandas-style reshape operations. `stack` converts `Frame&lt;'R,'C&gt;` to a long-format `Frame&lt;'R*'C, string&gt;` where each cell becomes a row keyed by `(rowKey, colKey)` with a single `"Value"` column. `unstack` promotes the inner row-key level to column keys, producing `Frame&lt;'R1, 'C*'R2&gt;` from `Frame&lt;'R1*'R2,'C&gt;`. 
A simple windowing functions that are exposed for an entire frame operations are `window` and `windowInto`. For more complex windowing operations, you currently have to use `mapRows` or `mapCols` and apply windowing on individual series. Sorting and index manipulation ------------------------------ A frame is indexed by row keys and column keys. Both of these indices can be sorted (by the keys). A frame that is sorted allows a number of additional operations (such as lookup using the `Lookp.ExactOrSmaller` lookup behavior). The functions in this category provide ways for manipulating the indices. It is expected that most operations are done on rows and so more functions are available in a row-wise way. A frame can alwyas be transposed using `Frame.transpose`. Index operations: The existing row/column keys can be replaced by a sequence of new keys using the `indexColsWith` and `indexRowsWith` functions. Row keys can also be replaced by ordinal numbers using `indexRowsOrdinally`. The function `indexRows` uses the specified column of the original frame as the index. It removes the column from the resulting frame (to avoid this, use overloaded `IndexRows` method). This function infers the type of row keys from the context, so it is usually more convenient to use `indexRows[Date|String|Int|...]` functions. Finally, if you want to calculate the index value based on multiple columns of the row, you can use `indexRowsUsing`. Sorting frame rows: Frame rows can be sorted according to the value of a specified column using the `sortRows` function; `sortRowsBy` takes a projection function which lets you transform the value of a column (e.g. to project a part of the value). The functions `sortRowsByKey` and `sortColsByKey` sort the rows or columns using the default ordering on the key values. The result is a frame with ordered index. 
Expanding columns: When the frame contains a series with complex .NET objects such as F# records or C# classes, it can be useful to "expand" the column. This operation looks at the type of the objects, gets all properties of the objects (recursively) and generates multiple series representing the properties as columns. The function `expandCols` expands the specified columns while `expandAllCols` applies the expansion to all columns of the data frame. Frame transformations --------------------- Functions in this category perform standard transformations on data frames including projections, filtering, taking some sub-frame of the frame, aggregating values using scanning and so on. Projection and filtering functions such as `[map|filter][Cols|Rows]` call the specified function with the column or row key and an <c>ObjectSeries&lt;'K&gt;</c> representing the column or row. You can use functions ending with `Values` (such as `mapRowValues`) when you do not require the row key, but only the row series; `mapRowKeys` and `mapColKeys` can be used to transform the keys. You can use `reduceValues` to apply a custom reduction to values of columns. Other aggregations are available in the `Stats` module. You can also get a row with the greaterst or smallest value of a given column using `[min|max]RowBy`. The functions `take[Last]` and `skip[Last]` can be used to take a sub-frame of the original source frame by skipping a specified number of rows. Note that this does not require an ordered frame and it ignores the index - for index-based lookup use slicing, such as `df.Rows.[lo .. hi]`, instead. Finally the `shift` function can be used to obtain a frame with values shifted by the specified offset. This can be used e.g. to get previous value for each key using `Frame.shift 1 df`. The `diff` function calculates difference from previous value using `df - (Frame.shift offs df)`. 
Processing frames with exceptions --------------------------------- The functions in this group can be used to write computations over frames that may fail. They use the type <c>tryval&lt;'T&gt;</c> which is defined as a discriminated union with two cases: Success containing a value, or Error containing an exception. Using <c>tryval&lt;'T&gt;</c> as a value in a data frame is not generally recommended, because the type of values cannot be tracked in the type. For this reason, it is better to use <c>tryval&lt;'T&gt;</c> with individual series. However, `tryValues` and `fillErrorsWith` functions can be used to get values, or fill failed values inside an entire data frame. The `tryMapRows` function is more useful. It can be used to write a transformation that applies a computation (which may fail) to each row of a data frame. The resulting series is of type <c>Series&lt;'R, tryval&lt;'T&gt;&gt;</c> and can be processed using the <c>Series</c> module functions. Missing values -------------- This group of functions provides a way of working with missing values in a data frame. The category provides the following functions that can be used to fill missing values: * `fillMissingWith` fills missing values with a specified constant * `fillMissingUsing` calls a specified function for every missing value * `fillMissing` and variants propagates values from previous/later keys We use the terms _sparse_ and _dense_ to denote series that contain some missing values or do not contain any missing values, respectively. The functions `denseCols` and `denseRows` return a series that contains only dense columns or rows and all sparse rows or columns are replaced with a missing value. The `dropSparseCols` and `dropSparseRows` functions drop these missing values and return a frame with no missing values. 
Joining, merging and zipping ---------------------------- The simplest way to join two frames is to use the `join` operation which can be used to perform left, right, outer or inner join of two frames. When the row keys of the frames do not match exactly, you can use `joinAlign` which takes an additional parameter that specifies how to find matching key in left/right join (e.g. by taking the nearest smaller available key). Frames that do not contian overlapping values can be combined using `merge` (when combining just two frames) or using `mergeAll` (for larger number of frames). Tha latter is optimized to work well for a large number of data frames. Finally, frames with overlapping values can be combined using `zip`. It takes a function that is used to combine the overlapping values. A `zipAlign` function provides a variant with more flexible row key matching (as in `joinAlign`) Hierarchical index operations ----------------------------- A data frame has a hierarchical row index if the row index is formed by a tuple, such as <c>Frame&lt;'R1 * 'R2, 'C&gt;</c>. Frames of this kind are returned, for example, by the grouping functions such as <c>Frame.groupRowsBy</c>. The functions in this category provide ways for working with data frames that have hierarchical row keys. The functions <c>applyLevel</c> and <c>reduceLevel</c> can be used to reduce values according to one of the levels. The <c>applyLevel</c> function takes a reduction of type <c>Series&lt;'K, 'T&gt; -&gt; 'T</c> while <c>reduceLevel</c> reduces individual values using a function of type <c>'T -&gt; 'T -&gt; 'T</c>. The functions <c>nest</c> and <c>unnest</c> can be used to convert between frames with hierarchical indices (<c>Frame&lt;'K1 * 'K2, 'C&gt;</c>) and series of frames that represent individual groups (<c>Series&lt;'K1, Frame&lt;'K2, 'C&gt;&gt;</c>). The <c>nestBy</c> function can be used to perform group by operation and return the result as a series of frems. </summary>
<category>Frame and series operations</category>


--------------------
type Frame = static member ReadCsv: location: string * hasHeaders: Nullable<bool> * inferTypes: Nullable<bool> * inferRows: Nullable<int> * schema: string * separators: string * culture: string * maxRows: Nullable<int> * missingValues: string array * preferOptions: bool * encoding: Encoding -> Frame<int,string> + 1 overload static member ReadReader: reader: IDataReader -> Frame<int,string> static member CustomExpanders: Dictionary<Type,Func<obj,(string * Type * obj) seq>> static member NonExpandableInterfaces: ResizeArray<Type> static member NonExpandableTypes: HashSet<Type>
<summary> Provides static methods for creating frames, reading frame data from CSV files and database (via IDataReader). The type also provides global configuration for reflection-based expansion. </summary>
<category>Frame and series operations</category>


--------------------
type Frame<'TRowKey,'TColumnKey (requires equality and equality)> = interface IDynamicMetaObjectProvider interface INotifyCollectionChanged interface IFrameFormattable interface IFsiFormattable interface IFrame new: rowIndex: IIndex<'TRowKey> * columnIndex: IIndex<'TColumnKey> * data: IVector<IVector> * indexBuilder: IIndexBuilder * vectorBuilder: IVectorBuilder -> Frame<'TRowKey,'TColumnKey> + 1 overload member AddColumn: column: 'TColumnKey * series: 'V seq -> unit + 3 overloads member AggregateRowsBy: groupBy: 'TColumnKey seq * aggBy: 'TColumnKey seq * aggFunc: Func<Series<'TRowKey,'a>,'b> -> Frame<int,'TColumnKey> member Clone: unit -> Frame<'TRowKey,'TColumnKey> member ColumnApply: f: Func<Series<'TRowKey,'T>,ISeries<'TRowKey>> -> Frame<'TRowKey,'TColumnKey> + 1 overload ...
<summary> A frame is the key Deedle data structure (together with series). It represents a data table (think spreadsheet or CSV file) with multiple rows and columns. The frame consists of row index, column index and data. The indices are used for efficient lookup when accessing data by the row key `'TRowKey` or by the column key `'TColumnKey`. Deedle frames are optimized for the scenario when all values in a given column are of the same type (but types of different columns can differ). </summary>
<remarks><para>Joining, zipping and appending:</para><para> More info </para></remarks>
<category>Core frame and series types</category>


--------------------
new: names: 'TColumnKey seq * columns: ISeries<'TRowKey> seq -> Frame<'TRowKey,'TColumnKey>
new: rowIndex: Indices.IIndex<'TRowKey> * columnIndex: Indices.IIndex<'TColumnKey> * data: IVector<IVector> * indexBuilder: Indices.IIndexBuilder * vectorBuilder: Vectors.IVectorBuilder -> Frame<'TRowKey,'TColumnKey>
static member Frame.ofColumns: cols: Series<'C,#ISeries<'R>> -> Frame<'R,'C> (requires equality and equality)
static member Frame.ofColumns: cols: ('C * #ISeries<'R>) seq -> Frame<'R,'C> (requires equality and equality)
val f2: Frame<DateTimeOffset,string>
val f3: Frame<DateTimeOffset,string>
member Frame.Join: otherFrame: Frame<'TRowKey,'TColumnKey> -> Frame<'TRowKey,'TColumnKey>
member Frame.Join: colKey: 'TColumnKey * series: Series<'TRowKey,'V> -> Frame<'TRowKey,'TColumnKey>
member Frame.Join: otherFrame: Frame<'TRowKey,'TColumnKey> * kind: JoinKind -> Frame<'TRowKey,'TColumnKey>
member Frame.Join: colKey: 'TColumnKey * series: Series<'TRowKey,'V> * kind: JoinKind -> Frame<'TRowKey,'TColumnKey>
member Frame.Join: otherFrame: Frame<'TRowKey,'TColumnKey> * kind: JoinKind * lookup: Lookup -> Frame<'TRowKey,'TColumnKey>
member Frame.Join: colKey: 'TColumnKey * series: Series<'TRowKey,'V> * kind: JoinKind * lookup: Lookup -> Frame<'TRowKey,'TColumnKey>
JoinKind.Outer: JoinKind = 0
<summary> Combine the keys available in both structures, align the values that are available in both of them and mark the remaining values as missing. </summary>
JoinKind.Inner: JoinKind = 1
<summary> Take the intersection of the keys available in both structures and align the values of the two structures. The resulting structure cannot contain missing values. </summary>
val join: kind: JoinKind -> frame1: Frame<'R,'C> -> frame2: Frame<'R,'C> -> Frame<'R,'C> (requires equality and equality)
<summary> Join two data frames. The columns of the joined frames must not overlap and their rows are aligned and transformed according to the specified join kind. For more alignment options on ordered frames, see `joinAlign`. </summary>
<param name="frame1">First data frame (left) to be used in the joining</param>
<param name="frame2">Other frame (right) to be joined with `frame1`</param>
<param name="kind">Specifies the joining behavior on row indices. Use `JoinKind.Outer` and `JoinKind.Inner` to get the union and intersection of the row keys, respectively. Use `JoinKind.Left` and `JoinKind.Right` to use the current key of the left/right data frame.</param>
<category>Joining, merging and zipping</category>
val joinAlign: kind: JoinKind -> lookup: Lookup -> frame1: Frame<'R,'C> -> frame2: Frame<'R,'C> -> Frame<'R,'C> (requires equality and equality)
<summary> Join two data frames. The columns of the joined frames must not overlap and their rows are aligned and transformed according to the specified join kind. When the index of both frames is ordered, it is possible to specify `lookup` in order to align indices from other frame to the indices of the main frame (typically, to find the nearest key with available value for a key). </summary>
<param name="frame1">First data frame (left) to be used in the joining</param>
<param name="frame2">Other frame (right) to be joined with `frame1`</param>
<param name="kind">Specifies the joining behavior on row indices. Use `JoinKind.Outer` and `JoinKind.Inner` to get the union and intersection of the row keys, respectively. Use `JoinKind.Left` and `JoinKind.Right` to use the current key of the left/right data frame.</param>
<param name="lookup">When `kind` is `Left` or `Right` and the two frames have ordered row index, this parameter can be used to specify how to find value for a key when there is no exactly matching key or when there are missing values.</param>
<category>Joining, merging and zipping</category>
val lf: Series<DateTimeOffset,float>
Multiple items
module Series from Deedle
<summary> The `Series` module provides an F#-friendly API for working with data and time series. The API follows the usual design for collection-processing in F#, so the functions work well with the pipelining (<c>|&gt;</c>) operator. For example, given a series with ages, we can use `Series.filterValues` to filter outliers and then `Stats.mean` to calculate the mean: ages |&gt; Series.filterValues (fun v -&gt; v &gt; 0.0 &amp;&amp; v &lt; 120.0) |&gt; Stats.mean The module provides comprehensive set of functions for working with series. The same API is also exposed using C#-friendly extension methods. In C#, the above snippet could be written as: [lang=csharp] ages .Where(kvp =&gt; kvp.Value &gt; 0.0 &amp;&amp; kvp.Value &lt; 120.0) .Mean() For more information about similar frame-manipulation functions, see the `Frame` module. For more information about C#-friendly extensions, see `SeriesExtensions`. The functions in the `Series` module are grouped in a number of categories and documented below. Accessing series data and lookup -------------------------------- Functions in this category provide access to the values in the series. - The term _observation_ is used for a key value pair in the series. - When working with a sorted series, it is possible to perform lookup using keys that are not present in the series - you can specify to search for the previous or next available value using _lookup behavior_. - Functions such as `get` and `getAll` have their counterparts `lookup` and `lookupAll` that let you specify lookup behavior. - For most of the functions that may fail, there is a `try[Foo]` variant that returns `None` instead of failing. 
- Functions with a name ending with `At` perform lookup based on the absolute integer offset (and ignore the keys of the series) Series transformations ---------------------- Functions in this category perform standard transformations on series including projections, filtering, taking some sub-series of the series, aggregating values using scanning and so on. Projection and filtering functions generally skip over missing values, but there are variants `filterAll` and `mapAll` that let you handle missing values explicitly. Keys can be transformed using `mapKeys`. When you do not need to consider the keys, and only care about values, use `filterValues` and `mapValues` (which is also aliased as the `$` operator). Series supports standard set of folding functions including `reduce` and `fold` (to reduce series values into a single value) as well as the `scan[All]` function, which can be used to fold values of a series into a series of intermeidate folding results. The functions `take[Last]` and `skip[Last]` can be used to take a sub-series of the original source series by skipping a specified number of elements. Note that this does not require an ordered series and it ignores the index - for index-based lookup use slicing, such as `series.[lo .. hi]`, instead. Finally the `shift` function can be used to obtain a series with values shifted by the specified offset. This can be used e.g. to get previous value for each key using `Series.shift 1 ts`. The `diff` function calculates difference from previous value using `ts - (Series.shift offs ts)`. Processing series with exceptions --------------------------------- The functions in this group can be used to write computations over series that may fail. They use the type <c>tryval&lt;'T&gt;</c> which is defined as a discriminated union with two cases: Success containing a value, or Error containing an exception. 
The function `tryMap` lets you create <c>Series&lt;'K, tryval&lt;'T&gt;&gt;</c> by mapping over values of an original series. You can then extract values using `tryValues`, which throws `AggregateException` if there were any errors. The functions `tryErrors` and `trySuccesses` give series containing only errors and only successes. You can fill failed values with a constant using `fillErrorsWith`. Hierarchical index operations ----------------------------- When the key of a series is a tuple, the elements of the tuple can be treated as multiple levels of an index. For example, <c>Series&lt;'K1 * 'K2, 'V&gt;</c> has two levels with keys of types <c>'K1</c> and <c>'K2</c>, respectively. The functions in this category provide a way of aggregating values in the series at one of the levels. For example, given a series `input` indexed by a two-element tuple, you can calculate the mean for different first-level values as follows: input |&gt; applyLevel fst Stats.mean Note that the `Stats` module provides helpers for typical statistical operations, so the above could be written just as `input |&gt; Stats.levelMean fst`. Grouping, windowing and chunking -------------------------------- This category includes functions that group data from a series in some way. Two key concepts here are _window_ and _chunk_. A window refers to (overlapping) sliding windows over the input series, while a chunk refers to non-overlapping blocks of the series. The boundary behavior can be specified using the `Boundary` flags. The value `Skip` means that boundaries (incomplete windows or chunks) should be skipped. The values `AtBeginning` and `AtEnding` specify on which side the boundary should be returned (or skipped). For chunking, `AtBeginning ||| Skip` also makes sense: it means that the incomplete chunk at the beginning should be skipped (aligning the last chunk with the end). 
The behavior may be specified in a number of ways (which is reflected in the function names): - `dist` - using an absolute distance between the keys - `while` - using a condition on the first and last key - `size` - by specifying the absolute size of the window/chunk The functions ending with `Into` take a function to be applied to each window/chunk. The functions `window`, `windowInto` and `chunk`, `chunkInto` are simplified versions that just take a size. There is also a `pairwise` function for a sliding window of size two. Missing values -------------- This group of functions provides a way of working with missing values in a series. The `dropMissing` function drops all keys for which there are no values in the series. The `withMissingFrom` function lets you copy missing values from another series. The remaining functions provide different mechanisms for filling the missing values. * `fillMissingWith` fills missing values with a specified constant * `fillMissingUsing` calls a specified function for every missing value * `fillMissing` and its variants propagate values from previous/later keys Sorting and index manipulation ------------------------------ A series that is sorted by keys allows a number of additional operations (such as lookup using the `Lookup.ExactOrSmaller` lookup behavior). However, it is also possible to sort a series based on the values - although the series-manipulation functions do not then guarantee that the order will be preserved. To sort a series by keys, use `sortByKey`. Other sorting functions let you sort the series using a specified comparer function (`sortWith`), using a projection function (`sortBy`) or using the default comparison (`sort`). In addition, you can replace the keys of a series with other keys using `indexWith` or with integers using `indexOrdinally`. To pick and reorder series values to match a list of keys, use `realign`. 
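As a quick sketch of the sorting and index-manipulation functions just described (the tiny series and its values are made up for illustration):

```fsharp
open Deedle

// A small sample series with out-of-order keys (hypothetical data)
let s = Series.ofObservations [ 3 => "c"; 1 => "a"; 2 => "b" ]

// Sort by keys so that ordered-series operations become available
let byKey = s |> Series.sortByKey

// Keep only the listed keys, in the given order; keys missing
// from the source become missing values in the result
let realigned = s |> Series.realign [ 2; 3; 4 ]
```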
Sampling, resampling and advanced lookup ---------------------------------------- Given a (typically) time series, sampling or resampling makes it possible to get a time series with representative values at a lower or uniform frequency. We use the following terminology: - The `lookup` and `sample` functions find values at a specified key; if the key is not available, they can look for the value associated with the nearest smaller or the nearest greater key. - The `resample` function aggregates values into chunks based on a specified collection of keys (e.g. explicitly provided times), or based on some relation between keys (e.g. date times having the same date). - `resampleUniform` is similar to resampling, but the keys are specified by providing functions that generate a uniform sequence of keys (e.g. days); the operation also fills in values for days that have no corresponding observations in the input sequence. Joining, merging and zipping ---------------------------- Given two series, there are two ways to combine the values. If the keys in the series are not overlapping (or you want to throw away values from one or the other series), then you can use `merge` or `mergeUsing`. To merge more than two series efficiently, use the `mergeAll` function, which has been optimized for a large number of series. If you want to align two series, you can use the _zipping_ operation. This aligns two series based on their keys and gives you tuples of values. The default behavior (`zip`) uses an outer join and exact matching. For ordered series, you can specify other forms of key lookup (e.g. find the greatest smaller key) using `zipAlign`. Functions ending with `Into` are generally easier to use, as they call a specified function to turn the tuple (of possibly missing values) into a new value. For more complicated behaviors, it is often convenient to use joins on frames instead of working with series. Create two frames with single columns and then use the join operation. 
The result will be a frame with two columns (which is easier to use than a series of tuples). </summary>
<category>Frame and series operations</category>
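To make the `shift` and `diff` transformations described in the summary concrete, here is a minimal sketch (the series keys and values are invented):

```fsharp
open Deedle

// A small sample series (hypothetical daily values)
let ts = Series.ofObservations [ 1 => 10.0; 2 => 12.0; 3 => 11.0 ]

// Value at the previous key for each key (the first key becomes missing)
let prev = ts |> Series.shift 1

// Difference from the previous value, i.e. ts - (Series.shift 1 ts)
let change = ts |> Series.diff 1
```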


--------------------
type Series = static member ofNullables: values: Nullable<'a> seq -> Series<int,'a> (requires default constructor and value type and 'a :> ValueType) static member ofObservations: observations: ('a * 'b) seq -> Series<'a,'b> (requires equality) static member ofOptionalObservations: observations: ('K * 'a option) seq -> Series<'K,'a> (requires equality) static member ofValues: values: 'a seq -> Series<int,'a>

--------------------
type Series<'K,'V (requires equality)> = interface ISeriesFormattable interface IFsiFormattable interface ISeries<'K> new: index: IIndex<'K> * vector: IVector<'V> * vectorBuilder: IVectorBuilder * indexBuilder: IIndexBuilder -> Series<'K,'V> + 3 overloads member After: lowerExclusive: 'K -> Series<'K,'V> member Aggregate: aggregation: Aggregation<'K> * keySelector: Func<DataSegment<Series<'K,'V>>,'TNewKey> * valueSelector: Func<DataSegment<Series<'K,'V>>,OptionalValue<'R>> -> Series<'TNewKey,'R> (requires equality) + 1 overload member AsyncMaterialize: unit -> Async<Series<'K,'V>> member Before: upperExclusive: 'K -> Series<'K,'V> member Between: lowerInclusive: 'K * upperInclusive: 'K -> Series<'K,'V> member Compare: another: Series<'K,'V> -> Series<'K,Diff<'V>> ...
<summary> The type <c>Series&lt;K, V&gt;</c> represents a data series consisting of values `V` indexed by keys `K`. The keys of a series may or may not be ordered. </summary>
<category>Core frame and series types</category>


--------------------
new: pairs: Collections.Generic.KeyValuePair<'K,'V> seq -> Series<'K,'V>
new: keys: 'K seq * values: 'V seq -> Series<'K,'V>
new: keys: 'K array * values: 'V array -> Series<'K,'V>
new: index: Indices.IIndex<'K> * vector: IVector<'V> * vectorBuilder: Vectors.IVectorBuilder * indexBuilder: Indices.IIndexBuilder -> Series<'K,'V>
val window: size: int -> series: Series<'K,'T> -> Series<'K,Series<'K,'T>> (requires equality)
<summary> Creates a sliding window using the specified size and returns the produced windows as a nested series. The key in the new series is the last key of the window. This function skips incomplete windows - you can use `Series.windowSize` for more options. </summary>
<param name="size">The size of the sliding window.</param>
<param name="series">The input series to be aggregated.</param>
<category>Grouping, windowing and chunking</category>
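For instance, `Series.window` produces nested windows that can then be aggregated; a small sketch with made-up values:

```fsharp
open Deedle

// Sample series (values invented for illustration)
let ts = Series.ofObservations [ 1 => 1.0; 2 => 2.0; 3 => 3.0; 4 => 4.0 ]

// Sliding windows of size 3, keyed by the last key of each window;
// incomplete windows at the start are skipped
let windows = ts |> Series.window 3

// Aggregate each nested window, e.g. compute a moving mean
let means = windows |> Series.mapValues (fun w -> Stats.mean w)
```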
val windowInto: size: int -> f: (Series<'K,'T> -> 'R) -> series: Series<'K,'T> -> Series<'K,'R> (requires equality)
<summary> Creates a sliding window using the specified size and then applies the provided value selector `f` on each window to produce the result, which is returned as a new series. This function skips incomplete windows - you can use `Series.windowSizeInto` for more options. </summary>
<param name="size">The size of the sliding window.</param>
<param name="series">The input series to be aggregated.</param>
<param name="f">A function that is called on each created window.</param>
<category>Grouping, windowing and chunking</category>
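A moving average is the typical use of `windowInto`; a minimal sketch (sample values are made up):

```fsharp
open Deedle

// Sample series with invented values
let ts = Series.ofObservations [ for i in 1 .. 6 -> i => float i ]

// Three-element moving mean; incomplete windows are skipped,
// so the result starts at the third key
let sma3 = ts |> Series.windowInto 3 (fun w -> Stats.mean w)
```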
type Stats = static member corr: series1: Series<'K,'V1> -> series2: Series<'K,'V2> -> float (requires equality) static member corrFrame: frame: Frame<'R,'C> -> Frame<'C,'C> (requires equality and equality) static member count: series: Series<'K,'V> -> int (requires equality) + 1 overload static member cov: series1: Series<'K,'V1> -> series2: Series<'K,'V2> -> float (requires equality) static member covFrame: frame: Frame<'R,'C> -> Frame<'C,'C> (requires equality and equality) static member describe: series: Series<'K,'V> -> Series<string,float> (requires equality and equality) + 1 overload static member expandingCount: series: Series<'K,'V> -> Series<'K,float> (requires equality) static member expandingKurt: series: Series<'K,'V> -> Series<'K,float> (requires equality) static member expandingMax: series: Series<'K,'V> -> Series<'K,float> (requires equality) static member expandingMean: series: Series<'K,'V> -> Series<'K,float> (requires equality) ...
static member Stats.mean: frame: Frame<'R,'C> -> Series<'C,float> (requires equality and equality)
static member Stats.mean: series: Series<'K,'V> -> float (requires equality)
val firstValue: series: Series<'K,'V> -> 'V (requires equality)
<summary> Returns the first value of the series. This fails if the first value is missing. </summary>
<category>Accessing series data and lookup</category>
val lfm2: Series<DateTimeOffset,float>
val windowSizeInto: int * Boundary -> f: (DataSegment<Series<'K,'T>> -> 'R) -> series: Series<'K,'T> -> Series<'K,'R> (requires equality)
<summary> Creates a sliding window using the specified size and boundary behavior and then applies the provided value selector `f` on each window to produce the result which is returned as a new series. The key is the last key of the window, unless boundary behavior is `Boundary.AtEnding` (in which case it is the first key). </summary>
<param name="bounds">Specifies the window size and boundary behavior. The boundary behavior can be `Boundary.Skip` (meaning that no incomplete windows are produced), `Boundary.AtBeginning` (meaning that incomplete windows are produced at the beginning) or `Boundary.AtEnding` (to produce incomplete windows at the end of the series)</param>
<param name="f">A value selector that is called to aggregate each window.</param>
<param name="series">The input series to be aggregated.</param>
<category>Grouping, windowing and chunking</category>
type Boundary = | AtBeginning = 1 | AtEnding = 2 | Skip = 4
<summary> Represents boundary behaviour for operations such as floating window. The type specifies whether incomplete windows (of smaller than required length) should be produced at the beginning (`AtBeginning`) or at the end (`AtEnding`) or skipped (`Skip`). For chunking, combinations are allowed too - to skip incomplete chunk at the beginning, use `Boundary.Skip ||| Boundary.AtBeginning`. </summary>
<category>Parameters and results of various operations</category>
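A sketch combining `windowSizeInto` with a `Boundary` value (the sample series is invented); each window arrives as a `DataSegment`, so the selector reads its `Data` property:

```fsharp
open Deedle

// Sample series with invented values
let ts = Series.ofObservations [ for i in 1 .. 5 -> i => float i ]

// Produce incomplete windows at the beginning instead of skipping them
let means =
  ts |> Series.windowSizeInto (3, Boundary.AtBeginning) (fun ds ->
    Stats.mean ds.Data)
```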
Boundary.AtBeginning: Boundary = 1
val ds: DataSegment<Series<DateTimeOffset,float>>
property DataSegment.Data: Series<DateTimeOffset,float> with get
<summary> Returns the data associated with the segment (for boundary segment, this may be smaller than the required window size) </summary>
val st: Series<int,char>
static member Series.ofValues: values: 'a seq -> Series<int,'a>
Boundary.AtEnding: Boundary = 2
Multiple items
union case DataSegment.DataSegment: DataSegmentKind * 'T -> DataSegment<'T>

--------------------
module DataSegment from Deedle
<summary> Provides helper functions and active patterns for working with `DataSegment` values </summary>
<category>Parameters and results of various operations</category>


--------------------
type DataSegment<'T> = | DataSegment of DataSegmentKind * 'T override ToString: unit -> string member Data: 'T member Kind: DataSegmentKind
<summary> Represents a segment of a series or sequence. The value is returned from various functions that aggregate data into chunks or floating windows. The `Complete` case represents complete segment (e.g. of the specified size) and `Boundary` represents segment at the boundary (e.g. smaller than the required size). </summary>
<example> For example (using the internal `windowed` function): <code> open Deedle.Internal Seq.windowedWithBounds 3 Boundary.AtBeginning [ 1; 2; 3; 4 ] // [| DataSegment(Incomplete, [| 1 |]); // DataSegment(Incomplete, [| 1; 2 |]); // DataSegment(Complete, [| 1; 2; 3 |]); // DataSegment(Complete, [| 2; 3; 4 |]) |] </code> If you do not need to distinguish the two cases, you can use the `Data` property to get the array representing the segment data. </example>
<category>Parameters and results of various operations</category>
active recognizer Complete: DataSegment<'a> -> Choice<'a,'a>
<summary> Complete active pattern that makes it possible to write functions that behave differently for complete and incomplete segments. For example, the following returns zero for incomplete segments: let sumSegmentOrZero = function | DataSegment.Complete(value) -&gt; Stats.sum value | DataSegment.Incomplete _ -&gt; 0.0 </summary>
val ser: Series<int,char>
Multiple items
type String = interface seq<char> interface IEnumerable interface ICloneable interface IComparable interface IComparable<string> interface IConvertible interface IEquatable<string> interface IParsable<string> interface ISpanParsable<string> new: value: nativeptr<char> -> unit + 8 overloads ...
<summary>Represents text as a sequence of UTF-16 code units.</summary>

--------------------
String(value: nativeptr<char>) : String
String(value: char array) : String
String(value: ReadOnlySpan<char>) : String
String(value: nativeptr<sbyte>) : String
String(c: char, count: int) : String
String(value: nativeptr<char>, startIndex: int, length: int) : String
String(value: char array, startIndex: int, length: int) : String
String(value: nativeptr<sbyte>, startIndex: int, length: int) : String
String(value: nativeptr<sbyte>, startIndex: int, length: int, enc: Text.Encoding) : String
val values: series: Series<'K,'T> -> 'T seq (requires equality)
<summary> Returns the (non-missing) values of the series as a sequence </summary>
<category>Accessing series data and lookup</category>
type Array = interface ICollection interface IEnumerable interface IList interface IStructuralComparable interface IStructuralEquatable interface ICloneable member Clone: unit -> obj member CopyTo: array: Array * index: int -> unit + 1 overload member GetEnumerator: unit -> IEnumerator member GetLength: dimension: int -> int ...
<summary>Provides methods for creating, manipulating, searching, and sorting arrays, thereby serving as the base class for all arrays in the common language runtime.</summary>
val ofSeq: source: 'T seq -> 'T array
active recognizer Incomplete: DataSegment<'a> -> Choice<'a,'a>
<summary> Complete active pattern that makes it possible to write functions that behave differently for complete and incomplete segments. For example, the following returns zero for incomplete segments: let sumSegmentOrZero = function | DataSegment.Complete(value) -&gt; Stats.sum value | DataSegment.Incomplete _ -&gt; 0.0 </summary>
val hourly: Series<DateTimeOffset,float>
val windowDist: distance: 'D -> series: Series<'K,'T> -> Series<'K,Series<'K,'T>> (requires comparison and equality and member (-))
<summary> Creates a sliding window based on the distance between keys. A window is started at each input element and ends once the distance between the first and the last key is greater than the specified `distance`. The windows are then returned as a nested series. The key of each window is the key of the first element in the window. </summary>
<param name="distance">The maximal allowed distance between keys of a window. Note that this is an inline function - there must be `-` operator defined between `distance` and the keys of the series.</param>
<param name="series">The input series to be aggregated.</param>
<category>Grouping, windowing and chunking</category>
val windowWhile: cond: ('K -> 'K -> bool) -> series: Series<'K,'T> -> Series<'K,Series<'K,'T>> (requires equality)
<summary> Creates a sliding window based on a condition on keys. A window is started at each input element and ends once the specified `cond` function returns `false` when called on the first and the last key of the window. The windows are then returned as a nested series. The key of each window is the key of the first element in the window. </summary>
<param name="cond">A function that is called on the first and the last key of a window to determine when a window should end.</param>
<param name="series">The input series to be aggregated.</param>
<category>Grouping, windowing and chunking</category>
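For example, `windowWhile` can group observations into windows that span a single day (the start time and values below are invented):

```fsharp
open System
open Deedle

let start = DateTimeOffset(DateTime(2024, 1, 1))

// Sample series with observations every 90 minutes (invented values)
let ts =
  Series.ofObservations
    [ for i in 0 .. 5 -> start.AddMinutes(float i * 90.0) => float i ]

// Windows that grow while the first and last key fall on the same date
let daily = ts |> Series.windowWhile (fun d1 d2 -> d1.Date = d2.Date)
```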
val d1: DateTimeOffset
val d2: DateTimeOffset
property DateTimeOffset.Date: DateTime with get
<summary>Gets a <see cref="T:System.DateTime" /> value that represents the date component of the current <see cref="T:System.DateTimeOffset" /> object.</summary>
<returns>A <see cref="T:System.DateTime" /> value that represents the date component of the current <see cref="T:System.DateTimeOffset" /> object.</returns>
val hf: Series<DateTimeOffset,float>
val chunkSize: int * Boundary -> series: Series<'K,'T> -> Series<'K,Series<'K,'T>> (requires equality)
<summary> Aggregates the input into a series of adjacent chunks using the specified size and boundary behavior and returns the produced chunks as a nested series. The key is the first key of the chunk, unless the boundary behavior has the `Boundary.AtBeginning` flag (in which case it is the last key). </summary>
<param name="bounds">Specifies the chunk size and boundary behavior. The boundary behavior can be `Boundary.Skip` (meaning that no incomplete chunks are produced), `Boundary.AtBeginning` (meaning that incomplete chunks are produced at the beginning) or `Boundary.AtEnding` (to produce incomplete chunks at the end of the series)</param>
<param name="series">The input series to be aggregated.</param>
<category>Grouping, windowing and chunking</category>
val chunkDistInto: distance: 'D -> f: (Series<'K,'T> -> 'R) -> series: Series<'K,'T> -> Series<'K,'R> (requires comparison and equality and member (-))
<summary> Aggregates the input into a series of adjacent chunks. A chunk is started once the distance between the first and the last key of a previous chunk is greater than the specified `distance`. Each chunk is then aggregated into a value using the specified function `f`. The key of each chunk is the key of the first element in the chunk. </summary>
<param name="distance">The maximal allowed distance between keys of a chunk. Note that this is an inline function - there must be `-` operator defined between `distance` and the keys of the series.</param>
<param name="f">A value selector that is called to aggregate each chunk.</param>
<param name="series">The input series to be aggregated.</param>
<category>Grouping, windowing and chunking</category>
val chunkWhile: cond: ('K -> 'K -> bool) -> series: Series<'K,'T> -> Series<'K,Series<'K,'T>> (requires equality)
<summary> Aggregates the input into a series of adjacent chunks based on a condition on keys. A chunk is started once the specified `cond` function returns `false` when called on the first and the last key of the previous chunk. The chunks are then returned as a nested series. The key of each chunk is the key of the first element in the chunk. </summary>
<param name="cond">A function that is called on the first and the last key of a chunk to determine when a chunk should end.</param>
<param name="series">The input series to be aggregated.</param>
<category>Grouping, windowing and chunking</category>
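For example, `chunkWhile` can split a series into non-overlapping blocks of observations that share the same hour (sample times and values are invented):

```fsharp
open System
open Deedle

let start = DateTimeOffset(DateTime(2024, 1, 1))

// Sample series with observations every 20 minutes (invented values)
let ts =
  Series.ofObservations
    [ for i in 0 .. 5 -> start.AddMinutes(float i * 20.0) => float i ]

// Non-overlapping chunks of observations that fall within the same hour
let hourly = ts |> Series.chunkWhile (fun k1 k2 -> k1.Hour = k2.Hour)
```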
val k1: DateTimeOffset
val k2: DateTimeOffset
property DateTimeOffset.Hour: int with get
<summary>Gets the hour component of the time represented by the current <see cref="T:System.DateTimeOffset" /> object.</summary>
<returns>The hour component of the current <see cref="T:System.DateTimeOffset" /> object. This property uses a 24-hour clock; the value ranges from 0 to 23.</returns>
property DateTimeOffset.Minute: int with get
<summary>Gets the minute component of the time represented by the current <see cref="T:System.DateTimeOffset" /> object.</summary>
<returns>The minute component of the current <see cref="T:System.DateTimeOffset" /> object, expressed as an integer between 0 and 59.</returns>
val pairwise: series: Series<'K,'T> -> Series<'K,('T * 'T)> (requires equality)
<summary> Returns a series that pairs each element with its predecessor; the value for each key (except the first) is a tuple of the previous and the current value. The returned series is one key shorter (it does not contain a value for the first key). </summary>
<param name="series">The input series to be aggregated.</param>
<example><code> let input = series [ 1 =&gt; 'a'; 2 =&gt; 'b'; 3 =&gt; 'c'] let res = input |&gt; Series.pairwise res = series [2 =&gt; ('a', 'b'); 3 =&gt; ('b', 'c') ] </code></example>
<category>Grouping, windowing and chunking</category>
val pairwiseWith: f: ('K -> 'T * 'T -> 'a) -> series: Series<'K,'T> -> Series<'K,'a> (requires equality)
<summary> Aggregates the input into pairs of each element and its predecessor (skipping the first element) and then calls the specified aggregation function `f` with the key and the tuple. The returned series is one key shorter (it does not contain a value for the first key). </summary>
<param name="f">A function that is called for each pair to produce result in the final series.</param>
<param name="series">The input series to be aggregated.</param>
<category>Grouping, windowing and chunking</category>
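A sketch of `pairwiseWith` computing the change between consecutive values (the sample series is made up):

```fsharp
open Deedle

// Sample series with invented values
let ts = Series.ofObservations [ 1 => 10.0; 2 => 12.0; 3 => 11.0 ]

// For each key (except the first), compute the change from the
// previous value; the key passed to the function is ignored here
let changes = ts |> Series.pairwiseWith (fun _ (prev, curr) -> curr - prev)
```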
val k: DateTimeOffset
val v1: float
val v2: float
val mf: Series<DateTimeOffset,float>
TimeSpan.FromSeconds(seconds: int64) : TimeSpan
TimeSpan.FromSeconds(value: float) : TimeSpan
TimeSpan.FromSeconds(seconds: int64, ?milliseconds: int64, ?microseconds: int64) : TimeSpan
val keys: DateTimeOffset list
val m: float
DateTimeOffset.AddMinutes(minutes: float) : DateTimeOffset
val lookupAll: keys: 'K seq -> lookup: Lookup -> series: Series<'K,'T> -> Series<'K,'T> (requires equality)
<summary> Creates a new series that contains values for all provided keys, using the specified lookup semantics - for exact matching, use `getAll` instead. </summary>
<param name="keys">A sequence of keys that will form the keys of the returned series</param>
<param name="lookup">Lookup behavior to use when the value at the specified key does not exist</param>
<param name="series">The input series</param>
<category>Accessing series data and lookup</category>
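A minimal sketch of `lookupAll` with inexact matching on an ordered series (keys and values are invented):

```fsharp
open Deedle

// Ordered sample series (invented data)
let ts = Series.ofObservations [ 10 => "a"; 20 => "b"; 30 => "c" ]

// For keys without an exact match, take the value of the nearest
// greater key; 35 has no greater key, so its value stays missing
let sampled = ts |> Series.lookupAll [ 5; 15; 35 ] Lookup.ExactOrGreater
```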
Lookup.ExactOrGreater: Lookup = 3
<summary> Lookup a value associated with the specified key or with the nearest greater key that has a value available. Fails (or returns missing value) only when the specified key is greater than all available keys. </summary>
val resample: keys: 'K seq -> dir: Direction -> series: Series<'K,'V> -> Series<'K,Series<'K,'V>> (requires equality)
<summary> Resample the series based on a provided collection of keys. The values of the series are aggregated into chunks based on the specified keys. Depending on `direction`, the specified key is either used as the smallest or as the greatest key of the chunk (with the exception of boundaries that are added to the first/last chunk). Such chunks are then returned as nested series. </summary>
<param name="series">An input series to be resampled</param>
<param name="keys">A collection of keys to be used for resampling of the series</param>
<param name="dir">If this parameter is `Direction.Forward`, then each key is used as the smallest key in a chunk; for `Direction.Backward`, the keys are used as the greatest keys in a chunk.</param>
<remarks> This operation is only supported on ordered series. The method throws `InvalidOperationException` when the series is not ordered. </remarks>
<category>Sampling, resampling and advanced lookup</category>
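A sketch of `resample` with an explicit collection of keys (the start time and values are invented):

```fsharp
open System
open Deedle

let start = DateTimeOffset(DateTime(2024, 1, 1))

// Sample series with observations every 45 minutes (invented values)
let ts =
  Series.ofObservations
    [ for i in 0 .. 5 -> start.AddMinutes(float i * 45.0) => float i ]

// Chunk the series at the two explicit keys; with Direction.Forward,
// each key becomes the smallest key of its chunk
let keys = [ start; start.AddMinutes 120.0 ]
let chunks = ts |> Series.resample keys Direction.Forward
```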
type Direction = | Backward = 0 | Forward = 1
<summary> Specifies in which direction should we look when performing operations such as `Series.Pairwise`. </summary>
<example><code> let abc = [ 1 =&gt; "a"; 2 =&gt; "b"; 3 =&gt; "c" ] |&gt; Series.ofObservations // Using 'Forward' the key of the first element is used abc.Pairwise(direction=Direction.Forward) // [ 1 =&gt; ("a", "b"); 2 =&gt; ("b", "c") ] // Using 'Backward' the key of the second element is used abc.Pairwise(direction=Direction.Backward) // [ 2 =&gt; ("a", "b"); 3 =&gt; ("b", "c") ] </code></example>
<category>Parameters and results of various operations</category>
Direction.Forward: Direction = 1
val resampleInto: keys: 'K seq -> dir: Direction -> f: ('K -> Series<'K,'V> -> 'a) -> series: Series<'K,'V> -> Series<'K,'a> (requires equality)
<summary> Resample the series based on a provided collection of keys. The values of the series are aggregated into chunks based on the specified keys. Depending on `direction`, the specified key is either used as the smallest or as the greatest key of the chunk (with the exception of boundaries that are added to the first/last chunk). Such chunks are then aggregated using the provided function `f`. </summary>
<param name="series">An input series to be resampled</param>
<param name="keys">A collection of keys to be used for resampling of the series</param>
<param name="dir">If this parameter is `Direction.Forward`, then each key is used as the smallest key in a chunk; for `Direction.Backward`, the keys are used as the greatest keys in a chunk.</param>
<param name="f">A function that is used to collapse a generated chunk into a single value. Note that this function may be called with empty series.</param>
<remarks> This operation is only supported on ordered series. The method throws `InvalidOperationException` when the series is not ordered. </remarks>
<category>Sampling, resampling and advanced lookup</category>
Direction.Backward: Direction = 0
val s: Series<DateTimeOffset,float>
val ds: Series<DateTimeOffset,float>
TimeSpan.FromHours(hours: int) : TimeSpan
TimeSpan.FromHours(value: float) : TimeSpan
TimeSpan.FromHours(hours: int, ?minutes: int64, ?seconds: int64, ?milliseconds: int64, ?microseconds: int64) : TimeSpan
val resampleEquiv: keyProj: ('K1 -> 'K2) -> series: Series<'K1,'V1> -> Series<'K2,Series<'K1,'V1>> (requires equality and equality)
<summary> Resample the series based on equivalence class on the keys. A specified function `keyProj` is used to project keys to another space and the observations for which the projected keys are equivalent are grouped into chunks. The chunks are then returned as nested series. </summary>
<param name="series">An input series to be resampled</param>
<param name="keyProj">A function that transforms keys from original space to a new space (which is then used for grouping based on equivalence)</param>
<remarks> This function is similar to `Series.chunkBy`, with the exception that it transforms keys to a new space. This operation is only supported on ordered series. The method throws `InvalidOperationException` when the series is not ordered. For unordered series, similar functionality can be implemented using `Series.groupBy`. </remarks>
<category>Sampling, resampling and advanced lookup</category>
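For example, `resampleEquiv` can group an hourly series by date (the series below is invented sample data):

```fsharp
open System
open Deedle

let start = DateTimeOffset(DateTime(2024, 1, 1))

// Two days of hourly observations (invented values)
let ts =
  Series.ofObservations
    [ for i in 0 .. 47 -> start.AddHours(float i) => float i ]

// Group observations whose keys project to the same date
let daily = ts |> Series.resampleEquiv (fun (d: DateTimeOffset) -> d.Date)
```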
val d: DateTimeOffset
static member SeriesExtensions.ResampleEquivalence: series: Series<'K,'V> * keyProj: Func<'K,'a> -> Series<'a,Series<'K,'V>> (requires equality and equality)
static member SeriesExtensions.ResampleEquivalence: series: Series<'K,'V> * keyProj: Func<'K,'a> * aggregate: Func<Series<'K,'V>,'b> -> Series<'a,'b> (requires equality and equality)
val days: string list
val nu: Series<DateTimeOffset,float>
val indexWith: keys: 'K2 seq -> series: Series<'K1,'T> -> Series<'K2,'T> (requires equality and equality)
<summary> Returns a new series containing the specified keys mapped to the original values of the series. When the sequence contains _fewer_ keys, the values from the series are dropped. When it contains _more_ keys, the values for additional keys are missing. </summary>
<category>Sorting and index manipulation</category>
val mapKeys: f: ('K -> 'R) -> series: Series<'K,'T> -> Series<'R,'T> (requires equality and equality)
<summary> Returns a new series whose keys are the results of applying the given function to keys of the original series. </summary>
<category>Series transformations</category>
DateTimeOffset.Parse(input: string) : DateTimeOffset
DateTimeOffset.Parse(input: string, formatProvider: IFormatProvider) : DateTimeOffset
DateTimeOffset.Parse(s: ReadOnlySpan<char>, provider: IFormatProvider) : DateTimeOffset
DateTimeOffset.Parse(input: string, formatProvider: IFormatProvider, styles: Globalization.DateTimeStyles) : DateTimeOffset
DateTimeOffset.Parse(input: ReadOnlySpan<char>, ?formatProvider: IFormatProvider, ?styles: Globalization.DateTimeStyles) : DateTimeOffset
val sampled: Series<DateTime,Series<DateTimeOffset,float>>
val resampleUniform: fillMode: Lookup -> keyProj: ('K1 -> 'K2) -> nextKey: ('K2 -> 'K2) -> series: Series<'K1,'V> -> Series<'K2,Series<'K1,'V>> (requires equality and comparison)
<summary> Resample the series based on an equivalence class on the keys and also generate values for all keys of the target space that are between the minimal and maximal key of the specified series (e.g. generate a value for all days in the range covered by the series). The specified function `keyProj` is used to project keys to another space and `nextKey` is used to generate all keys in the range. The chunks are then returned as a nested series. When there are no values for a (generated) key, the function behaves according to `fillMode`: it can look at the greatest value of the previous chunk or the smallest value of the next chunk, or it produces an empty series. </summary>
<param name="series">An input series to be resampled</param>
<param name="fillMode">When set to `Lookup.NearestSmaller` or `Lookup.NearestGreater`, the function searches for a nearest available observation in a neighboring chunk. Otherwise, the function `f` is called with an empty series as an argument.</param>
<param name="keyProj">A function that transforms keys from original space to a new space (which is then used for grouping based on equivalence)</param>
<param name="nextKey">A function that gets the next key in the transformed space</param>
<remarks> This operation is only supported on ordered series. The method throws `InvalidOperationException` when the series is not ordered. </remarks>
<category>Sampling, resampling and advanced lookup</category>
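A sketch of `resampleUniform` that generates a chunk for every day in the covered range; days without observations borrow the nearest preceding value (the sample series with a gap is invented):

```fsharp
open System
open Deedle

let start = DateTimeOffset(DateTime(2024, 1, 1))

// A series with a gap of two days between observations (invented data)
let ts =
  Series.ofObservations [ start => 1.0; start.AddDays 3.0 => 2.0 ]

// Project keys to dates, generate all days in the range, and fill
// empty days by looking at the nearest smaller (preceding) chunk
let daily =
  ts
  |> Series.resampleUniform Lookup.ExactOrSmaller
       (fun (d: DateTimeOffset) -> d.Date)
       (fun d -> d.AddDays 1.0)
```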
val dt: DateTime
DateTime.AddDays(value: float) : DateTime
val mapValues: f: ('T -> 'R) -> series: Series<'K,'T> -> Series<'K,'R> (requires equality)
<summary> Returns a new series whose values are the results of applying the given function to values of the original series. This function skips over missing values and calls the function only on available values. It is also aliased using the `$` operator so you can write `series $ func` for `series |&gt; Series.mapValues func`. </summary>
<category>Series transformations</category>
val indexOrdinally: series: Series<'K,'T> -> Series<int,'T> (requires equality)
<summary> Returns a new series containing the same values as the original series, but with an ordinal index formed by `int` values starting from 0. </summary>
<category>Sorting and index manipulation</category>
static member Frame.ofRows: rows: ('R * #ISeries<'C>) seq -> Frame<'R,'C> (requires equality and equality)
static member Frame.ofRows: rows: Series<'R,#ISeries<'C>> -> Frame<'R,'C> (requires equality and equality)
val pr: Series<DateTimeOffset,float>
val sampleTime: interval: TimeSpan -> dir: Direction -> series: Series<'a,'b> -> Series<'a,Series<'a,'b>> (requires equality and member (+))
<summary> Performs sampling by time and returns chunks obtained by time-sampling as a nested series. The operation generates keys starting at the first key in the source series, using the specified `interval` and then obtains chunks based on these keys in a fashion similar to the `Series.resample` function. </summary>
<param name="series">An input series to be resampled</param>
<param name="interval">The interval between the individual samples</param>
<param name="dir">If this parameter is `Direction.Forward`, then each key is used as the smallest key in a chunk; for `Direction.Backward`, the keys are used as the greatest keys in a chunk.</param>
<remarks> This operation is only supported on ordered series. The method throws `InvalidOperationException` when the series is not ordered. </remarks>
<category>Sampling, resampling and advanced lookup</category>
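A sketch using the half-hourly series `s2` defined earlier, chunked into hour-long windows:

```fsharp
// Each generated key marks the start of an hour-long chunk (Direction.Forward);
// the result is a nested series: Series<DateTimeOffset, Series<DateTimeOffset, float>>
let hourlyChunks =
  s2 |> Series.sampleTime (TimeSpan(1, 0, 0)) Direction.Forward
```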
val sampleTimeInto: interval: TimeSpan -> dir: Direction -> f: (Series<'K,'V> -> 'a) -> series: Series<'K,'V> -> Series<'K,'a> (requires equality and member (+))
<summary> Performs sampling by time and aggregates chunks obtained by time-sampling into a single value using a specified function. The operation generates keys starting at the first key in the source series, using the specified `interval` and then obtains chunks based on these keys in a fashion similar to the `Series.resample` function. </summary>
<param name="series">An input series to be resampled</param>
<param name="interval">The interval between the individual samples</param>
<param name="dir">If this parameter is `Direction.Forward`, then each key is used as the smallest key in a chunk; for `Direction.Backward`, the keys are used as the greatest keys in a chunk.</param>
<param name="f">A function that is called to aggregate each chunk into a single value.</param>
<remarks> This operation is only supported on ordered series. The method throws `InvalidOperationException` when the series is not ordered. </remarks>
<category>Sampling, resampling and advanced lookup</category>
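The aggregating variant collapses each chunk into a single value with `f`; for example (a sketch, using `Stats.mean` as the aggregation over the half-hourly series `s2`):

```fsharp
// Mean price of each hour-long chunk, keyed by the chunk's starting time
let hourlyMeans =
  s2 |> Series.sampleTimeInto (TimeSpan(1, 0, 0)) Direction.Forward Stats.mean
```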
val lastValue: series: Series<'K,'V> -> 'V (requires equality)
<summary> Returns the last value of the series. This fails if the last value is missing. </summary>
<category>Accessing series data and lookup</category>
val sample: Series<DateTimeOffset,float>
val diff1: Series<DateTimeOffset,float>
val diff: offset: int -> series: Series<'K,'T> -> Series<'K,'T> (requires equality and member (-))
<summary> Returns a series containing the difference between a value in the original series and the value at the specified offset. For example, calling `Series.diff 1 s` returns a series where the previous value is subtracted from the current one. In pseudo-code, the function behaves as follows: result[k] = series[k] - series[k - offset] </summary>
<param name="offset">When positive, subtracts the past values from the current values; when negative, subtracts the future values from the current values.</param>
<param name="series">The input series, containing values that support the `-` operator.</param>
<category>Series transformations</category>
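A sketch on the hourly sample series `s1` from above:

```fsharp
// Hour-over-hour change: result[k] = s1.[k] - s1.[k - 1 observation]
let changes = s1 |> Series.diff 1

// A negative offset subtracts the future value instead
let changesBack = s1 |> Series.diff (-1)
```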
val shift1: Series<DateTimeOffset,float>
val shift: offset: int -> series: Series<'K,'T> -> Series<'K,'T> (requires equality)
<summary> Returns a series with values shifted by the specified offset. When the offset is positive, the values are shifted forward and the first `offset` keys are dropped. When the offset is negative, the values are shifted backwards and the last `offset` keys are dropped. Expressed in pseudo-code: result[k] = series[k - offset] </summary>
<param name="offset">The offset, which can be either a positive or a negative number.</param>
<param name="series">The input series to be shifted.</param>
<remarks> If you want to calculate the difference, e.g. `s - (Series.shift 1 s)`, you can use `Series.diff`, which is slightly faster. </remarks>
<category>Series transformations</category>
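A sketch of shifting and of the manual difference mentioned in the remark, again on the sample series `s1`; series subtraction aligns the two series by key automatically:

```fsharp
// Lag the series by one observation; the first key is dropped
let lagged = s1 |> Series.shift 1

// Difference via shift — equivalent to `Series.diff 1 s1`, but slower
let manualDiff = s1 - lagged
```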
val alignedDf: Frame<DateTimeOffset,string>
static member SeriesExtensions.Shift: series: Series<'K,'V> * offset: int -> Series<'K,'V> (requires equality)
val log: value: 'T -> 'T (requires member Log)
static member SeriesExtensions.Diff: series: Series<'K,float> * offset: int -> Series<'K,float> (requires equality)
val abs: value: 'T -> 'T (requires member Abs)
val adjust: v: float -> float
val v: float
val min: e1: 'T -> e2: 'T -> 'T (requires comparison)
val max: e1: 'T -> e2: 'T -> 'T (requires comparison)
static member Stats.sum: frame: Frame<'R,'C> -> Series<'C,float> (requires equality and equality)
static member Stats.sum: series: Series<'K,'V> -> float (requires equality)
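The two `Stats.sum` overloads apply either to a single series or column-wise to a frame; a sketch using the series `s1` and the frame `alignedDf` declared on this page:

```fsharp
// Sum of all present values in one series
let total = Stats.sum s1

// Column-wise sums of a frame, returned as a series keyed by column name
let columnTotals = Stats.sum alignedDf
```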