
Working with series and time series

In this section, we look at features of the F# data frame library that are useful when working with time series data or, more generally, any ordered series. Although we mainly look at operations on the Series type, many of these operations can also be applied to a data frame (Frame) containing multiple series. Furthermore, the data frame provides an elegant way of aligning and joining series.

Generating input data

For the purpose of this tutorial, we'll need some input data. We use a function that generates random prices using geometric Brownian motion.

/// Generates price using geometric Brownian motion
///  - 'seed' specifies the seed for random number generator
///  - 'drift' and 'volatility' set properties of the price movement
///  - 'initial' and 'start' specify the initial price and date
///  - 'span' specifies time span between individual observations
///  - 'count' is the number of required values to generate
let randomPrice seed drift volatility initial start span count = 
  let dist = Normal(0.0, 1.0, RandomSource=Random(seed))  
  let dt = (span:TimeSpan).TotalDays / 250.0
  let driftExp = (drift - 0.5 * pown volatility 2) * dt
  let randExp = volatility * (sqrt dt)
  ((start:DateTimeOffset), initial) |> Seq.unfold (fun (dt, price) ->
    let price = price * exp (driftExp + randExp * dist.Sample()) 
    Some((dt, price), (dt + span, price))) |> Seq.take count

// 12:00 AM today, in current time zone
let today = DateTimeOffset(DateTime.Today)
let stock1 = randomPrice 1 0.1 3.0 20.0 today 
let stock2 = randomPrice 2 0.2 1.5 22.0 today

To get random prices, we now only need to call stock1 or stock2 with a TimeSpan and the required number of prices.
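For example, we can ask for four prices at 30-minute intervals. This is just an illustrative sketch (the printed prices depend on the random seed):

```fsharp
// Four observations, 30 minutes apart, starting at midnight today
stock1 (TimeSpan(0, 30, 0)) 4
|> Seq.iter (fun (d, price) -> printfn "%O  %.2f" d price)
```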

Data alignment and zipping

One of the key features of the data frame library for working with time series data is automatic alignment based on the keys. When we have multiple time series with date as the key (here, we use DateTimeOffset), we can combine multiple series and align them automatically to specified date keys.

To demonstrate this feature, we generate random prices at 60-minute, 30-minute and 65-minute intervals:

let s1 = stock1 (TimeSpan(1, 0, 0)) 6 |> series
let s2 = stock2 (TimeSpan(0, 30, 0)) 12 |> series
let s3 = stock1 (TimeSpan(1, 5, 0)) 6 |> series

Zipping time series

A series exposes a Zip operation that combines two series into a single series of pairs.

// Match values from right series to keys of the left one
s1.Zip(s2, JoinKind.Left)

// Match values from the left series to keys of the right one
s1.Zip(s2, JoinKind.Right)

// Use left series key and find the nearest previous value from the right series
s1.Zip(s2, JoinKind.Left, Lookup.ExactOrSmaller)

Joining data frames

When we store data in data frames, we can simply use a data frame with multiple columns instead of a series of tuples. Let's first create three data frames:

// Contains value for each hour
let f1 = Frame.ofColumns ["S1" => s1]
// Contains value every 30 minutes
let f2 = Frame.ofColumns ["S2" => s2]
// Contains values with 65 minute offsets
let f3 = Frame.ofColumns ["S3" => s3]

// Union keys from both frames and align corresponding values
f1.Join(f2, JoinKind.Outer)

// Take only keys where both frames contain all values
f2.Join(f3, JoinKind.Inner)

// Take keys from the left frame and find nearest smaller value from the right frame
f2.Join(f3, JoinKind.Left, Lookup.ExactOrSmaller)

// Equivalent using function syntax 
Frame.join JoinKind.Outer f1 f2
Frame.joinAlign JoinKind.Left Lookup.ExactOrSmaller f1 f2

Windowing, chunking and pairwise

Windowing and chunking are two operations on ordered series that aggregate the values of a series into groups. Both operations work on consecutive elements, in contrast to grouping, which does not rely on ordering.

Sliding windows

// Create input series with 6 observations
let lf = stock1 (TimeSpan(0, 1, 0)) 6 |> series

// Create series of series representing individual windows
lf |> Series.window 4
// Aggregate each window using 'Stats.mean'
lf |> Series.windowInto 4 Stats.mean
// Get first value in each window
lf |> Series.windowInto 4 Series.firstValue

The functions above create windows of size 4 that move from left to right. Given the input [1,2,3,4,5,6], this produces three windows: [1,2,3,4], [2,3,4,5] and [3,4,5,6].
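To make the aggregation concrete, here is a small sketch (not part of the original tutorial) using an ordinally keyed series of the numbers 1 to 6; with window size 4, Stats.sum produces the sums 10, 14 and 18 for the three windows above:

```fsharp
// A small series with keys 1..6 and values 1.0 .. 6.0
let tiny = series [ for i in 1 .. 6 -> i, float i ]
// Sums of the sliding windows [1;2;3;4], [2;3;4;5] and [3;4;5;6]
tiny |> Series.windowInto 4 Stats.sum   // 10.0, 14.0, 18.0
```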

Performance note: Series.windowInto materialises each window as a full series before calling the aggregation function. This gives O(n × window) time and allocation. When computing rolling statistics (mean, standard deviation, variance, etc.), prefer the dedicated Stats.moving* functions (e.g. Stats.movingMean, Stats.movingStdDev), which use an online algorithm and run in O(n) time:

// Fast – O(n) online algorithm
lf |> Stats.movingMean 4

// Slow – O(n × window), allocates a series per step
lf |> Series.windowInto 4 Stats.mean

See the Statistics documentation for the full list of Stats.moving* and Stats.expanding* functions.

What if we want to avoid creating <missing> values? One approach is to specify that we want to generate windows of smaller sizes at the beginning or at the end. This way, we get incomplete windows at the boundary:

let lfm2 = 
  // Create sliding windows with incomplete windows at the beginning
  lf |> Series.windowSizeInto (4, Boundary.AtBeginning) (fun ds ->
    Stats.mean ds.Data)

Frame.ofColumns [ "Orig" => lf; "Means" => lfm2 ]

In the previous sample, the code that performs the aggregation is a lambda that takes ds, a value of type DataSegment<'T>. This type tells us whether the window is complete or not:

// Simple series with characters
let st = Series.ofValues [ 'a' .. 'e' ]
st |> Series.windowSizeInto (3, Boundary.AtEnding) (function
  | DataSegment.Complete(ser) -> 
      // Return complete windows as uppercase strings
      String(ser |> Series.values |> Array.ofSeq).ToUpper()
  | DataSegment.Incomplete(ser) -> 
      // Return incomplete windows as padded lowercase strings
      String(ser |> Series.values |> Array.ofSeq).PadRight(3, '-') )  

Window size conditions

Besides a fixed size, there are two other ways to specify when a window ends: by the distance between the first and last key of the window, or by a predicate on pairs of keys.

// Generate prices for each hour over 30 days
let hourly = stock1 (TimeSpan(1, 0, 0)) (30*24) |> series

// Generate windows of size 1 day
hourly |> Series.windowDist (TimeSpan(24, 0, 0))

// Generate windows such that date in each window is the same
hourly |> Series.windowWhile (fun d1 d2 -> d1.Date = d2.Date)

Chunking series

Chunking is similar to windowing, but it creates non-overlapping chunks rather than (overlapping) sliding windows:
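As a minimal sketch of the difference (not from the original text), chunking a series of the numbers 1 to 6 into chunks of size 2 produces the non-overlapping chunks [1,2], [3,4] and [5,6]:

```fsharp
let tinyC = series [ for i in 1 .. 6 -> i, float i ]
// Sums of the three chunks: 3.0, 7.0 and 11.0
tinyC |> Series.chunkInto 2 Stats.sum
```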

// Generate per-second observations over 10 minutes
let hf = stock1 (TimeSpan(0, 0, 1)) 600 |> series

// Create 10 second chunks with a (possibly) incomplete chunk at the end
hf |> Series.chunkSize (10, Boundary.AtEnding) 

// Create 10 second chunks and get the first observation for each (downsample)
hf |> Series.chunkDistInto (TimeSpan(0, 0, 10)) Series.firstValue

// Create chunks where hh:mm component is the same
hf |> Series.chunkWhile (fun k1 k2 -> 
  (k1.Hour, k1.Minute) = (k2.Hour, k2.Minute))

Pairwise

A special form of windowing builds a series of pairs containing the current and the previous value of the input series:

// Create a series of pairs from earlier 'hf' input
hf |> Series.pairwise 

// Calculate differences between the current and previous values
hf |> Series.pairwiseWith (fun k (v1, v2) -> v2 - v1)
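On a small concrete input, the behaviour can be sketched as follows (the series and its values are hypothetical, not from the original text):

```fsharp
// Keys 0..3 with square values 0.0, 1.0, 4.0, 9.0
let sq = series [ for i in 0 .. 3 -> i, float (i * i) ]
// Pairs of (previous, current) values, keyed by the later key
sq |> Series.pairwise
// Differences between consecutive squares: 1.0, 3.0, 5.0
sq |> Series.pairwiseWith (fun k (v1, v2) -> v2 - v1)
```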

Sampling and resampling time series

Lookup

Given a series hf, you can get the value at a specified key using hf.Get(key). When you need values for a larger number of keys at once, or a more flexible lookup behavior, you can use Series.lookupAll:
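For a single key, the Get method also accepts a Lookup behavior. The following sketch (with a hypothetical key) reads the nearest preceding observation from the per-second hf series defined earlier:

```fsharp
// Exact lookup would fail for a key between observations;
// 'ExactOrSmaller' falls back to the nearest smaller key with a value
hf.Get(today.AddSeconds(3.5), Lookup.ExactOrSmaller)
```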

// Generate a bit less than 24 hours of data with 13.7sec offsets
let mf = stock1 (TimeSpan.FromSeconds(13.7)) 6300 |> series
// Generate keys for all minutes in 24 hours
let keys = [ for m in 0.0 .. 24.0*60.0-1.0 -> today.AddMinutes(m) ]

// Find value for a given key, or nearest greater key with value
mf |> Series.lookupAll keys Lookup.ExactOrGreater

// Find value for nearest smaller key
mf |> Series.lookupAll keys Lookup.ExactOrSmaller

Resampling

Resampling aggregates the values of a series according to a provided collection of keys: for each key, we collect all values of the series that fall between it and the neighbouring key, looking either forward or backward depending on the specified Direction:

// For each key, collect values for greater keys until the next one
mf |> Series.resample keys Direction.Forward

// Aggregate each chunk of preceding values using mean
mf |> Series.resampleInto keys Direction.Backward 
  (fun k s -> Stats.mean s)

The second kind of resampling is based on a projection from the existing keys of the series. A typical scenario is a time series with date-time keys from which we want to get aggregate information for each day:

// Generate 2.5 months of data in 1.7 hour offsets
let ds = stock1 (TimeSpan.FromHours(1.7)) 1000 |> series

// Sample by day
ds |> Series.resampleEquiv (fun d -> d.Date)
ds.ResampleEquivalence(fun d -> d.Date)

Uniform resampling

If you want to obtain a sample that assigns a value to every key in the range specified by the input sequence (including days with no observations), you can use uniform resampling:

// Create input data with non-uniformly distributed keys
let days =
  [ "10/3/2013 12:00:00"; "10/4/2013 15:00:00" 
    "10/4/2013 18:00:00"; "10/4/2013 19:00:00"
    "10/6/2013 15:00:00"; "10/6/2013 21:00:00" ]
let nu = 
  stock1 (TimeSpan(24,0,0)) 10 |> series
  |> Series.indexWith days |> Series.mapKeys DateTimeOffset.Parse

// Generate uniform resampling based on dates, fill missing with nearest smaller
let sampled =
  nu |> Series.resampleUniform Lookup.ExactOrSmaller 
    (fun dt -> dt.Date) (fun dt -> dt.AddDays(1.0))

// Turn into frame with multiple columns for each day
sampled 
|> Series.mapValues Series.indexOrdinally
|> Frame.ofRows

Sampling time series

// Generate 1k observations with 1.7 hour offsets
let pr = stock1 (TimeSpan.FromHours(1.7)) 1000 |> series

// Sample at 2 hour intervals; 'Backward' specifies that we collect all previous values
pr |> Series.sampleTime (TimeSpan(2, 0, 0)) Direction.Backward

// Get the most recent value, sampled at 2 hour intervals
pr |> Series.sampleTimeInto
  (TimeSpan(2, 0, 0)) Direction.Backward Series.lastValue

Calculations and statistics

Shifting and differences

// Generate sample data with 1.7 hour offsets
let sample = stock1 (TimeSpan.FromHours(1.7)) 6 |> series

// Calculates: new[i] = s[i] - s[i-1]
let diff1 = sample |> Series.diff 1

// Shift series values by 1
let shift1 = sample |> Series.shift 1

// Align all results in a frame to see the results
let alignedDf = 
  [ "Shift +1" => shift1 
    "Diff +1" => diff1 
    "Diff" => sample - shift1 
    "Orig" => sample ] |> Frame.ofColumns 

Operators and functions

Time series support a large number of standard F# functions, such as log and abs. You can also use standard numerical operators to apply an operation to all elements of a series; binary operators automatically align the two series before applying the operation:

// Subtract previous value from the current value
sample - sample.Shift(1)
// Calculate logarithm of such differences
log (sample - sample.Shift(1))
// Calculate square of differences
sample.Diff(1) ** 2.0
// Get absolute value of differences
abs (sample - sample.Shift(1))
// Get absolute value of distance from the mean
abs (sample - (Stats.mean sample))

// Apply a custom function to all elements
let adjust v = min 1.0 (max (-1.0) v)
adjust $ sample.Diff(1)

Data frame operations

Many of the time series operations can be applied to entire data frames as well:

// Multiply all numeric columns by a given constant
alignedDf * 0.65

// Sum each column and divide results by a constant
Stats.sum alignedDf / 6.0
// Divide sum by mean of each frame column
Stats.sum alignedDf / Stats.mean alignedDf
namespace System
namespace Deedle
namespace MathNet
namespace MathNet.Numerics
namespace MathNet.Numerics.Distributions
val randomPrice: seed: int -> drift: float -> volatility: float -> initial: float -> start: DateTimeOffset -> span: TimeSpan -> count: int -> (DateTimeOffset * float) seq
 Generates price using geometric Brownian motion
  - 'seed' specifies the seed for random number generator
  - 'drift' and 'volatility' set properties of the price movement
  - 'initial' and 'start' specify the initial price and date
  - 'span' specifies time span between individual observations
  - 'count' is the number of required values to generate
val seed: int
val drift: float
val volatility: float
val initial: float
val start: DateTimeOffset
val span: TimeSpan
val count: int
val dist: Normal
Multiple items
type Normal = interface IContinuousDistribution interface IUnivariateDistribution interface IDistribution new: unit -> unit + 3 overloads member CumulativeDistribution: x: float -> float member Density: x: float -> float member DensityLn: x: float -> float member InverseCumulativeDistribution: p: float -> float member Sample: unit -> float + 2 overloads member Samples: values: float array -> unit + 5 overloads ...
<summary> Continuous Univariate Normal distribution, also known as Gaussian distribution. For details about this distribution, see <a href="http://en.wikipedia.org/wiki/Normal_distribution">Wikipedia - Normal distribution</a>. </summary>

--------------------
Normal() : Normal
Normal(randomSource: Random) : Normal
Normal(mean: float, stddev: float) : Normal
Normal(mean: float, stddev: float, randomSource: Random) : Normal
Multiple items
type Random = new: unit -> unit + 1 overload member GetItems<'T> : choices: ReadOnlySpan<'T> * length: int -> 'T array + 2 overloads member Next: unit -> int + 2 overloads member NextBytes: buffer: byte array -> unit + 1 overload member NextDouble: unit -> float member NextInt64: unit -> int64 + 2 overloads member NextSingle: unit -> float32 member Shuffle<'T> : values: Span<'T> -> unit + 1 overload static member Shared: Random
<summary>Represents a pseudo-random number generator, which is an algorithm that produces a sequence of numbers that meet certain statistical requirements for randomness.</summary>

--------------------
Random() : Random
Random(Seed: int) : Random
val dt: float
Multiple items
[<Struct>] type TimeSpan = new: hours: int * minutes: int * seconds: int -> unit + 4 overloads member Add: ts: TimeSpan -> TimeSpan member CompareTo: value: obj -> int + 1 overload member Divide: divisor: float -> TimeSpan + 1 overload member Duration: unit -> TimeSpan member Equals: value: obj -> bool + 2 overloads member GetHashCode: unit -> int member Multiply: factor: float -> TimeSpan member Negate: unit -> TimeSpan member Subtract: ts: TimeSpan -> TimeSpan ...
<summary>Represents a time interval.</summary>

--------------------
TimeSpan ()
TimeSpan(ticks: int64) : TimeSpan
TimeSpan(hours: int, minutes: int, seconds: int) : TimeSpan
TimeSpan(days: int, hours: int, minutes: int, seconds: int) : TimeSpan
TimeSpan(days: int, hours: int, minutes: int, seconds: int, milliseconds: int) : TimeSpan
TimeSpan(days: int, hours: int, minutes: int, seconds: int, milliseconds: int, microseconds: int) : TimeSpan
val driftExp: float
val pown: x: 'T -> n: int -> 'T (requires member One and member ( * ) and member (/))
val randExp: float
val sqrt: value: 'T -> 'U (requires member Sqrt)
Multiple items
[<Struct>] type DateTimeOffset = new: date: DateOnly * time: TimeOnly * offset: TimeSpan -> unit + 8 overloads member Add: timeSpan: TimeSpan -> DateTimeOffset member AddDays: days: float -> DateTimeOffset member AddHours: hours: float -> DateTimeOffset member AddMicroseconds: microseconds: float -> DateTimeOffset member AddMilliseconds: milliseconds: float -> DateTimeOffset member AddMinutes: minutes: float -> DateTimeOffset member AddMonths: months: int -> DateTimeOffset member AddSeconds: seconds: float -> DateTimeOffset member AddTicks: ticks: int64 -> DateTimeOffset ...
<summary>Represents a point in time, typically expressed as a date and time of day, relative to Coordinated Universal Time (UTC).</summary>

--------------------
DateTimeOffset ()
DateTimeOffset(dateTime: DateTime) : DateTimeOffset
DateTimeOffset(dateTime: DateTime, offset: TimeSpan) : DateTimeOffset
DateTimeOffset(ticks: int64, offset: TimeSpan) : DateTimeOffset
DateTimeOffset(date: DateOnly, time: TimeOnly, offset: TimeSpan) : DateTimeOffset
DateTimeOffset(year: int, month: int, day: int, hour: int, minute: int, second: int, offset: TimeSpan) : DateTimeOffset
DateTimeOffset(year: int, month: int, day: int, hour: int, minute: int, second: int, millisecond: int, offset: TimeSpan) : DateTimeOffset
DateTimeOffset(year: int, month: int, day: int, hour: int, minute: int, second: int, millisecond: int, calendar: Globalization.Calendar, offset: TimeSpan) : DateTimeOffset
DateTimeOffset(year: int, month: int, day: int, hour: int, minute: int, second: int, millisecond: int, microsecond: int, offset: TimeSpan) : DateTimeOffset
DateTimeOffset(year: int, month: int, day: int, hour: int, minute: int, second: int, millisecond: int, microsecond: int, calendar: Globalization.Calendar, offset: TimeSpan) : DateTimeOffset
module Seq from Microsoft.FSharp.Collections
val unfold: generator: ('State -> ('T * 'State) option) -> state: 'State -> 'T seq
val dt: DateTimeOffset
val price: float
val exp: value: 'T -> 'T (requires member Exp)
Normal.Sample() : float
union case Option.Some: Value: 'T -> Option<'T>
val take: count: int -> source: 'T seq -> 'T seq
val today: DateTimeOffset
Multiple items
[<Struct>] type DateTime = new: date: DateOnly * time: TimeOnly -> unit + 16 overloads member Add: value: TimeSpan -> DateTime member AddDays: value: float -> DateTime member AddHours: value: float -> DateTime member AddMicroseconds: value: float -> DateTime member AddMilliseconds: value: float -> DateTime member AddMinutes: value: float -> DateTime member AddMonths: months: int -> DateTime member AddSeconds: value: float -> DateTime member AddTicks: value: int64 -> DateTime ...
<summary>Represents an instant in time, typically expressed as a date and time of day.</summary>

--------------------
DateTime ()
   (+0 other overloads)
DateTime(ticks: int64) : DateTime
   (+0 other overloads)
DateTime(date: DateOnly, time: TimeOnly) : DateTime
   (+0 other overloads)
DateTime(ticks: int64, kind: DateTimeKind) : DateTime
   (+0 other overloads)
DateTime(date: DateOnly, time: TimeOnly, kind: DateTimeKind) : DateTime
   (+0 other overloads)
DateTime(year: int, month: int, day: int) : DateTime
   (+0 other overloads)
DateTime(year: int, month: int, day: int, calendar: Globalization.Calendar) : DateTime
   (+0 other overloads)
DateTime(year: int, month: int, day: int, hour: int, minute: int, second: int) : DateTime
   (+0 other overloads)
DateTime(year: int, month: int, day: int, hour: int, minute: int, second: int, kind: DateTimeKind) : DateTime
   (+0 other overloads)
DateTime(year: int, month: int, day: int, hour: int, minute: int, second: int, calendar: Globalization.Calendar) : DateTime
   (+0 other overloads)
property DateTime.Today: DateTime with get
<summary>Gets the current date.</summary>
<returns>An object that is set to today's date, with the time component set to 00:00:00.</returns>
val stock1: (TimeSpan -> int -> (DateTimeOffset * float) seq)
val stock2: (TimeSpan -> int -> (DateTimeOffset * float) seq)
val s1: Series<DateTimeOffset,float>
val series: observations: ('a * 'b) seq -> Series<'a,'b> (requires equality)
<summary> Create a series from a sequence of key-value pairs that represent the observations of the series. This function can be used together with the `=&gt;` operator to create key-value pairs. </summary>
<example> // Creates a series with squares of numbers let sqs = series [ 1 =&gt; 1.0; 2 =&gt; 4.0; 3 =&gt; 9.0 ] </example>
val s2: Series<DateTimeOffset,float>
val s3: Series<DateTimeOffset,float>
member Series.Zip: otherSeries: Series<'K,'V2> -> Series<'K,('V opt * 'V2 opt)>
member Series.Zip: otherSeries: Series<'K,'V2> * kind: JoinKind -> Series<'K,('V opt * 'V2 opt)>
member Series.Zip: otherSeries: Series<'K,'V2> * kind: JoinKind * lookup: Lookup -> Series<'K,('V opt * 'V2 opt)>
[<Struct>] type JoinKind = | Outer = 0 | Inner = 1 | Left = 2 | Right = 3
<summary> This enumeration specifies joining behavior for `Join` method provided by `Series` and `Frame`. Outer join unions the keys (and may introduce missing values), inner join takes the intersection of keys; left and right joins take the keys of the first or the second series/frame. </summary>
<category>Parameters and results of various operations</category>
JoinKind.Left: JoinKind = 2
<summary> Take the keys of the left (first) structure and align values from the right (second) structure with the keys of the first one. Values for keys not available in the second structure will be missing. </summary>
JoinKind.Right: JoinKind = 3
<summary> Take the keys of the right (second) structure and align values from the left (first) structure with the keys of the second one. Values for keys not available in the first structure will be missing. </summary>
[<Struct>] type Lookup = | Exact = 1 | ExactOrGreater = 3 | ExactOrSmaller = 5 | Greater = 2 | Smaller = 4
<summary> Represents different behaviors of key lookup in series. For unordered series, the only available option is `Lookup.Exact` which finds the exact key - methods fail or return missing value if the key is not available in the index. For ordered series `Lookup.Greater` finds the first greater key (e.g. later date) with a value. `Lookup.Smaller` searches for the first smaller key. The options `Lookup.ExactOrGreater` and `Lookup.ExactOrSmaller` finds the exact key (if it is present) and otherwise search for the nearest larger or smaller key, respectively. </summary>
<category>Parameters and results of various operations</category>
Lookup.ExactOrSmaller: Lookup = 5
<summary> Lookup a value associated with the specified key or with the nearest smaller key that has a value available. Fails (or returns missing value) only when the specified key is smaller than all available keys. </summary>
val f1: Frame<DateTimeOffset,string>
Multiple items
module Frame from Deedle
<summary> The `Frame` module provides an F#-friendly API for working with data frames. The module follows the usual desing for collection-processing in F#, so the functions work well with the pipelining operator (`|&gt;`). For example, given a frame with two columns representing prices, we can use `Frame.pctChange` to calculate daily returns like this: let df = frame [ "MSFT" =&gt; prices1; "AAPL" =&gt; prices2 ] let rets = df |&gt; Frame.pctChange 1 rets |&gt; Stats.mean Note that the `Stats.mean` operation is overloaded and works both on series (returning a number) and on frames (returning a series). You can also use `Frame.diff` if you need absolute differences rather than relative changes. The functions in this module are designed to be used from F#. For a C#-friendly API, see the `FrameExtensions` type. For working with individual series, see the `Series` module. The functions in the `Frame` module are grouped in a number of categories and documented below. Accessing frame data and lookup ------------------------------- Functions in this category provide access to the values in the fame. You can also add and remove columns from a frame (which both return a new value). - `addCol`, `replaceCol` and `dropCol` can be used to create a new data frame with a new column, by replacing an existing column with a new one, or by dropping an existing column - `cols` and `rows` return the columns or rows of a frame as a series containing objects; `getCols` and `getRows` return a generic series and cast the values to the type inferred from the context (columns or rows of incompatible types are skipped); `getNumericCols` returns columns of a type convertible to `float` for convenience. - You can get a specific row or column using `get[Col|Row]` or `lookup[Col|Row]` functions. The `lookup` variant lets you specify lookup behavior for key matching (e.g. find the nearest smaller key than the specified value). 
There are also `[try]get` and `[try]Lookup` functions that return optional values and functions returning entire observations (key together with the series). - `sliceCols` and `sliceRows` return a sub-frame containing only the specified columns or rows. Finally, `toArray2D` returns the frame data as a 2D array. Grouping, windowing and chunking -------------------------------- The basic grouping functions in this category can be used to group the rows of a data frame by a specified projection or column to create a frame with hierarchical index such as <c>Frame&lt;'K1 * 'K2, 'C&gt;</c>. The functions always aggregate rows, so if you want to group columns, you need to use `Frame.transpose` first. The function `groupRowsBy` groups rows by the value of a specified column. Use `groupRowsBy[Int|Float|String...]` if you want to specify the type of the column in an easier way than using type inference; `groupRowsUsing` groups rows using the specified _projection function_ and `groupRowsByIndex` projects the grouping key just from the row index. More advanced functions include: `aggregateRowsBy` which groups the rows by a specified sequence of columns and aggregates each group into a single value; `pivotTable` implements the pivoting operation [as documented in the tutorials](../frame.html#pivot). The `melt` and `unmelt` functions turn the data frame into a single data frame containing columns `Row`, `Column` and `Value` containing the data of the original frame; `unmelt` can be used to turn this representation back into an original frame. The `stack` and `unstack` functions implement pandas-style reshape operations. `stack` converts `Frame&lt;'R,'C&gt;` to a long-format `Frame&lt;'R*'C, string&gt;` where each cell becomes a row keyed by `(rowKey, colKey)` with a single `"Value"` column. `unstack` promotes the inner row-key level to column keys, producing `Frame&lt;'R1, 'C*'R2&gt;` from `Frame&lt;'R1*'R2,'C&gt;`. 
A simple windowing functions that are exposed for an entire frame operations are `window` and `windowInto`. For more complex windowing operations, you currently have to use `mapRows` or `mapCols` and apply windowing on individual series. Sorting and index manipulation ------------------------------ A frame is indexed by row keys and column keys. Both of these indices can be sorted (by the keys). A frame that is sorted allows a number of additional operations (such as lookup using the `Lookp.ExactOrSmaller` lookup behavior). The functions in this category provide ways for manipulating the indices. It is expected that most operations are done on rows and so more functions are available in a row-wise way. A frame can alwyas be transposed using `Frame.transpose`. Index operations: The existing row/column keys can be replaced by a sequence of new keys using the `indexColsWith` and `indexRowsWith` functions. Row keys can also be replaced by ordinal numbers using `indexRowsOrdinally`. The function `indexRows` uses the specified column of the original frame as the index. It removes the column from the resulting frame (to avoid this, use overloaded `IndexRows` method). This function infers the type of row keys from the context, so it is usually more convenient to use `indexRows[Date|String|Int|...]` functions. Finally, if you want to calculate the index value based on multiple columns of the row, you can use `indexRowsUsing`. Sorting frame rows: Frame rows can be sorted according to the value of a specified column using the `sortRows` function; `sortRowsBy` takes a projection function which lets you transform the value of a column (e.g. to project a part of the value). The functions `sortRowsByKey` and `sortColsByKey` sort the rows or columns using the default ordering on the key values. The result is a frame with ordered index. 
Expanding columns: When the frame contains a series with complex .NET objects such as F# records or C# classes, it can be useful to "expand" the column. This operation looks at the type of the objects, gets all properties of the objects (recursively) and generates multiple series representing the properties as columns. The function `expandCols` expands the specified columns while `expandAllCols` applies the expansion to all columns of the data frame. Frame transformations --------------------- Functions in this category perform standard transformations on data frames including projections, filtering, taking some sub-frame of the frame, aggregating values using scanning and so on. Projection and filtering functions such as `[map|filter][Cols|Rows]` call the specified function with the column or row key and an <c>ObjectSeries&lt;'K&gt;</c> representing the column or row. You can use functions ending with `Values` (such as `mapRowValues`) when you do not require the row key, but only the row series; `mapRowKeys` and `mapColKeys` can be used to transform the keys. You can use `reduceValues` to apply a custom reduction to values of columns. Other aggregations are available in the `Stats` module. You can also get a row with the greaterst or smallest value of a given column using `[min|max]RowBy`. The functions `take[Last]` and `skip[Last]` can be used to take a sub-frame of the original source frame by skipping a specified number of rows. Note that this does not require an ordered frame and it ignores the index - for index-based lookup use slicing, such as `df.Rows.[lo .. hi]`, instead. Finally the `shift` function can be used to obtain a frame with values shifted by the specified offset. This can be used e.g. to get previous value for each key using `Frame.shift 1 df`. The `diff` function calculates difference from previous value using `df - (Frame.shift offs df)`. 
Processing frames with exceptions --------------------------------- The functions in this group can be used to write computations over frames that may fail. They use the type <c>tryval&lt;'T&gt;</c> which is defined as a discriminated union with two cases: Success containing a value, or Error containing an exception. Using <c>tryval&lt;'T&gt;</c> as a value in a data frame is not generally recommended, because the type of values cannot be tracked in the type. For this reason, it is better to use <c>tryval&lt;'T&gt;</c> with individual series. However, `tryValues` and `fillErrorsWith` functions can be used to get values, or fill failed values inside an entire data frame. The `tryMapRows` function is more useful. It can be used to write a transformation that applies a computation (which may fail) to each row of a data frame. The resulting series is of type <c>Series&lt;'R, tryval&lt;'T&gt;&gt;</c> and can be processed using the <c>Series</c> module functions. Missing values -------------- This group of functions provides a way of working with missing values in a data frame. The category provides the following functions that can be used to fill missing values: * `fillMissingWith` fills missing values with a specified constant * `fillMissingUsing` calls a specified function for every missing value * `fillMissing` and variants propagates values from previous/later keys We use the terms _sparse_ and _dense_ to denote series that contain some missing values or do not contain any missing values, respectively. The functions `denseCols` and `denseRows` return a series that contains only dense columns or rows and all sparse rows or columns are replaced with a missing value. The `dropSparseCols` and `dropSparseRows` functions drop these missing values and return a frame with no missing values. 
Joining, merging and zipping ---------------------------- The simplest way to join two frames is to use the `join` operation which can be used to perform left, right, outer or inner join of two frames. When the row keys of the frames do not match exactly, you can use `joinAlign` which takes an additional parameter that specifies how to find matching key in left/right join (e.g. by taking the nearest smaller available key). Frames that do not contian overlapping values can be combined using `merge` (when combining just two frames) or using `mergeAll` (for larger number of frames). Tha latter is optimized to work well for a large number of data frames. Finally, frames with overlapping values can be combined using `zip`. It takes a function that is used to combine the overlapping values. A `zipAlign` function provides a variant with more flexible row key matching (as in `joinAlign`) Hierarchical index operations ----------------------------- A data frame has a hierarchical row index if the row index is formed by a tuple, such as <c>Frame&lt;'R1 * 'R2, 'C&gt;</c>. Frames of this kind are returned, for example, by the grouping functions such as <c>Frame.groupRowsBy</c>. The functions in this category provide ways for working with data frames that have hierarchical row keys. The functions <c>applyLevel</c> and <c>reduceLevel</c> can be used to reduce values according to one of the levels. The <c>applyLevel</c> function takes a reduction of type <c>Series&lt;'K, 'T&gt; -&gt; 'T</c> while <c>reduceLevel</c> reduces individual values using a function of type <c>'T -&gt; 'T -&gt; 'T</c>. The functions <c>nest</c> and <c>unnest</c> can be used to convert between frames with hierarchical indices (<c>Frame&lt;'K1 * 'K2, 'C&gt;</c>) and series of frames that represent individual groups (<c>Series&lt;'K1, Frame&lt;'K2, 'C&gt;&gt;</c>). The <c>nestBy</c> function can be used to perform group by operation and return the result as a series of frems. </summary>
<category>Frame and series operations</category>
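As a sketch of the `zip` operation described above, the following combines two frames with overlapping values by summing them (the data, column names and argument order are illustrative; assumes Deedle is referenced and opened):

```fsharp
open Deedle

let f1 = Frame.ofColumns [ "A" => series [ 1 => 1.0; 2 => 2.0 ] ]
let f2 = Frame.ofColumns [ "A" => series [ 1 => 10.0; 2 => 20.0 ] ]

// Overlapping values are combined using the provided function
let summed = Frame.zip (fun (a: float) (b: float) -> a + b) f1 f2
```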


--------------------
type Frame = static member ReadCsv: location: string * [<Optional>] hasHeaders: Nullable<bool> * [<Optional>] inferTypes: Nullable<bool> * [<Optional>] inferRows: Nullable<int> * [<Optional>] schema: string * [<Optional>] separators: string * [<Optional>] culture: string * [<Optional>] maxRows: Nullable<int> * [<Optional>] missingValues: string array * [<Optional>] preferOptions: bool * [<Optional>] encoding: Encoding -> Frame<int,string> + 1 overload static member ReadReader: reader: IDataReader -> Frame<int,string> static member CustomExpanders: Dictionary<Type,Func<obj,(string * Type * obj) seq>> with get static member NonExpandableInterfaces: ResizeArray<Type> with get static member NonExpandableTypes: HashSet<Type> with get
<summary> Provides static methods for creating frames, reading frame data from CSV files and database (via IDataReader). The type also provides global configuration for reflection-based expansion. </summary>
<category>Frame and series operations</category>


--------------------
type Frame<'TRowKey,'TColumnKey (requires equality and equality)> = interface IDynamicMetaObjectProvider interface INotifyCollectionChanged interface IFrameFormattable interface IFsiFormattable interface IFrame new: rowIndex: IIndex<'TRowKey> * columnIndex: IIndex<'TColumnKey> * data: IVector<IVector> * indexBuilder: IIndexBuilder * vectorBuilder: IVectorBuilder -> Frame<'TRowKey,'TColumnKey> + 1 overload member AddColumn: column: 'TColumnKey * series: 'V seq -> unit + 3 overloads member AggregateRowsBy: groupBy: 'TColumnKey seq * aggBy: 'TColumnKey seq * aggFunc: Func<Series<'TRowKey,'a>,'b> -> Frame<int,'TColumnKey> member Clone: unit -> Frame<'TRowKey,'TColumnKey> member ColumnApply: f: Func<Series<'TRowKey,'T>,ISeries<'TRowKey>> -> Frame<'TRowKey,'TColumnKey> + 1 overload ...
<summary> A frame is the key Deedle data structure (together with series). It represents a data table (think spreadsheet or CSV file) with multiple rows and columns. The frame consists of row index, column index and data. The indices are used for efficient lookup when accessing data by the row key `'TRowKey` or by the column key `'TColumnKey`. Deedle frames are optimized for the scenario when all values in a given column are of the same type (but types of different columns can differ). </summary>
<remarks><para>Joining, zipping and appending:</para><para> More info </para></remarks>
<category>Core frame and series types</category>


--------------------
new: names: 'TColumnKey seq * columns: ISeries<'TRowKey> seq -> Frame<'TRowKey,'TColumnKey>
new: rowIndex: Indices.IIndex<'TRowKey> * columnIndex: Indices.IIndex<'TColumnKey> * data: IVector<IVector> * indexBuilder: Indices.IIndexBuilder * vectorBuilder: Vectors.IVectorBuilder -> Frame<'TRowKey,'TColumnKey>
static member Frame.ofColumns: cols: Series<'C,#ISeries<'R>> -> Frame<'R,'C> (requires equality and equality)
static member Frame.ofColumns: cols: ('C * #ISeries<'R>) seq -> Frame<'R,'C> (requires equality and equality)
val f2: Frame<DateTimeOffset,string>
val f3: Frame<DateTimeOffset,string>
member Frame.Join: otherFrame: Frame<'TRowKey,'TColumnKey> -> Frame<'TRowKey,'TColumnKey>
member Frame.Join: colKey: 'TColumnKey * series: Series<'TRowKey,'V> -> Frame<'TRowKey,'TColumnKey>
member Frame.Join: otherFrame: Frame<'TRowKey,'TColumnKey> * kind: JoinKind -> Frame<'TRowKey,'TColumnKey>
member Frame.Join: colKey: 'TColumnKey * series: Series<'TRowKey,'V> * kind: JoinKind -> Frame<'TRowKey,'TColumnKey>
member Frame.Join: otherFrame: Frame<'TRowKey,'TColumnKey> * kind: JoinKind * lookup: Lookup -> Frame<'TRowKey,'TColumnKey>
member Frame.Join: colKey: 'TColumnKey * series: Series<'TRowKey,'V> * kind: JoinKind * lookup: Lookup -> Frame<'TRowKey,'TColumnKey>
JoinKind.Outer: JoinKind = 0
<summary> Combine the keys available in both structures, align the values that are available in both of them and mark the remaining values as missing. </summary>
JoinKind.Inner: JoinKind = 1
<summary> Take the intersection of the keys available in both structures and align the values of the two structures. The resulting structure cannot contain missing values. </summary>
val join: kind: JoinKind -> frame1: Frame<'R,'C> -> frame2: Frame<'R,'C> -> Frame<'R,'C> (requires equality and equality)
<summary> Join two data frames. The columns of the joined frames must not overlap; their rows are aligned and transformed according to the specified join kind. For more alignment options on ordered frames, see `joinAlign`. </summary>
<param name="frame1">First data frame (left) to be used in the joining</param>
<param name="frame2">Other frame (right) to be joined with `frame1`</param>
<param name="kind">Specifies the joining behavior on row indices. Use `JoinKind.Outer` and `JoinKind.Inner` to get the union and intersection of the row keys, respectively. Use `JoinKind.Left` and `JoinKind.Right` to use the current key of the left/right data frame.</param>
<category>Joining, merging and zipping</category>
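A minimal sketch of `Frame.join`, using illustrative date keys and column names (assumes Deedle is referenced and opened):

```fsharp
open System
open Deedle

// Two series with partially overlapping date keys (illustrative data)
let d n = DateTimeOffset(DateTime(2024, 1, n))
let s1 = series [ d 1 => 1.0; d 2 => 2.0; d 3 => 3.0 ]
let s2 = series [ d 2 => 20.0; d 3 => 30.0; d 4 => 40.0 ]

let f1 = Frame.ofColumns [ "A" => s1 ]
let f2 = Frame.ofColumns [ "B" => s2 ]

// Outer join keeps the union of row keys (with missing values where one
// frame has no observation); inner join keeps only the intersection
let outer = Frame.join JoinKind.Outer f1 f2   // 4 rows, some values missing
let inner = Frame.join JoinKind.Inner f1 f2   // 2 rows, no missing values
```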
val joinAlign: kind: JoinKind -> lookup: Lookup -> frame1: Frame<'R,'C> -> frame2: Frame<'R,'C> -> Frame<'R,'C> (requires equality and equality)
<summary> Join two data frames. The columns of the joined frames must not overlap and their rows are aligned and transformed according to the specified join kind. When the index of both frames is ordered, it is possible to specify `lookup` in order to align indices from other frame to the indices of the main frame (typically, to find the nearest key with available value for a key). </summary>
<param name="frame1">First data frame (left) to be used in the joining</param>
<param name="frame2">Other frame (right) to be joined with `frame1`</param>
<param name="kind">Specifies the joining behavior on row indices. Use `JoinKind.Outer` and `JoinKind.Inner` to get the union and intersection of the row keys, respectively. Use `JoinKind.Left` and `JoinKind.Right` to use the current key of the left/right data frame.</param>
<param name="lookup">When `kind` is `Left` or `Right` and the two frames have ordered row index, this parameter can be used to specify how to find value for a key when there is no exactly matching key or when there are missing values.</param>
<category>Joining, merging and zipping</category>
val lf: Series<DateTimeOffset,float>
Multiple items
module Series from Deedle
<summary> The `Series` module provides an F#-friendly API for working with data and time series. The API follows the usual design for collection processing in F#, so the functions work well with the pipelining (<c>|&gt;</c>) operator. For example, given a series with ages, we can use `Series.filterValues` to filter outliers and then `Stats.mean` to calculate the mean:

    ages |&gt; Series.filterValues (fun v -&gt; v &gt; 0.0 &amp;&amp; v &lt; 120.0) |&gt; Stats.mean

The module provides a comprehensive set of functions for working with series. The same API is also exposed using C#-friendly extension methods. In C#, the above snippet could be written as:

    [lang=csharp]
    ages
      .Where(kvp =&gt; kvp.Value &gt; 0.0 &amp;&amp; kvp.Value &lt; 120.0)
      .Mean()

For more information about similar frame-manipulation functions, see the `Frame` module. For more information about C#-friendly extensions, see `SeriesExtensions`. The functions in the `Series` module are grouped in a number of categories and documented below.

Accessing series data and lookup
--------------------------------
Functions in this category provide access to the values in the series.

- The term _observation_ is used for a key-value pair in the series.
- When working with a sorted series, it is possible to perform lookup using keys that are not present in the series - you can specify whether to search for the previous or next available value using a _lookup behavior_.
- Functions such as `get` and `getAll` have counterparts `lookup` and `lookupAll` that let you specify the lookup behavior.
- For most of the functions that may fail, there is a `try[Foo]` variant that returns `None` instead of failing.
- Functions with a name ending in `At` perform lookup based on the absolute integer offset (and ignore the keys of the series).

Series transformations
----------------------
Functions in this category perform standard transformations on series, including projections, filtering, taking sub-series, and aggregating values using scanning. Projection and filtering functions generally skip over missing values, but the variants `filterAll` and `mapAll` let you handle missing values explicitly. Keys can be transformed using `mapKeys`. When you do not need to consider the keys and only care about values, use `filterValues` and `mapValues` (the latter is also aliased as the `$` operator).

Series supports a standard set of folding functions, including `reduce` and `fold` (to reduce series values into a single value), as well as the `scan[All]` function, which can be used to fold the values of a series into a series of intermediate folding results.

The functions `take[Last]` and `skip[Last]` can be used to take a sub-series of the original source series by skipping a specified number of elements. Note that this does not require an ordered series and it ignores the index - for index-based lookup use slicing, such as `series.[lo .. hi]`, instead. Finally, the `shift` function can be used to obtain a series with values shifted by the specified offset. This can be used, for example, to get the previous value for each key using `Series.shift 1 ts`. The `diff` function calculates the difference from the previous value using `ts - (Series.shift offs ts)`.

Processing series with exceptions
---------------------------------
The functions in this group can be used to write computations over series that may fail. They use the type <c>tryval&lt;'T&gt;</c>, which is defined as a discriminated union with two cases: Success, containing a value, or Error, containing an exception.

The function `tryMap` lets you create <c>Series&lt;'K, tryval&lt;'T&gt;&gt;</c> by mapping over the values of an original series. You can then extract values using `tryValues`, which throws `AggregateException` if there were any errors. The functions `tryErrors` and `trySuccesses` give series containing only the errors and only the successes, respectively. You can fill failed values with a constant using `fillErrorsWith`.

Hierarchical index operations
-----------------------------
When the key of a series is a tuple, the elements of the tuple can be treated as multiple levels of an index. For example, <c>Series&lt;'K1 * 'K2, 'V&gt;</c> has two levels, with keys of types <c>'K1</c> and <c>'K2</c>, respectively. The functions in this category provide a way of aggregating the values in the series at one of the levels. For example, given a series `input` indexed by a two-element tuple, you can calculate the mean for different first-level values as follows:

    input |&gt; applyLevel fst Stats.mean

Note that the `Stats` module provides helpers for typical statistical operations, so the above could be written just as `input |&gt; Stats.levelMean fst`.

Grouping, windowing and chunking
--------------------------------
This category includes functions that group data from a series in some way. Two key concepts here are _window_ and _chunk_. A window refers to an (overlapping) sliding window over the input series, while a chunk refers to a non-overlapping block of the series.

The boundary behavior can be specified using the `Boundary` flags. The value `Skip` means that boundaries (incomplete windows or chunks) should be skipped. The values `AtBeginning` and `AtEnding` can be used to define on which side the boundary should be returned (or skipped). For chunking, `AtBeginning ||| Skip` makes sense: it means that the incomplete chunk at the beginning should be skipped (aligning the last chunk with the end).

The behavior may be specified in a number of ways (which is reflected in the function name):

- `dist` - using an absolute distance between the keys
- `while` - using a condition on the first and last key
- `size` - by specifying the absolute size of the window/chunk

The functions ending with `Into` take a function to be applied to the window/chunk. The functions `window`, `windowInto` and `chunk`, `chunkInto` are simplified versions that just take a size. There is also a `pairwise` function for a sliding window of size two.

Missing values
--------------
This group of functions provides ways of working with missing values in a series. The `dropMissing` function drops all keys for which there are no values in the series. The `withMissingFrom` function lets you copy missing values from another series. The remaining functions provide different mechanisms for filling the missing values:

* `fillMissingWith` fills missing values with a specified constant
* `fillMissingUsing` calls a specified function for every missing value
* `fillMissing` and its variants propagate values from previous/later keys

Sorting and index manipulation
------------------------------
A series that is sorted by keys allows a number of additional operations (such as lookup using the `Lookup.ExactOrSmaller` lookup behavior). However, it is also possible to sort a series based on the values - although the functions for manipulating series do not guarantee that this order will be preserved.

To sort a series by keys, use `sortByKey`. Other sorting functions let you sort the series using a specified comparer function (`sortWith`), using a projection function (`sortBy`) or using the default comparison (`sort`). In addition, you can replace the keys of a series with other keys using `indexWith` or with integers using `indexOrdinally`. To pick and reorder series values to match a list of keys, use `realign`.

Sampling, resampling and advanced lookup
----------------------------------------
Given a (typically) time series, sampling or resampling makes it possible to get a time series with representative values at a lower or uniform frequency. We use the following terminology:

- The `lookup` and `sample` functions find values at a specified key; if the key is not available, they can look for the value associated with the nearest smaller or the nearest greater key.
- The `resample` function aggregates values into chunks based on a specified collection of keys (e.g. explicitly provided times), or based on some relation between keys (e.g. date-times having the same date).
- The `resampleUniform` function is similar to resampling, but the keys are specified by providing functions that generate a uniform sequence of keys (e.g. days); the operation also fills in values for days that have no corresponding observations in the input sequence.

Joining, merging and zipping
----------------------------
Given two series, there are two ways to combine the values. If the keys in the series are not overlapping (or you want to throw away values from one or the other series), then you can use `merge` or `mergeUsing`. To merge more than two series efficiently, use the `mergeAll` function, which has been optimized for a large number of series.

If you want to align two series, you can use the _zipping_ operation. This aligns two series based on their keys and gives you tuples of values. The default behavior (`zip`) uses an outer join and exact matching. For ordered series, you can specify other forms of key lookup (e.g. find the greatest smaller key) using `zipAlign`. Functions ending with `Into` are generally easier to use, as they call a specified function to turn the tuple (of possibly missing values) into a new value.

For more complicated behaviors, it is often convenient to use joins on frames instead of working with series: create two frames with single columns and then use the join operation. The result will be a frame with two columns (which is easier to use than a series of tuples). </summary>
<category>Frame and series operations</category>
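The series zipping described above can be sketched as follows (illustrative keys and values; assumes Deedle is referenced and opened):

```fsharp
open Deedle

let a = series [ 1 => 1.0; 2 => 2.0; 3 => 3.0 ]
let b = series [ 2 => 20.0; 3 => 30.0; 4 => 40.0 ]

// Inner zip: keep only the keys present in both series,
// pairing the values as tuples (here, keys 2 and 3)
let inner = Series.zipInner a b
```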


--------------------
type Series = static member ofNullables: values: Nullable<'a> seq -> Series<int,'a> (requires default constructor and value type and 'a :> ValueType) static member ofObservations: observations: ('a * 'b) seq -> Series<'a,'b> (requires equality) static member ofOptionalObservations: observations: ('K * 'a option) seq -> Series<'K,'a> (requires equality) static member ofValues: values: 'a seq -> Series<int,'a>

--------------------
type Series<'K,'V (requires equality)> = interface ISeriesFormattable interface IFsiFormattable interface ISeries<'K> new: index: IIndex<'K> * vector: IVector<'V> * vectorBuilder: IVectorBuilder * indexBuilder: IIndexBuilder -> Series<'K,'V> + 3 overloads member After: lowerExclusive: 'K -> Series<'K,'V> member Aggregate: aggregation: Aggregation<'K> * keySelector: Func<DataSegment<Series<'K,'V>>,'TNewKey> * valueSelector: Func<DataSegment<Series<'K,'V>>,OptionalValue<'R>> -> Series<'TNewKey,'R> (requires equality) + 1 overload member AsyncMaterialize: unit -> Async<Series<'K,'V>> member Before: upperExclusive: 'K -> Series<'K,'V> member Between: lowerInclusive: 'K * upperInclusive: 'K -> Series<'K,'V> member Compare: another: Series<'K,'V> -> Series<'K,Diff<'V>> ...
<summary> The type <c>Series&lt;K, V&gt;</c> represents a data series consisting of values `V` indexed by keys `K`. The keys of a series may or may not be ordered. </summary>
<category>Core frame and series types</category>


--------------------
new: pairs: Collections.Generic.KeyValuePair<'K,'V> seq -> Series<'K,'V>
new: keys: 'K seq * values: 'V seq -> Series<'K,'V>
new: keys: 'K array * values: 'V array -> Series<'K,'V>
new: index: Indices.IIndex<'K> * vector: IVector<'V> * vectorBuilder: Vectors.IVectorBuilder * indexBuilder: Indices.IIndexBuilder -> Series<'K,'V>
val window: size: int -> series: Series<'K,'T> -> Series<'K,Series<'K,'T>> (requires equality)
<summary> Creates a sliding window using the specified size and returns the produced windows as a nested series. The key in the new series is the last key of the window. This function skips incomplete chunks - you can use `Series.windowSize` for more options. </summary>
<param name="size">The size of the sliding window.</param>
<param name="series">The input series to be aggregated.</param>
<category>Grouping, windowing and chunking</category>
val windowInto: size: int -> f: (Series<'K,'T> -> 'R) -> series: Series<'K,'T> -> Series<'K,'R> (requires equality)
<summary> Creates a sliding window using the specified size and then applies the provided value selector `f` on each window to produce the result which is returned as a new series. This function skips incomplete chunks - you can use `Series.windowSizeInto` for more options. </summary>
<param name="size">The size of the sliding window.</param>
<param name="series">The input series to be aggregated.</param>
<param name="f">A function that is called on each created window.</param>
<category>Grouping, windowing and chunking</category>
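For instance, a simple moving average can be sketched with `windowInto` (illustrative integer keys; assumes Deedle is referenced and opened):

```fsharp
open Deedle

let ts = series [ for i in 1 .. 6 -> i => float i ]

// 3-element sliding windows aggregated with Stats.mean; the result is
// keyed by the last key of each window, so it has keys 3 to 6
let sma3 = ts |> Series.windowInto 3 Stats.mean
```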
type Stats = static member corr: series1: Series<'K,'V1> -> series2: Series<'K,'V2> -> float (requires equality) static member corrFrame: frame: Frame<'R,'C> -> Frame<'C,'C> (requires equality and equality) static member count: series: Series<'K,'V> -> int (requires equality) + 1 overload static member cov: series1: Series<'K,'V1> -> series2: Series<'K,'V2> -> float (requires equality) static member covFrame: frame: Frame<'R,'C> -> Frame<'C,'C> (requires equality and equality) static member describe: series: Series<'K,'V> -> Series<string,float> (requires equality and equality) + 1 overload static member expandingCount: series: Series<'K,'V> -> Series<'K,float> (requires equality) static member expandingKurt: series: Series<'K,'V> -> Series<'K,float> (requires equality) static member expandingMax: series: Series<'K,'V> -> Series<'K,float> (requires equality) static member expandingMean: series: Series<'K,'V> -> Series<'K,float> (requires equality) ...
static member Stats.mean: frame: Frame<'R,'C> -> Series<'C,float> (requires equality and equality)
static member Stats.mean: series: Series<'K,'V> -> float (requires equality)
val firstValue: series: Series<'K,'V> -> 'V (requires equality)
<summary> Returns the first value of the series. This fails if the first value is missing. </summary>
<category>Accessing series data and lookup</category>
val lfm2: Series<DateTimeOffset,float>
val windowSizeInto: int * Boundary -> f: (DataSegment<Series<'K,'T>> -> 'R) -> series: Series<'K,'T> -> Series<'K,'R> (requires equality)
<summary> Creates a sliding window using the specified size and boundary behavior and then applies the provided value selector `f` on each window to produce the result which is returned as a new series. The key is the last key of the window, unless boundary behavior is `Boundary.AtEnding` (in which case it is the first key). </summary>
<param name="bounds">Specifies the window size and boundary behavior. The boundary behavior can be `Boundary.Skip` (meaning that no incomplete windows are produced), `Boundary.AtBeginning` (meaning that incomplete windows are produced at the beginning) or `Boundary.AtEnding` (to produce incomplete windows at the end of series)</param>
<param name="f">A value selector that is called to aggregate each window.</param>
<param name="series">The input series to be aggregated.</param>
<category>Grouping, windowing and chunking</category>
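A sketch of `windowSizeInto` with explicit boundary behavior (illustrative data; assumes Deedle is referenced and opened). Each window arrives as a `DataSegment`, so the aggregation uses its `Data` property:

```fsharp
open Deedle

let ts = series [ for i in 1 .. 4 -> i => float i ]

// Produce incomplete windows at the beginning; the first windows
// contain fewer than 3 values, the rest are complete
let means =
  ts |> Series.windowSizeInto (3, Boundary.AtBeginning)
          (fun ds -> Stats.mean ds.Data)
```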
[<Struct>] type Boundary = | AtBeginning = 1 | AtEnding = 2 | Skip = 4
<summary> Represents boundary behaviour for operations such as floating window. The type specifies whether incomplete windows (of smaller than required length) should be produced at the beginning (`AtBeginning`) or at the end (`AtEnding`) or skipped (`Skip`). For chunking, combinations are allowed too - to skip incomplete chunk at the beginning, use `Boundary.Skip ||| Boundary.AtBeginning`. </summary>
<category>Parameters and results of various operations</category>
Boundary.AtBeginning: Boundary = 1
val ds: DataSegment<Series<DateTimeOffset,float>>
property DataSegment.Data: Series<DateTimeOffset,float> with get
<summary> Returns the data associated with the segment (for boundary segment, this may be smaller than the required window size) </summary>
val st: Series<int,char>
static member Series.ofValues: values: 'a seq -> Series<int,'a>
Boundary.AtEnding: Boundary = 2
Multiple items
union case DataSegment.DataSegment: DataSegmentKind * 'T -> DataSegment<'T>

--------------------
module DataSegment from Deedle
<summary> Provides helper functions and active patterns for working with `DataSegment` values </summary>
<category>Parameters and results of various operations</category>


--------------------
type DataSegment<'T> = | DataSegment of DataSegmentKind * 'T override ToString: unit -> string member Data: 'T with get member Kind: DataSegmentKind with get
<summary> Represents a segment of a series or sequence. The value is returned from various functions that aggregate data into chunks or floating windows. The `Complete` case represents complete segment (e.g. of the specified size) and `Boundary` represents segment at the boundary (e.g. smaller than the required size). </summary>
<example> For example (using the internal `windowed` function):
<code>
open Deedle.Internal
Seq.windowedWithBounds 3 Boundary.AtBeginning [ 1; 2; 3; 4 ]
// [| DataSegment(Incomplete, [| 1 |]);
//    DataSegment(Incomplete, [| 1; 2 |]);
//    DataSegment(Complete, [| 1; 2; 3 |]);
//    DataSegment(Complete, [| 2; 3; 4 |]) |]
</code>
If you do not need to distinguish the two cases, you can use the `Data` property to get the array representing the segment data. </example>
<category>Parameters and results of various operations</category>
active recognizer Complete: DataSegment<'a> -> Choice<'a,'a>
<summary> Complete active pattern that makes it possible to write functions that behave differently for complete and incomplete segments. For example, the following returns zero for incomplete segments: let sumSegmentOrZero = function | DataSegment.Complete(value) -&gt; Stats.sum value | DataSegment.Incomplete _ -&gt; 0.0 </summary>
val ser: Series<int,char>
Multiple items
type String = interface IEnumerable<char> interface IEnumerable interface ICloneable interface IComparable interface IComparable<string> interface IConvertible interface IEquatable<string> interface IParsable<string> interface ISpanParsable<string> new: value: nativeptr<char> -> unit + 8 overloads ...
<summary>Represents text as a sequence of UTF-16 code units.</summary>

--------------------
String(value: nativeptr<char>) : String
String(value: char array) : String
String(value: ReadOnlySpan<char>) : String
String(value: nativeptr<sbyte>) : String
String(c: char, count: int) : String
String(value: nativeptr<char>, startIndex: int, length: int) : String
String(value: char array, startIndex: int, length: int) : String
String(value: nativeptr<sbyte>, startIndex: int, length: int) : String
String(value: nativeptr<sbyte>, startIndex: int, length: int, enc: Text.Encoding) : String
val values: series: Series<'K,'T> -> 'T seq (requires equality)
<summary> Returns the (non-missing) values of the series as a sequence </summary>
<category>Accessing series data and lookup</category>
type Array = interface ICollection interface IEnumerable interface IList interface IStructuralComparable interface IStructuralEquatable interface ICloneable member Clone: unit -> obj member CopyTo: array: Array * index: int -> unit + 1 overload member GetEnumerator: unit -> IEnumerator member GetLength: dimension: int -> int ...
<summary>Provides methods for creating, manipulating, searching, and sorting arrays, thereby serving as the base class for all arrays in the common language runtime.</summary>
val ofSeq: source: 'T seq -> 'T array
active recognizer Incomplete: DataSegment<'a> -> Choice<'a,'a>
<summary> Incomplete active pattern that makes it possible to write functions that behave differently for complete and incomplete segments. For example, the following returns zero for incomplete segments: let sumSegmentOrZero = function | DataSegment.Complete(value) -&gt; Stats.sum value | DataSegment.Incomplete _ -&gt; 0.0 </summary>
val hourly: Series<DateTimeOffset,float>
val windowDist: distance: 'D -> series: Series<'K,'T> -> Series<'K,Series<'K,'T>> (requires comparison and equality and member (-))
<summary> Creates a sliding window based on distance between keys. A window is started at each input element and ends once the distance between the first and the last key is greater than the specified `distance`. The windows are then returned as a nested series. The key of each window is the key of the first element in the window. </summary>
<param name="distance">The maximal allowed distance between keys of a window. Note that this is an inline function - there must be `-` operator defined between `distance` and the keys of the series.</param>
<param name="series">The input series to be aggregated.</param>
<category>Grouping, windowing and chunking</category>
val windowWhile: cond: ('K -> 'K -> bool) -> series: Series<'K,'T> -> Series<'K,Series<'K,'T>> (requires equality)
<summary> Creates a sliding window based on a condition on keys. A window is started at each input element and ends once the specified `cond` function returns `false` when called on the first and the last key of the window. The windows are then returned as a nested series. The key of each window is the key of the first element in the window. </summary>
<param name="cond">A function that is called on the first and the last key of a window to determine when a window should end.</param>
<param name="series">The input series to be aggregated.</param>
<category>Grouping, windowing and chunking</category>
val d1: DateTimeOffset
val d2: DateTimeOffset
property DateTimeOffset.Date: DateTime with get
<summary>Gets a <see cref="T:System.DateTime" /> value that represents the date component of the current <see cref="T:System.DateTimeOffset" /> object.</summary>
<returns>A <see cref="T:System.DateTime" /> value that represents the date component of the current <see cref="T:System.DateTimeOffset" /> object.</returns>
val hf: Series<DateTimeOffset,float>
val chunkSize: int * Boundary -> series: Series<'K,'T> -> Series<'K,Series<'K,'T>> (requires equality)
<summary> Aggregates the input into a series of adjacent chunks using the specified size and boundary behavior and returns the produced chunks as a nested series. The key is the first key of the chunk, unless the boundary behavior has the `Boundary.AtBeginning` flag (in which case it is the last key). </summary>
<param name="bounds">Specifies the chunk size and boundary behavior. The boundary behavior can be `Boundary.Skip` (meaning that no incomplete chunks are produced), `Boundary.AtBeginning` (meaning that incomplete chunks are produced at the beginning) or `Boundary.AtEnding` (to produce incomplete chunks at the end of series)</param>
<param name="series">The input series to be aggregated.</param>
<category>Grouping, windowing and chunking</category>
val chunkDistInto: distance: 'D -> f: (Series<'K,'T> -> 'R) -> series: Series<'K,'T> -> Series<'K,'R> (requires comparison and equality and member (-))
<summary> Aggregates the input into a series of adjacent chunks. A chunk is started once the distance between the first and the last key of the previous chunk is greater than the specified `distance`. Each chunk is then aggregated into a value using the specified function `f`. The key of each chunk is the key of the first element in the chunk. </summary>
<param name="distance">The maximal allowed distance between keys of a chunk. Note that this is an inline function - there must be `-` operator defined between `distance` and the keys of the series.</param>
<param name="f">A value selector that is called to aggregate each chunk.</param>
<param name="series">The input series to be aggregated.</param>
<category>Grouping, windowing and chunking</category>
val chunkWhile: cond: ('K -> 'K -> bool) -> series: Series<'K,'T> -> Series<'K,Series<'K,'T>> (requires equality)
<summary> Aggregates the input into a series of adjacent chunks based on a condition on keys. A chunk is started once the specified `cond` function returns `false` when called on the first and the last key of the previous chunk. The chunks are then returned as a nested series. The key of each chunk is the key of the first element in the chunk. </summary>
<param name="cond">A function that is called on the first and the last key of a chunk to determine when a chunk should end.</param>
<param name="series">The input series to be aggregated.</param>
<category>Grouping, windowing and chunking</category>
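The key-based chunking above can be sketched by grouping a time series into hourly blocks (illustrative data and timestamps; assumes Deedle is referenced and opened):

```fsharp
open System
open Deedle

let start = DateTimeOffset(DateTime(2024, 1, 1))
// Observations every 20 minutes (illustrative values)
let ts = series [ for i in 0 .. 8 -> start.AddMinutes(float (i * 20)) => float i ]

// Chunk observations that fall within the same hour,
// then average the values in each chunk
let hourlyMean =
  ts
  |> Series.chunkWhile (fun (k1: DateTimeOffset) k2 -> k1.Hour = k2.Hour)
  |> Series.mapValues Stats.mean
```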
val k1: DateTimeOffset
val k2: DateTimeOffset
property DateTimeOffset.Hour: int with get
<summary>Gets the hour component of the time represented by the current <see cref="T:System.DateTimeOffset" /> object.</summary>
<returns>The hour component of the current <see cref="T:System.DateTimeOffset" /> object. This property uses a 24-hour clock; the value ranges from 0 to 23.</returns>
property DateTimeOffset.Minute: int with get
<summary>Gets the minute component of the time represented by the current <see cref="T:System.DateTimeOffset" /> object.</summary>
<returns>The minute component of the current <see cref="T:System.DateTimeOffset" /> object, expressed as an integer between 0 and 59.</returns>
val pairwise: series: Series<'K,'T> -> Series<'K,('T * 'T)> (requires equality)
<summary> Returns a series containing the predecessor and an element for each input, except for the first one. The returned series is one key shorter (it does not contain a value for the first key). </summary>
<param name="series">The input series to be aggregated.</param>
<example><code> let input = series [ 1 =&gt; 'a'; 2 =&gt; 'b'; 3 =&gt; 'c'] let res = input |&gt; Series.pairwise res = series [2 =&gt; ('a', 'b'); 3 =&gt; ('b', 'c') ] </code></example>
<category>Grouping, windowing and chunking</category>
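A short sketch of `pairwise` and `pairwiseWith` for computing step-over-step changes (illustrative price data; assumes Deedle is referenced and opened):

```fsharp
open Deedle

let prices = series [ 1 => 10.0; 2 => 12.0; 3 => 11.0 ]

// Pairs of (previous, current) values for each key except the first
let pairs = prices |> Series.pairwise

// Change from the previous value at each key,
// similar to what Series.diff 1 computes
let changes = prices |> Series.pairwiseWith (fun _ (prev, curr) -> curr - prev)
```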
val pairwiseWith: f: ('K -> 'T * 'T -> 'a) -> series: Series<'K,'T> -> Series<'K,'a> (requires equality)
<summary> Aggregates the input into pairs containing the predecessor and an element for each input, except for the first one. Then calls the specified aggregation function `f` with a tuple and a key. The returned series is one key shorter (it does not contain a value for the first key). </summary>
<param name="f">A function that is called for each pair to produce result in the final series.</param>
<param name="series">The input series to be aggregated.</param>
<category>Grouping, windowing and chunking</category>
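A minimal sketch of how `pairwiseWith` might be used to compute the change between adjacent values (the `input` series is illustrative):

```fsharp
// Illustrative input: values keyed by day number
let input = series [ 1 => 10.0; 2 => 12.0; 3 => 11.0 ]
// For each adjacent pair, subtract the predecessor from the current value
let changes = input |> Series.pairwiseWith (fun k (prev, curr) -> curr - prev)
// changes = series [ 2 => 2.0; 3 => -1.0 ]
```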
val k: DateTimeOffset
val v1: float
val v2: float
val mf: Series<DateTimeOffset,float>
TimeSpan.FromSeconds(value: float) : TimeSpan
TimeSpan.FromSeconds(seconds: int64) : TimeSpan
TimeSpan.FromSeconds(seconds: int64, ?milliseconds: int64, ?microseconds: int64) : TimeSpan
val keys: DateTimeOffset list
val m: float
DateTimeOffset.AddMinutes(minutes: float) : DateTimeOffset
val lookupAll: keys: 'K seq -> lookup: Lookup -> series: Series<'K,'T> -> Series<'K,'T> (requires equality)
<summary> Create a new series that contains values for all provided keys. Use the specified lookup semantics - for exact matching, use `getAll` </summary>
<param name="keys">A sequence of keys that will form the keys of the returned series</param>
<param name="lookup">Lookup behavior to use when the value at the specified key does not exist</param>
<param name="series">The input series</param>
<category>Accessing series data and lookup</category>
Lookup.ExactOrGreater: Lookup = 3
<summary> Lookup a value associated with the specified key or with the nearest greater key that has a value available. Fails (or returns missing value) only when the specified key is greater than all available keys. </summary>
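A brief sketch of `lookupAll` with `Lookup.ExactOrGreater` semantics, using a small illustrative series with integer keys:

```fsharp
let s = series [ 1 => "a"; 3 => "b"; 5 => "c" ]
// Key 2 has no exact match, so the value at the nearest greater key (3) is used
let r = s |> Series.lookupAll [ 1; 2; 4 ] Lookup.ExactOrGreater
// r = series [ 1 => "a"; 2 => "b"; 4 => "c" ]
```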
val resample: keys: 'K seq -> dir: Direction -> series: Series<'K,'V> -> Series<'K,Series<'K,'V>> (requires equality)
<summary> Resample the series based on a provided collection of keys. The values of the series are aggregated into chunks based on the specified keys. Depending on `direction`, the specified key is either used as the smallest or as the greatest key of the chunk (with the exception of boundaries that are added to the first/last chunk). Such chunks are then returned as nested series. </summary>
<param name="series">An input series to be resampled</param>
<param name="keys">A collection of keys to be used for resampling of the series</param>
<param name="dir">If this parameter is `Direction.Forward`, then each key is used as the smallest key in a chunk; for `Direction.Backward`, the keys are used as the greatest keys in a chunk.</param>
<remarks> This operation is only supported on ordered series. The method throws `InvalidOperationException` when the series is not ordered. </remarks>
<category>Sampling, resampling and advanced lookup</category>
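A small sketch of `resample` on an ordered series with integer keys (the keys and direction here are illustrative):

```fsharp
let s = series [ for i in 0 .. 9 -> i => float i ]
// With Direction.Forward, each provided key becomes the smallest key of its chunk,
// so the chunks cover keys 0..4 and 5..9 respectively
let chunks = s |> Series.resample [ 0; 5 ] Direction.Forward
// chunks is a nested series with keys 0 and 5
```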
[<Struct>] type Direction = | Backward = 0 | Forward = 1
<summary> Specifies in which direction should we look when performing operations such as `Series.Pairwise`. </summary>
<example><code> let abc = [ 1 =&gt; "a"; 2 =&gt; "b"; 3 =&gt; "c" ] |&gt; Series.ofObservations
// Using 'Forward', the key of the first element is used
abc.Pairwise(direction=Direction.Forward)
// [ 1 =&gt; ("a", "b"); 2 =&gt; ("b", "c") ]
// Using 'Backward', the key of the second element is used
abc.Pairwise(direction=Direction.Backward)
// [ 2 =&gt; ("a", "b"); 3 =&gt; ("b", "c") ] </code></example>
<category>Parameters and results of various operations</category>
Direction.Forward: Direction = 1
val resampleInto: keys: 'K seq -> dir: Direction -> f: ('K -> Series<'K,'V> -> 'a) -> series: Series<'K,'V> -> Series<'K,'a> (requires equality)
<summary> Resample the series based on a provided collection of keys. The values of the series are aggregated into chunks based on the specified keys. Depending on `direction`, the specified key is either used as the smallest or as the greatest key of the chunk (with the exception of boundaries that are added to the first/last chunk). Such chunks are then aggregated using the provided function `f`. </summary>
<param name="series">An input series to be resampled</param>
<param name="keys">A collection of keys to be used for resampling of the series</param>
<param name="dir">If this parameter is `Direction.Forward`, then each key is used as the smallest key in a chunk; for `Direction.Backward`, the keys are used as the greatest keys in a chunk.</param>
<param name="f">A function that is used to collapse a generated chunk into a single value. Note that this function may be called with empty series.</param>
<remarks> This operation is only supported on ordered series. The method throws `InvalidOperationException` when the series is not ordered. </remarks>
<category>Sampling, resampling and advanced lookup</category>
Direction.Backward: Direction = 0
val s: Series<DateTimeOffset,float>
val ds: Series<DateTimeOffset,float>
TimeSpan.FromHours(value: float) : TimeSpan
TimeSpan.FromHours(hours: int) : TimeSpan
TimeSpan.FromHours(hours: int, ?minutes: int64, ?seconds: int64, ?milliseconds: int64, ?microseconds: int64) : TimeSpan
val resampleEquiv: keyProj: ('K1 -> 'K2) -> series: Series<'K1,'V1> -> Series<'K2,Series<'K1,'V1>> (requires equality and equality)
<summary> Resample the series based on an equivalence class on the keys. A specified function `keyProj` is used to project keys to another space, and the observations whose projected keys are equivalent are grouped into chunks. The chunks are then returned as nested series. </summary>
<param name="series">An input series to be resampled</param>
<param name="keyProj">A function that transforms keys from original space to a new space (which is then used for grouping based on equivalence)</param>
<remarks> This function is similar to `Series.chunkBy`, with the exception that it transforms keys to a new space. This operation is only supported on ordered series. The method throws `InvalidOperationException` when the series is not ordered. For unordered series, similar functionality can be implemented using `Series.groupBy`. </remarks>
<category>Sampling, resampling and advanced lookup</category>
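A sketch of `resampleEquiv` that groups hourly observations by calendar date, assuming the `stock1` generator defined earlier in the page:

```fsharp
// 48 hourly prices from the stock1 generator defined earlier
let hourly = stock1 (TimeSpan.FromHours 1.0) 48 |> series
// Project each DateTimeOffset key to its calendar date; observations with
// the same date form one chunk
let byDay = hourly |> Series.resampleEquiv (fun (dt:DateTimeOffset) -> dt.Date)
// byDay is keyed by DateTime, each value a nested series for that day
```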
val d: DateTimeOffset
static member SeriesExtensions.ResampleEquivalence: series: Series<'K,'V> * keyProj: Func<'K,'a> -> Series<'a,Series<'K,'V>> (requires equality and equality)
static member SeriesExtensions.ResampleEquivalence: series: Series<'K,'V> * keyProj: Func<'K,'a> * aggregate: Func<Series<'K,'V>,'b> -> Series<'a,'b> (requires equality and equality)
val days: string list
val nu: Series<DateTimeOffset,float>
val indexWith: keys: 'K2 seq -> series: Series<'K1,'T> -> Series<'K2,'T> (requires equality and equality)
<summary> Returns a new series containing the specified keys mapped to the original values of the series. When the sequence contains _fewer_ keys, the values from the series are dropped. When it contains _more_ keys, the values for additional keys are missing. </summary>
<category>Sorting and index manipulation</category>
val mapKeys: f: ('K -> 'R) -> series: Series<'K,'T> -> Series<'R,'T> (requires equality and equality)
<summary> Returns a new series whose keys are the results of applying the given function to keys of the original series. </summary>
<category>Series transformations</category>
DateTimeOffset.Parse(input: string) : DateTimeOffset
DateTimeOffset.Parse(input: string, formatProvider: IFormatProvider) : DateTimeOffset
DateTimeOffset.Parse(s: ReadOnlySpan<char>, provider: IFormatProvider) : DateTimeOffset
DateTimeOffset.Parse(input: string, formatProvider: IFormatProvider, styles: Globalization.DateTimeStyles) : DateTimeOffset
DateTimeOffset.Parse(input: ReadOnlySpan<char>, ?formatProvider: IFormatProvider, ?styles: Globalization.DateTimeStyles) : DateTimeOffset
val sampled: Series<DateTime,Series<DateTimeOffset,float>>
val resampleUniform: fillMode: Lookup -> keyProj: ('K1 -> 'K2) -> nextKey: ('K2 -> 'K2) -> series: Series<'K1,'V> -> Series<'K2,Series<'K1,'V>> (requires equality and comparison)
<summary> Resample the series based on an equivalence class on the keys and also generate values for all keys of the target space that are between the minimal and maximal key of the specified series (e.g. generate a value for all days in the range covered by the series). A specified function `keyProj` is used to project keys to another space and `nextKey` is used to generate all keys in the range. The chunks are then returned as a nested series. When there are no values for a (generated) key, the function behaves according to `fillMode`: it can look at the greatest value of the previous chunk or the smallest value of the next chunk, or it produces an empty series. </summary>
<param name="series">An input series to be resampled</param>
<param name="fillMode">When set to `Lookup.NearestSmaller` or `Lookup.NearestGreater`, the function searches for a nearest available observation in a neighboring chunk. Otherwise, the function `f` is called with an empty series as an argument.</param>
<param name="keyProj">A function that transforms keys from original space to a new space (which is then used for grouping based on equivalence)</param>
<param name="nextKey">A function that gets the next key in the transformed space</param>
<remarks> This operation is only supported on ordered series. The method throws `InvalidOperationException` when the series is not ordered. </remarks>
<category>Sampling, resampling and advanced lookup</category>
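A hedged sketch of `resampleUniform`, filling every day in the covered range; it assumes the `stock1` generator from earlier and the `Lookup.ExactOrSmaller` fill mode:

```fsharp
let hourly = stock1 (TimeSpan.FromHours 1.0) 48 |> series
// Project keys to calendar dates, generate consecutive days with AddDays,
// and fill an empty day by looking at the previous chunk
let daily =
  hourly
  |> Series.resampleUniform Lookup.ExactOrSmaller
       (fun (dt:DateTimeOffset) -> dt.Date) (fun d -> d.AddDays 1.0)
```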
val dt: DateTime
DateTime.AddDays(value: float) : DateTime
val mapValues: f: ('T -> 'R) -> series: Series<'K,'T> -> Series<'K,'R> (requires equality)
<summary> Returns a new series whose values are the results of applying the given function to values of the original series. This function skips over missing values and calls the function only on available values. It is also aliased using the `$` operator, so you can write `series $ func` for `series |&gt; Series.mapValues func`. </summary>
<category>Series transformations</category>
val indexOrdinally: series: Series<'K,'T> -> Series<int,'T> (requires equality)
<summary> Return a new series containing the same values as the original series, but with ordinal index formed by `int` values starting from 0. </summary>
<category>Sorting and index manipulation</category>
static member Frame.ofRows: rows: ('R * #ISeries<'C>) seq -> Frame<'R,'C> (requires equality and equality)
static member Frame.ofRows: rows: Series<'R,#ISeries<'C>> -> Frame<'R,'C> (requires equality and equality)
val pr: Series<DateTimeOffset,float>
val sampleTime: interval: TimeSpan -> dir: Direction -> series: Series<'a,'b> -> Series<'a,Series<'a,'b>> (requires equality and member (+))
<summary> Performs sampling by time and returns chunks obtained by time-sampling as a nested series. The operation generates keys starting at the first key in the source series, using the specified `interval` and then obtains chunks based on these keys in a fashion similar to the `Series.resample` function. </summary>
<param name="series">An input series to be resampled</param>
<param name="interval">The interval between the individual samples</param>
<param name="dir">If this parameter is `Direction.Forward`, then each key is used as the smallest key in a chunk; for `Direction.Backward`, the keys are used as the greatest keys in a chunk.</param>
<remarks> This operation is only supported on ordered series. The method throws `InvalidOperationException` when the series is not ordered. </remarks>
<category>Sampling, resampling and advanced lookup</category>
val sampleTimeInto: interval: TimeSpan -> dir: Direction -> f: (Series<'K,'V> -> 'a) -> series: Series<'K,'V> -> Series<'K,'a> (requires equality and member (+))
<summary> Performs sampling by time and aggregates chunks obtained by time-sampling into a single value using a specified function. The operation generates keys starting at the first key in the source series, using the specified `interval` and then obtains chunks based on these keys in a fashion similar to the `Series.resample` function. </summary>
<param name="series">An input series to be resampled</param>
<param name="interval">The interval between the individual samples</param>
<param name="dir">If this parameter is `Direction.Forward`, then each key is used as the smallest key in a chunk; for `Direction.Backward`, the keys are used as the greatest keys in a chunk.</param>
<param name="f">A function that is called to aggregate each chunk into a single value.</param>
<remarks> This operation is only supported on ordered series. The method throws `InvalidOperationException` when the series is not ordered. </remarks>
<category>Sampling, resampling and advanced lookup</category>
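A small sketch of `sampleTimeInto` that aggregates minute-level prices into hourly means, assuming the `stock1` generator from earlier and Deedle's `Stats.mean`:

```fsharp
let prices = stock1 (TimeSpan.FromMinutes 1.0) 180 |> series
// Chunks start at the first key and advance by one hour;
// each chunk is collapsed to its mean value
let hourlyMean =
  prices |> Series.sampleTimeInto (TimeSpan.FromHours 1.0) Direction.Forward Stats.mean
```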
val lastValue: series: Series<'K,'V> -> 'V (requires equality)
<summary> Returns the last value of the series. This fails if the last value is missing. </summary>
<category>Accessing series data and lookup</category>
val sample: Series<DateTimeOffset,float>
val diff1: Series<DateTimeOffset,float>
val diff: offset: int -> series: Series<'K,'T> -> Series<'K,'T> (requires equality and member (-))
<summary> Returns a series containing difference between a value in the original series and a value at the specified offset. For example, calling `Series.diff 1 s` returns a series where previous value is subtracted from the current one. In pseudo-code, the function behaves as follows: result[k] = series[k] - series[k - offset] </summary>
<param name="offset">When positive, subtracts the past values from the current values; when negative, subtracts the future values from the current values.</param>
<param name="series">The input series, containing values that support the `-` operator.</param>
<category>Series transformations</category>
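A minimal illustration of `Series.diff` with offset 1, matching the pseudo-code `result[k] = series[k] - series[k - offset]`:

```fsharp
let s = series [ 1 => 10.0; 2 => 13.0; 3 => 11.0 ]
// Subtract each previous value from the current one
let d = s |> Series.diff 1
// d = series [ 2 => 3.0; 3 => -2.0 ]
```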
val shift1: Series<DateTimeOffset,float>
val shift: offset: int -> series: Series<'K,'T> -> Series<'K,'T> (requires equality)
<summary> Returns a series with values shifted by the specified offset. When the offset is positive, the values are shifted forward and first `offset` keys are dropped. When the offset is negative, the values are shifted backwards and the last `offset` keys are dropped. Expressed in pseudo-code: result[k] = series[k - offset] </summary>
<param name="offset">Can be both positive and negative number.</param>
<param name="series">The input series to be shifted.</param>
<remarks> If you want to calculate the difference, e.g. `s - (Series.shift 1 s)`, you can use `Series.diff` which will be a little bit faster. </remarks>
<category>Series transformations</category>
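A minimal illustration of `Series.shift` with a positive offset, which moves values forward and drops the first `offset` keys:

```fsharp
let s = series [ 1 => "a"; 2 => "b"; 3 => "c" ]
// result[k] = s[k - 1]; the first key is dropped
let sh = s |> Series.shift 1
// sh = series [ 2 => "a"; 3 => "b" ]
```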
val alignedDf: Frame<DateTimeOffset,string>
static member SeriesExtensions.Shift: series: Series<'K,'V> * offset: int -> Series<'K,'V> (requires equality)
val log: value: 'T -> 'T (requires member Log)
static member SeriesExtensions.Diff: series: Series<'K,float> * offset: int -> Series<'K,float> (requires equality)
val abs: value: 'T -> 'T (requires member Abs)
val adjust: v: float -> float
val v: float
val min: e1: 'T -> e2: 'T -> 'T (requires comparison)
val max: e1: 'T -> e2: 'T -> 'T (requires comparison)
static member Stats.sum: frame: Frame<'R,'C> -> Series<'C,float> (requires equality and equality)
static member Stats.sum: series: Series<'K,'V> -> float (requires equality)