
Apache Arrow and Feather Integration

The Deedle.Arrow package adds first-class support for Apache Arrow — the industry-standard columnar in-memory format used by Python (pyarrow/pandas), R (arrow), Spark, DuckDB, and many other data tools.

Install it alongside Deedle:

dotnet add package Deedle.Arrow

Then open both namespaces:

open Deedle
open Deedle.Arrow

After opening Deedle.Arrow you gain access to a Frame module and a Series module that extend the usual Deedle API with Arrow-specific functions (e.g. Frame.readArrow, Frame.toRecordBatch).

Reading and writing Arrow / Feather files

Feather v2 files (extension .feather) are simply the Arrow IPC file format (extension .arrow) under a different name, so both are handled by the same functions.

Writing a frame to disk

// Build a small sample frame
let prices =
    frame [ "Open"  => Series.ofValues [ 100.0; 102.5; 101.0; 103.0 ]
            "Close" => Series.ofValues [ 101.5; 101.0; 103.5; 104.0 ]
            "Vol"   => Series.ofValues [ 12000; 15000; 11000; 14000 ] ]
// Write as Arrow IPC file
Frame.writeArrow "/tmp/prices.arrow" prices

// Write as Feather v2 (identical format, different extension)
Frame.writeFeather "/tmp/prices.feather" prices

Reading a frame from disk

let prices2 = Frame.readArrow "/tmp/prices.arrow"
let prices3 = Frame.readFeather "/tmp/prices.feather"

Row keys after reading are always 0-based integers. See the section on row-key preservation below if you need to round-trip named rows.

Arrow IPC stream format

The Arrow IPC stream format (extension .arrows) is designed for network transport and streaming pipelines. Unlike the file format, it does not require seekable storage.

// Write to an in-memory stream
use ms = new MemoryStream()
Frame.writeArrowStream ms prices

// Rewind and read back
ms.Position <- 0L
let prices4 = Frame.readArrowStream ms

Converting to and from RecordBatch

You can work directly with Apache Arrow RecordBatch objects — useful when integrating with other Arrow-based libraries such as DuckDB or DataFusion.

open Apache.Arrow

// Convert a frame to an Arrow RecordBatch
let batch : RecordBatch = Frame.toRecordBatch prices
printfn "Columns: %d, Rows: %d" batch.ColumnCount batch.Length

// Convert a RecordBatch back to a Deedle frame
let prices5 : Frame<int, string> = Frame.ofRecordBatch batch

Converting Series to Arrow arrays

The Series module provides single-column Arrow conversions, useful for building Arrow arrays from Deedle series without going through a full frame.

let vols = Series.ofValues [ 12000; 15000; 11000; 14000 ]

// Convert to Arrow IArrowArray
let arrowArr : IArrowArray = Series.toArrowArray vols
printfn "Arrow array type: %s, length: %d" (arrowArr.GetType().Name) arrowArr.Length

// Convert back to a Series<int, obj>
let volsBack : Series<int, obj> = Series.ofArrowArray arrowArr

Preserving row keys (round-trip with named rows)

By default, row keys are dropped when writing Arrow files. Use writeArrowWithIndex to serialise row keys into a special __index__ column, then readArrowWithIndex to restore them.

// Frame with string row keys
let monthly =
    let keys = [| "Jan"; "Feb"; "Mar"; "Apr" |]
    Frame.ofColumns [
        "Revenue", Series(keys, [| 1200.0; 1350.0; 1100.0; 1500.0 |])
        "Cost",    Series(keys, [| 800.0;  900.0;  750.0;  1000.0 |])
    ]
Frame.writeArrowWithIndex "/tmp/monthly.arrow" monthly

// Read back, restoring original string row keys
let monthly2 : Frame<string, string> = Frame.readArrowWithIndex "/tmp/monthly.arrow"

The same WithIndex variants exist for Feather files:

Frame.writeFeatherWithIndex "/tmp/monthly.feather" monthly
let monthly3 = Frame.readFeatherWithIndex "/tmp/monthly.feather"

Type mapping

The following .NET/Deedle types are mapped natively to Arrow array types:

.NET type          Arrow type                     Notes
float / double     Float64
float32            Float32
int / int32        Int32
int64              Int64
int16              Int16
uint8 / byte       UInt8
uint16             UInt16
uint32             UInt32
uint64             UInt64
bool               Boolean
string             Utf8
DateTime           Timestamp(Microsecond, UTC)    Stored as UTC
DateTimeOffset     Timestamp(Microsecond, UTC)    Stored as UTC
Other              Utf8                           Via ToString()

Deedle missing values are encoded as Arrow validity-bitmap nulls, so they survive round-trips with zero data loss.
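The validity bitmap itself is pinned down precisely by the Arrow columnar spec: one bit per slot, in LSB (least-significant-bit-first) order within each byte, where a set bit marks a present value and a cleared bit a null. A small stdlib-only Python sketch of decoding one (the function name is ours):

```python
def null_positions(validity: bytes, length: int):
    """Return the slot indices that are null (bit = 0).

    Arrow validity bitmaps use LSB bit order: slot i lives in
    byte i // 8 at bit position i % 8, and a set bit means valid.
    """
    return [i for i in range(length)
            if not (validity[i // 8] >> (i % 8)) & 1]
```

For example, the single bitmap byte 0b00001011 over four slots marks slot 2 as null and slots 0, 1 and 3 as valid.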

When reading Arrow files written by other tools (Python, R, etc.) the following additional Arrow types are supported:

Arrow type              .NET type in Deedle
Date32                  DateTime (date-only, midnight UTC)
Date64                  DateTime (milliseconds since epoch, UTC)
Any unsigned integer    Corresponding .NET unsigned type
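The two date encodings are simple to decode by hand: per the Arrow spec, Date32 stores signed days since the Unix epoch and Date64 signed milliseconds since the epoch. A stdlib-only Python sketch (function names are ours):

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def date32_to_datetime(days: int) -> datetime:
    """Arrow Date32: signed days since 1970-01-01 (date-only)."""
    return EPOCH + timedelta(days=days)

def date64_to_datetime(ms: int) -> datetime:
    """Arrow Date64: signed milliseconds since 1970-01-01, UTC."""
    return EPOCH + timedelta(milliseconds=ms)
```

So a Date32 value of 0 decodes to midnight on 1970-01-01 UTC, matching the "date-only, midnight UTC" mapping in the table above.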

Interoperability with Python / pyarrow

Deedle.Arrow files are fully compatible with Python's pyarrow library.

Write from Deedle, read in Python:

// F# side
let df = Frame.ReadCsv("data.csv")
Frame.writeFeather "/tmp/data.feather" df

# Python side
import pyarrow.feather as feather
df = feather.read_table("/tmp/data.feather").to_pandas()
print(df.head())

Write from Python, read in Deedle:

# Python side
import pandas as pd
import pyarrow.feather as feather

df = pd.DataFrame({"A": [1.0, 2.0, 3.0], "B": ["x", "y", "z"]})
feather.write_feather(df, "/tmp/from_python.feather")

// F# side
let df = Frame.readFeather "/tmp/from_python.feather"
printfn "%A" df

NuGet package information

Package         Description
Deedle          Core library
Deedle.Arrow    Apache Arrow / Feather v2 integration

Deedle.Arrow depends on the official Apache.Arrow NuGet package (Apache Software Foundation, Apache 2.0 licence), which is the reference .NET implementation of the Apache Arrow spec.

// Clean up temp files created by the examples above
open System.IO

[ "/tmp/prices.arrow"; "/tmp/prices.feather"; "/tmp/monthly.arrow"
  "/tmp/monthly.feather"; "/tmp/data.feather"; "/tmp/from_python.feather" ]
|> List.iter (fun p -> if File.Exists(p) then File.Delete(p))
namespace System
namespace System.IO
namespace Deedle
module Arrow from Deedle
<summary> Provides conversions between Deedle Frames/Series and Apache Arrow RecordBatches, and functions for reading and writing the Arrow IPC file and stream formats. </summary>
<category>Arrow integration</category>
val prices: Frame<int,string>
val frame: columns: ('a * #ISeries<'c>) seq -> Frame<'c,'a> (requires equality and equality)
<summary> A function for constructing data frame from a sequence of name - column pairs. This provides a nicer syntactic sugar for `Frame.ofColumns`. </summary>
<example> To create a simple frame with two columns, you can write: <code> frame [ "A" =&gt; series [ 1 =&gt; 30.0; 2 =&gt; 35.0 ] "B" =&gt; series [ 1 =&gt; 30.0; 3 =&gt; 40.0 ] ] </code></example>
<category>Frame construction</category>
Multiple items
module Series from Deedle.Arrow
<summary> Arrow-specific functions on Deedle <c>Series</c> values. Open <c>Deedle.Arrow</c> and then call these as <c>Series.toArrowArray</c>, <c>Series.ofArrowArray</c>, etc. </summary>

--------------------
module Series from Deedle
<summary> The `Series` module provides an F#-friendly API for working with data and time series. The API follows the usual design for collection-processing in F#, so the functions work well with the pipelining (<c>|&gt;</c>) operator. For example, given a series with ages, we can use `Series.filterValues` to filter outliers and then `Stats.mean` to calculate the mean: ages |&gt; Series.filterValues (fun v -&gt; v &gt; 0.0 &amp;&amp; v &lt; 120.0) |&gt; Stats.mean The module provides comprehensive set of functions for working with series. The same API is also exposed using C#-friendly extension methods. In C#, the above snippet could be written as: [lang=csharp] ages .Where(kvp =&gt; kvp.Value &gt; 0.0 &amp;&amp; kvp.Value &lt; 120.0) .Mean() For more information about similar frame-manipulation functions, see the `Frame` module. For more information about C#-friendly extensions, see `SeriesExtensions`. The functions in the `Series` module are grouped in a number of categories and documented below. Accessing series data and lookup -------------------------------- Functions in this category provide access to the values in the series. - The term _observation_ is used for a key value pair in the series. - When working with a sorted series, it is possible to perform lookup using keys that are not present in the series - you can specify to search for the previous or next available value using _lookup behavior_. - Functions such as `get` and `getAll` have their counterparts `lookup` and `lookupAll` that let you specify lookup behavior. - For most of the functions that may fail, there is a `try[Foo]` variant that returns `None` instead of failing. 
- Functions with a name ending with `At` perform lookup based on the absolute integer offset (and ignore the keys of the series) Series transformations ---------------------- Functions in this category perform standard transformations on series including projections, filtering, taking some sub-series of the series, aggregating values using scanning and so on. Projection and filtering functions generally skip over missing values, but there are variants `filterAll` and `mapAll` that let you handle missing values explicitly. Keys can be transformed using `mapKeys`. When you do not need to consider the keys, and only care about values, use `filterValues` and `mapValues` (which is also aliased as the `$` operator). Series supports standard set of folding functions including `reduce` and `fold` (to reduce series values into a single value) as well as the `scan[All]` function, which can be used to fold values of a series into a series of intermeidate folding results. The functions `take[Last]` and `skip[Last]` can be used to take a sub-series of the original source series by skipping a specified number of elements. Note that this does not require an ordered series and it ignores the index - for index-based lookup use slicing, such as `series.[lo .. hi]`, instead. Finally the `shift` function can be used to obtain a series with values shifted by the specified offset. This can be used e.g. to get previous value for each key using `Series.shift 1 ts`. The `diff` function calculates difference from previous value using `ts - (Series.shift offs ts)`. Processing series with exceptions --------------------------------- The functions in this group can be used to write computations over series that may fail. They use the type <c>tryval&lt;'T&gt;</c> which is defined as a discriminated union with two cases: Success containing a value, or Error containing an exception. 
The function `tryMap` lets you create <c>Series&lt;'K, tryval&lt;'T&gt;&gt;</c> by mapping over values of an original series. You can then extract values using `tryValues`, which throws `AggregateException` if there were any errors. Functions `tryErrors` and `trySuccesses` give series containing only errors and successes. You can fill failed values with a constant using `fillErrorsWith`. Hierarchical index operations ----------------------------- When the key of a series is tuple, the elements of the tuple can be treated as multiple levels of a index. For example <c>Series&lt;'K1 * 'K2, 'V&gt;</c> has two levels with keys of types <c>'K1</c> and <c>'K2</c> respectively. The functions in this cateogry provide a way for aggregating values in the series at one of the levels. For example, given a series `input` indexed by two-element tuple, you can calculate mean for different first-level values as follows: input |&gt; applyLevel fst Stats.mean Note that the `Stats` module provides helpers for typical statistical operations, so the above could be written just as `input |&gt; Stats.levelMean fst`. Grouping, windowing and chunking -------------------------------- This category includes functions that group data from a series in some way. Two key concepts here are _window_ and _chunk_. Window refers to (overlapping) sliding windows over the input series while chunk refers to non-overlapping blocks of the series. The boundary behavior can be specified using the `Boundary` flags. The value `Skip` means that boundaries (incomplete windows or chunks) should be skipped. The value `AtBeginning` and `AtEnding` can be used to define at which side should the boundary be returned (or skipped). For chunking, `AtBeginning ||| Skip` makes sense and it means that the incomplete chunk at the beginning should be skipped (aligning the last chunk with the end). 
The behavior may be specified in a number of ways (which is reflected in the name): - `dist` - using an absolute distance between the keys - `while` - using a condition on the first and last key - `size` - by specifying the absolute size of the window/chunk The functions ending with `Into` take a function to be applied to the window/chunk. The functions `window`, `windowInto` and `chunk`, `chunkInto` are simplified versions that take a size. There is also `pairwise` function for sliding window of size two. Missing values -------------- This group of functions provides a way of working with missing values in a series. The `dropMissing` function drops all keys for which there are no values in the series. The `withMissingFrom` function lets you copy missing values from another series. The remaining functions provide different mechanism for filling the missing values. * `fillMissingWith` fills missing values with a specified constant * `fillMissingUsing` calls a specified function for every missing value * `fillMissing` and variants propagates values from previous/later keys Sorting and index manipulation ------------------------------ A series that is sorted by keys allows a number of additional operations (such as lookup using the `Lookp.ExactOrSmaller` lookup behavior). However, it is also possible to sort series based on the values - although the functions for manipulation with series do not guarantee that the order will be preserved. To sort series by keys, use `sortByKey`. Other sorting functions let you sort the series using a specified comparer function (`sortWith`), using a projection function (`sortBy`) and using the default comparison (`sort`). In addition, you can also replace the keys of a series with other keys using `indexWith` or with integers using `indexOrdinally`. To pick and reorder series values using to match a list of keys use `realign`. 
Sampling, resampling and advanced lookup ---------------------------------------- Given a (typically) time series sampling or resampling makes it possible to get time series with representative values at lower or uniform frequency. We use the following terminology: - `lookup` and `sample` functions find values at specified key; if a key is not available, they can look for value associated with the nearest smaller or the nearest greater key. - `resample` function aggregate values values into chunks based on a specified collection of keys (e.g. explicitly provided times), or based on some relation between keys (e.g. date times having the same date). - `resampleUniform` is similar to resampling, but we specify keys by providing functions that generate a uniform sequence of keys (e.g. days), the operation also fills value for days that have no corresponding observations in the input sequence. Joining, merging and zipping ---------------------------- Given two series, there are two ways to combine the values. If the keys in the series are not overlapping (or you want to throw away values from one or the other series), then you can use `merge` or `mergeUsing`. To merge more than 2 series efficiently, use the `mergeAll` function, which has been optimized for large number of series. If you want to align two series, you can use the _zipping_ operation. This aligns two series based on their keys and gives you tuples of values. The default behavior (`zip`) uses outer join and exact matching. For ordered series, you can specify other forms of key lookups (e.g. find the greatest smaller key) using `zipAlign`. functions ending with `Into` are generally easier to use as they call a specified function to turn the tuple (of possibly missing values) into a new value. For more complicated behaviors, it is often convenient to use joins on frames instead of working with series. Create two frames with single columns and then use the join operation. 
The result will be a frame with two columns (which is easier to use than series of tuples). </summary>
<category>Frame and series operations</category>


--------------------
type Series = static member ofNullables: values: Nullable<'a> seq -> Series<int,'a> (requires default constructor and value type and 'a :> ValueType) static member ofObservations: observations: ('a * 'b) seq -> Series<'a,'b> (requires equality) static member ofOptionalObservations: observations: ('K * 'a option) seq -> Series<'K,'a> (requires equality) static member ofValues: values: 'a seq -> Series<int,'a>

--------------------
type Series<'K,'V (requires equality)> = interface ISeriesFormattable interface IFsiFormattable interface ISeries<'K> new: index: IIndex<'K> * vector: IVector<'V> * vectorBuilder: IVectorBuilder * indexBuilder: IIndexBuilder -> Series<'K,'V> + 3 overloads member After: lowerExclusive: 'K -> Series<'K,'V> member Aggregate: aggregation: Aggregation<'K> * keySelector: Func<DataSegment<Series<'K,'V>>,'TNewKey> * valueSelector: Func<DataSegment<Series<'K,'V>>,OptionalValue<'R>> -> Series<'TNewKey,'R> (requires equality) + 1 overload member AsyncMaterialize: unit -> Async<Series<'K,'V>> member Before: upperExclusive: 'K -> Series<'K,'V> member Between: lowerInclusive: 'K * upperInclusive: 'K -> Series<'K,'V> member Compare: another: Series<'K,'V> -> Series<'K,Diff<'V>> ...
<summary> The type <c>Series&lt;K, V&gt;</c> represents a data series consisting of values `V` indexed by keys `K`. The keys of a series may or may not be ordered </summary>
<category>Core frame and series types</category>


--------------------
new: pairs: Collections.Generic.KeyValuePair<'K,'V> seq -> Series<'K,'V>
new: keys: 'K seq * values: 'V seq -> Series<'K,'V>
new: keys: 'K array * values: 'V array -> Series<'K,'V>
new: index: Indices.IIndex<'K> * vector: IVector<'V> * vectorBuilder: Vectors.IVectorBuilder * indexBuilder: Indices.IIndexBuilder -> Series<'K,'V>
static member Series.ofValues: values: 'a seq -> Series<int,'a>
Multiple items
module Frame from Deedle.Arrow
<summary> Arrow-specific functions on Deedle <c>Frame</c> values. Open <c>Deedle.Arrow</c> and then call these as <c>Frame.readArrow</c>, <c>Frame.writeArrow</c>, <c>Frame.toRecordBatch</c>, etc. </summary>

--------------------
module Frame from Deedle
<summary> The `Frame` module provides an F#-friendly API for working with data frames. The module follows the usual desing for collection-processing in F#, so the functions work well with the pipelining operator (`|&gt;`). For example, given a frame with two columns representing prices, we can use `Frame.pctChange` to calculate daily returns like this: let df = frame [ "MSFT" =&gt; prices1; "AAPL" =&gt; prices2 ] let rets = df |&gt; Frame.pctChange 1 rets |&gt; Stats.mean Note that the `Stats.mean` operation is overloaded and works both on series (returning a number) and on frames (returning a series). You can also use `Frame.diff` if you need absolute differences rather than relative changes. The functions in this module are designed to be used from F#. For a C#-friendly API, see the `FrameExtensions` type. For working with individual series, see the `Series` module. The functions in the `Frame` module are grouped in a number of categories and documented below. Accessing frame data and lookup ------------------------------- Functions in this category provide access to the values in the fame. You can also add and remove columns from a frame (which both return a new value). - `addCol`, `replaceCol` and `dropCol` can be used to create a new data frame with a new column, by replacing an existing column with a new one, or by dropping an existing column - `cols` and `rows` return the columns or rows of a frame as a series containing objects; `getCols` and `getRows` return a generic series and cast the values to the type inferred from the context (columns or rows of incompatible types are skipped); `getNumericCols` returns columns of a type convertible to `float` for convenience. - You can get a specific row or column using `get[Col|Row]` or `lookup[Col|Row]` functions. The `lookup` variant lets you specify lookup behavior for key matching (e.g. find the nearest smaller key than the specified value). 
There are also `[try]get` and `[try]Lookup` functions that return optional values and functions returning entire observations (key together with the series). - `sliceCols` and `sliceRows` return a sub-frame containing only the specified columns or rows. Finally, `toArray2D` returns the frame data as a 2D array. Grouping, windowing and chunking -------------------------------- The basic grouping functions in this category can be used to group the rows of a data frame by a specified projection or column to create a frame with hierarchical index such as <c>Frame&lt;'K1 * 'K2, 'C&gt;</c>. The functions always aggregate rows, so if you want to group columns, you need to use `Frame.transpose` first. The function `groupRowsBy` groups rows by the value of a specified column. Use `groupRowsBy[Int|Float|String...]` if you want to specify the type of the column in an easier way than using type inference; `groupRowsUsing` groups rows using the specified _projection function_ and `groupRowsByIndex` projects the grouping key just from the row index. More advanced functions include: `aggregateRowsBy` which groups the rows by a specified sequence of columns and aggregates each group into a single value; `pivotTable` implements the pivoting operation [as documented in the tutorials](../frame.html#pivot). The `melt` and `unmelt` functions turn the data frame into a single data frame containing columns `Row`, `Column` and `Value` containing the data of the original frame; `unmelt` can be used to turn this representation back into an original frame. The `stack` and `unstack` functions implement pandas-style reshape operations. `stack` converts `Frame&lt;'R,'C&gt;` to a long-format `Frame&lt;'R*'C, string&gt;` where each cell becomes a row keyed by `(rowKey, colKey)` with a single `"Value"` column. `unstack` promotes the inner row-key level to column keys, producing `Frame&lt;'R1, 'C*'R2&gt;` from `Frame&lt;'R1*'R2,'C&gt;`. 
A simple windowing functions that are exposed for an entire frame operations are `window` and `windowInto`. For more complex windowing operations, you currently have to use `mapRows` or `mapCols` and apply windowing on individual series. Sorting and index manipulation ------------------------------ A frame is indexed by row keys and column keys. Both of these indices can be sorted (by the keys). A frame that is sorted allows a number of additional operations (such as lookup using the `Lookp.ExactOrSmaller` lookup behavior). The functions in this category provide ways for manipulating the indices. It is expected that most operations are done on rows and so more functions are available in a row-wise way. A frame can alwyas be transposed using `Frame.transpose`. Index operations: The existing row/column keys can be replaced by a sequence of new keys using the `indexColsWith` and `indexRowsWith` functions. Row keys can also be replaced by ordinal numbers using `indexRowsOrdinally`. The function `indexRows` uses the specified column of the original frame as the index. It removes the column from the resulting frame (to avoid this, use overloaded `IndexRows` method). This function infers the type of row keys from the context, so it is usually more convenient to use `indexRows[Date|String|Int|...]` functions. Finally, if you want to calculate the index value based on multiple columns of the row, you can use `indexRowsUsing`. Sorting frame rows: Frame rows can be sorted according to the value of a specified column using the `sortRows` function; `sortRowsBy` takes a projection function which lets you transform the value of a column (e.g. to project a part of the value). The functions `sortRowsByKey` and `sortColsByKey` sort the rows or columns using the default ordering on the key values. The result is a frame with ordered index. 
Expanding columns: When the frame contains a series with complex .NET objects such as F# records or C# classes, it can be useful to "expand" the column. This operation looks at the type of the objects, gets all properties of the objects (recursively) and generates multiple series representing the properties as columns. The function `expandCols` expands the specified columns while `expandAllCols` applies the expansion to all columns of the data frame. Frame transformations --------------------- Functions in this category perform standard transformations on data frames including projections, filtering, taking some sub-frame of the frame, aggregating values using scanning and so on. Projection and filtering functions such as `[map|filter][Cols|Rows]` call the specified function with the column or row key and an <c>ObjectSeries&lt;'K&gt;</c> representing the column or row. You can use functions ending with `Values` (such as `mapRowValues`) when you do not require the row key, but only the row series; `mapRowKeys` and `mapColKeys` can be used to transform the keys. You can use `reduceValues` to apply a custom reduction to values of columns. Other aggregations are available in the `Stats` module. You can also get a row with the greaterst or smallest value of a given column using `[min|max]RowBy`. The functions `take[Last]` and `skip[Last]` can be used to take a sub-frame of the original source frame by skipping a specified number of rows. Note that this does not require an ordered frame and it ignores the index - for index-based lookup use slicing, such as `df.Rows.[lo .. hi]`, instead. Finally the `shift` function can be used to obtain a frame with values shifted by the specified offset. This can be used e.g. to get previous value for each key using `Frame.shift 1 df`. The `diff` function calculates difference from previous value using `df - (Frame.shift offs df)`. 
Processing frames with exceptions --------------------------------- The functions in this group can be used to write computations over frames that may fail. They use the type <c>tryval&lt;'T&gt;</c> which is defined as a discriminated union with two cases: Success containing a value, or Error containing an exception. Using <c>tryval&lt;'T&gt;</c> as a value in a data frame is not generally recommended, because the type of values cannot be tracked in the type. For this reason, it is better to use <c>tryval&lt;'T&gt;</c> with individual series. However, `tryValues` and `fillErrorsWith` functions can be used to get values, or fill failed values inside an entire data frame. The `tryMapRows` function is more useful. It can be used to write a transformation that applies a computation (which may fail) to each row of a data frame. The resulting series is of type <c>Series&lt;'R, tryval&lt;'T&gt;&gt;</c> and can be processed using the <c>Series</c> module functions. Missing values -------------- This group of functions provides a way of working with missing values in a data frame. The category provides the following functions that can be used to fill missing values: * `fillMissingWith` fills missing values with a specified constant * `fillMissingUsing` calls a specified function for every missing value * `fillMissing` and variants propagates values from previous/later keys We use the terms _sparse_ and _dense_ to denote series that contain some missing values or do not contain any missing values, respectively. The functions `denseCols` and `denseRows` return a series that contains only dense columns or rows and all sparse rows or columns are replaced with a missing value. The `dropSparseCols` and `dropSparseRows` functions drop these missing values and return a frame with no missing values. 
Joining, merging and zipping ---------------------------- The simplest way to join two frames is to use the `join` operation which can be used to perform left, right, outer or inner join of two frames. When the row keys of the frames do not match exactly, you can use `joinAlign` which takes an additional parameter that specifies how to find matching key in left/right join (e.g. by taking the nearest smaller available key). Frames that do not contian overlapping values can be combined using `merge` (when combining just two frames) or using `mergeAll` (for larger number of frames). Tha latter is optimized to work well for a large number of data frames. Finally, frames with overlapping values can be combined using `zip`. It takes a function that is used to combine the overlapping values. A `zipAlign` function provides a variant with more flexible row key matching (as in `joinAlign`) Hierarchical index operations ----------------------------- A data frame has a hierarchical row index if the row index is formed by a tuple, such as <c>Frame&lt;'R1 * 'R2, 'C&gt;</c>. Frames of this kind are returned, for example, by the grouping functions such as <c>Frame.groupRowsBy</c>. The functions in this category provide ways for working with data frames that have hierarchical row keys. The functions <c>applyLevel</c> and <c>reduceLevel</c> can be used to reduce values according to one of the levels. The <c>applyLevel</c> function takes a reduction of type <c>Series&lt;'K, 'T&gt; -&gt; 'T</c> while <c>reduceLevel</c> reduces individual values using a function of type <c>'T -&gt; 'T -&gt; 'T</c>. The functions <c>nest</c> and <c>unnest</c> can be used to convert between frames with hierarchical indices (<c>Frame&lt;'K1 * 'K2, 'C&gt;</c>) and series of frames that represent individual groups (<c>Series&lt;'K1, Frame&lt;'K2, 'C&gt;&gt;</c>). The <c>nestBy</c> function can be used to perform group by operation and return the result as a series of frems. </summary>
<category>Frame and series operations</category>


--------------------
val writeArrow: path: string -> frame: Frame<'R,string> -> unit (requires equality)
Writes a Deedle Frame to an Arrow IPC file (the standard .arrow format, also compatible with Feather v2 .feather files).

val writeFeather: path: string -> frame: Frame<'R,string> -> unit (requires equality)
Writes a Deedle Frame to a Feather v2 file (.feather). Feather v2 is the Arrow IPC file format, so this is an alias for Frame.writeArrow.
val readArrow: path: string -> Frame<int,string>
Reads an Arrow IPC file (.arrow, or Feather v2 .feather) into a Deedle Frame<int,string>. Files containing multiple record batches are concatenated into a single frame.
val readFeather: path: string -> Frame<int,string>
Reads a Feather v2 file (.feather) into a Deedle Frame<int,string>. Feather v2 is the Arrow IPC file format, so this is an alias for Frame.readArrow.
val writeArrowStream: stream: Stream -> frame: Frame<'R,string> -> unit (requires equality)
Writes a Deedle Frame to an Arrow IPC stream (suitable for network transport or streaming pipelines). The stream is left open after writing.
val readArrowStream: stream: Stream -> Frame<int,string>
Reads an Arrow IPC stream into a Deedle Frame<int,string>. All record batches in the stream are concatenated into a single frame.
val writeArrowWithIndex: path: string -> frame: Frame<'R,string> -> unit (requires equality)
Writes a Deedle Frame to an Arrow IPC file, storing the row keys in a special __index__ column so they can be restored on read. Row keys are serialised via ToString().
val readArrowWithIndex: path: string -> Frame<string,string>
Reads an Arrow IPC file that was written with Frame.writeArrowWithIndex, restoring the original string row keys. If no __index__ column is present, the returned frame has 0-based integer row keys converted to strings.
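The writeArrowWithIndex / readArrowWithIndex pair round-trips named row keys. A minimal sketch, assuming a writable /tmp directory (the frame contents and the monthly.arrow file name are illustrative):

```fsharp
open Deedle
open Deedle.Arrow

// A frame keyed by month name rather than by integer
let monthly =
    Frame.ofColumns
      [ "Sales" => series [ "Jan" => 100.0; "Feb" => 120.0; "Mar" => 95.0 ] ]

// The row keys are stored in a special "__index__" column on write...
Frame.writeArrowWithIndex "/tmp/monthly.arrow" monthly

// ...and restored as string row keys on read ("Jan", "Feb", "Mar")
let monthly2 : Frame<string, string> =
    Frame.readArrowWithIndex "/tmp/monthly.arrow"
```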