arkouda

Package Contents

Classes

ArrayView

A multi-dimensional view of a pdarray. Arkouda ArrayView behaves similarly to numpy's ndarray.

BitVector

Represent integers as bit vectors, e.g. a set of flags.

CachedAccessor

Custom property-like object.

Categorical

Represents an array of values belonging to named categories.

DataFrame

A DataFrame structure based on arkouda arrays.

Datetime

Represents a date and/or time.

DatetimeAccessor

DiffAggregate

A column in a GroupBy that has been differenced.

ErrorMode

Generic enumeration.

Fields

An integer-backed representation of a set of named binary fields, e.g. flags.

Generator

Generator exposes a number of methods for generating random numbers.

GroupBy

Group an array or list of arrays by value, usually in preparation for aggregating the within-group values of another array.

IPv4

Represent integers as IPv4 addresses.

Index

LogLevel

Generic enumeration.

MultiIndex

Power_divergenceResult

The results of a power divergence statistical test.

Properties

Row

This class is useful for printing and working with individual rows of a DataFrame.

SegArray

Series

One-dimensional arkouda array with axis labels.

StringAccessor

Timedelta

Represents a duration, the difference between two dates or times.

pdarray

The basic arkouda array class. This class contains only the attributes of the array; the data resides on the arkouda server.

Functions

BitVectorizer([width, reverse])

Make a callback (i.e. function) that can be called on an array to create a BitVector.

abs(→ arkouda.pdarrayclass.pdarray)

Return the element-wise absolute value of the array.

akabs(→ arkouda.pdarrayclass.pdarray)

Return the element-wise absolute value of the array.

akcast(→ Union[Union[arkouda.pdarrayclass.pdarray, ...)

Cast an array to another dtype.

align(*args)

Map multiple arrays of sparse identifiers to a common 0-up index.

all(→ numpy.bool_)

Return True iff all elements of the array evaluate to True.

any(→ numpy.bool_)

Return True iff any element of the array evaluates to True.

arange(→ arkouda.pdarrayclass.pdarray)

arange([start,] stop[, stride,] dtype=int64)

arccos(→ arkouda.pdarrayclass.pdarray)

Return the element-wise inverse cosine of the array. The result is between 0 and pi.

arccosh(→ arkouda.pdarrayclass.pdarray)

Return the element-wise inverse hyperbolic cosine of the array.

arcsin(→ arkouda.pdarrayclass.pdarray)

Return the element-wise inverse sine of the array. The result is between -pi/2 and pi/2.

arcsinh(→ arkouda.pdarrayclass.pdarray)

Return the element-wise inverse hyperbolic sine of the array.

arctan(→ arkouda.pdarrayclass.pdarray)

Return the element-wise inverse tangent of the array. The result is between -pi/2 and pi/2.

arctan2(→ arkouda.pdarrayclass.pdarray)

Return the element-wise inverse tangent of the array pair.

arctanh(→ arkouda.pdarrayclass.pdarray)

Return the element-wise inverse hyperbolic tangent of the array.

argmax(→ Union[numpy.int64, numpy.uint64])

Return the index of the first occurrence of the array max value.

argmaxk(→ pdarray)

Find the indices corresponding to the k maximum values of an array.

argmin(→ Union[numpy.int64, numpy.uint64])

Return the index of the first occurrence of the array min value.

argmink(→ pdarray)

Finds the indices corresponding to the k minimum values of an array.

argsort(→ arkouda.pdarrayclass.pdarray)

Return the permutation that sorts the array.

array(→ Union[arkouda.pdarrayclass.pdarray, ...)

Convert a Python or Numpy Iterable to a pdarray or Strings object, sending the corresponding data to the arkouda server.

attach(name)

attach_all(names)

Attach to all objects registered with the names provided.

attach_pdarray(→ pdarray)

Class method to return a pdarray attached to the registered name in the arkouda server.

bigint_from_uint_arrays(arrays[, max_bits])

Create a bigint pdarray from an iterable of uint pdarrays.

broadcast(segments, values[, size, permutation])

Broadcast a dense column vector to the rows of a sparse matrix or grouped array.

broadcast_dims(→ Tuple[int, Ellipsis])

Algorithm to determine the shape of the broadcast pdarray given two array shapes.

broadcast_to_shape(→ pdarray)

Expand an array's rank to the specified shape using broadcasting.

cast(→ Union[Union[arkouda.pdarrayclass.pdarray, ...)

Cast an array to another dtype.

ceil(→ arkouda.pdarrayclass.pdarray)

Return the element-wise ceiling of the array.

check_np_dtype(→ None)

Assert that numpy dtype dt is one of the dtypes supported by arkouda.

chisquare(f_obs[, f_exp, ddof])

Computes the chi square statistic and p-value.

clear(→ None)

Send a clear message to clear all unregistered data from the server symbol table

clip(→ arkouda.pdarrayclass.pdarray)

Clip (limit) the values in an array to a given range [lo,hi]

clz(→ pdarray)

Count leading zeros for each integer in an array.

coargsort(→ arkouda.pdarrayclass.pdarray)

Return the permutation that groups the rows (left-to-right), if the input arrays are treated as columns.

compute_join_size(→ Tuple[int, int])

Compute the internal size of a hypothetical join between a and b.

concatenate(→ Union[arkouda.pdarrayclass.pdarray, ...)

Concatenate a list or tuple of pdarray or Strings objects into one pdarray or Strings object.

convert_if_categorical(values)

Convert a Categorical array to Strings for display

corr(→ numpy.float64)

Return the correlation between x and y

cos(→ arkouda.pdarrayclass.pdarray)

Return the element-wise cosine of the array.

cosh(→ arkouda.pdarrayclass.pdarray)

Return the element-wise hyperbolic cosine of the array.

cov(→ numpy.float64)

Return the covariance of x and y

create_pdarray(→ pdarray)

Return a pdarray instance pointing to an array created by the arkouda server.

ctz(→ pdarray)

Count trailing zeros for each integer in an array.

cumprod(→ arkouda.pdarrayclass.pdarray)

Return the cumulative product over the array.

cumsum(→ arkouda.pdarrayclass.pdarray)

Return the cumulative sum over the array.

date_operators(cls)

date_range([start, end, periods, freq, tz, normalize, ...])

Creates a fixed frequency Datetime range.

deg2rad(→ arkouda.pdarrayclass.pdarray)

Converts angles element-wise from degrees to radians.

disableVerbose(→ None)

Disables verbose logging (DEBUG log level) for all ArkoudaLoggers.

divmod(→ Tuple[pdarray, pdarray])

Return the element-wise quotient and remainder of dividing x by y.

dot(→ Union[numpy.int64, numpy.float64, numpy.uint64, ...)

Returns the sum of the elementwise product of two arrays of the same size (the dot product).

dtype(x)

enableVerbose(→ None)

Enables verbose logging (DEBUG log level) for all ArkoudaLoggers

exp(→ arkouda.pdarrayclass.pdarray)

Return the element-wise exponential of the array.

expm1(→ arkouda.pdarrayclass.pdarray)

Return the element-wise exponential of the array minus one.

export(read_path[, dataset_name, write_file, ...])

Export data from an Arkouda file (Parquet/HDF5) to a Pandas object or a file formatted to be readable by Pandas.

find(query, space)

Return indices of query items in a search list of items (-1 if not found).

floor(→ arkouda.pdarrayclass.pdarray)

Return the element-wise floor of the array.

fmod(→ pdarray)

Returns the element-wise remainder of division.

from_series(→ Union[arkouda.pdarrayclass.pdarray, ...)

Converts a Pandas Series to an Arkouda pdarray or Strings object.

full(→ Union[arkouda.pdarrayclass.pdarray, ...)

Create a pdarray filled with fill_value.

full_like(→ arkouda.pdarrayclass.pdarray)

Create a pdarray filled with fill_value of the same size and dtype as an existing pdarray.

gen_ranges(starts, ends[, stride, return_lengths])

Generate a segmented array of variable-length, contiguous ranges between pairs of start and end points.

generic_concat(items[, ordered])

getArkoudaLogger(→ ArkoudaLogger)

A convenience method for instantiating an ArkoudaLogger.

get_byteorder(→ str)

Get a concrete byteorder (turns '=' into '<' or '>')

get_callback(x)

get_columns(→ List[str])

Get a list of column names from CSV file(s).

get_datasets(→ List[str])

Get the names of the datasets in the provided files.

get_filetype(→ str)

Get the type of a file accessible to the server.

get_null_indices(→ Union[arkouda.pdarrayclass.pdarray, ...)

Get null indices of a string column in a Parquet file.

get_server_byteorder(→ str)

Get the server's byteorder

hash(→ Union[Tuple[arkouda.pdarrayclass.pdarray, ...)

Return an element-wise hash of the array or list of arrays.

hist_all(ak_df[, cols])

Create a grid plot histogramming all numeric columns in an Arkouda DataFrame.

histogram(→ Tuple[arkouda.pdarrayclass.pdarray, ...)

Compute a histogram of evenly spaced bins over the range of an array.

histogram2d(→ Tuple[arkouda.pdarrayclass.pdarray, ...)

Compute the bi-dimensional histogram of two data samples with evenly spaced bins

histogramdd(→ Tuple[arkouda.pdarrayclass.pdarray, ...)

Compute the multidimensional histogram of data in sample with evenly spaced bins.

import_data(read_path[, write_file, return_obj, index])

Import data from a file saved by Pandas (HDF5/Parquet) to an Arkouda object and/or a file formatted to be read by Arkouda.

in1d(→ Union[arkouda.pdarrayclass.pdarray, ...)

Test whether each element of a 1-D array is also present in a second array.

in1d_intervals(vals, intervals[, symmetric])

Test each value for membership in any of a set of half-open (pythonic) intervals.

indexof1d(→ Union[arkouda.pdarrayclass.pdarray, ...)

Returns an integer array of the index values where the values of the first array appear in the second.

information(→ str)

Returns JSON formatted string containing information about the objects in names

intersect(a, b[, positions, unique])

Find the intersection of two arkouda arrays.

intersect1d(→ Union[arkouda.pdarrayclass.pdarray, ...)

Find the intersection of two arrays.

interval_lookup(keys, values, arguments[, fillvalue, ...])

Apply a function defined over intervals to an array of arguments.

intx(a, b)

Find all the rows that are in both dataframes.

invert_permutation(perm)

Find the inverse of a permutation array.

ip_address(values)

Convert values to an Arkouda array of IP addresses.

isSupportedInt(num)

isSupportedNumber(num)

is_cosorted(arrays)

Return True iff the arrays are cosorted, i.e., if the arrays were columns in a table, the rows would be sorted.

is_ipv4(→ arkouda.pdarrayclass.pdarray)

Indicate which values are ipv4 when passed data containing IPv4 and IPv6 values.

is_ipv6(→ arkouda.pdarrayclass.pdarray)

Indicate which values are ipv6 when passed data containing IPv4 and IPv6 values.

is_registered(→ bool)

Determine if the name provided is associated with a registered Object

is_sorted(→ numpy.bool_)

Return True iff the array is monotonically non-decreasing.

isfinite(→ arkouda.pdarrayclass.pdarray)

Return the element-wise isfinite check applied to the array.

isinf(→ arkouda.pdarrayclass.pdarray)

Return the element-wise isinf check applied to the array.

isnan(→ arkouda.pdarrayclass.pdarray)

Return the element-wise isnan check applied to the array.

join_on_eq_with_dt(...)

Performs an inner-join on equality between two integer arrays where

left_align(left, right)

Map two arrays of sparse identifiers to the 0-up index set implied by the left array.

linspace(→ arkouda.pdarrayclass.pdarray)

Create a pdarray of linearly-spaced floats in a closed interval.

list_registry([detailed])

Return a list containing the names of all registered objects

list_symbol_table(→ List[str])

Return a list containing the names of all objects in the symbol table

load(→ Union[arkouda.pdarrayclass.pdarray, ...)

Load a pdarray previously saved with pdarray.save().

load_all(→ Mapping[str, ...)

Load multiple pdarrays, Strings, SegArrays, or Categoricals previously saved with save_all().

log(→ arkouda.pdarrayclass.pdarray)

Return the element-wise natural log of the array.

log10(→ arkouda.pdarrayclass.pdarray)

Return the element-wise base 10 log of the array.

log1p(→ arkouda.pdarrayclass.pdarray)

Return the element-wise natural log of one plus the array.

log2(→ arkouda.pdarrayclass.pdarray)

Return the element-wise base 2 log of the array.

lookup(keys, values, arguments[, fillvalue])

Apply the function defined by the mapping keys --> values to arguments.

ls(→ List[str])

This function calls the h5ls utility on an HDF5 file visible to the arkouda server.

ls_csv(→ List[str])

Used for identifying the datasets within a file when a CSV does not have a header.

max(→ arkouda.dtypes.numpy_scalars)

Return the maximum value of the array.

maxk(→ pdarray)

Find the k maximum values of an array.

mean(→ numpy.float64)

Return the mean of the array.

merge(→ DataFrame)

Merge Arkouda DataFrames with a database-style join.

min(→ arkouda.dtypes.numpy_scalars)

Return the minimum value of the array.

mink(→ pdarray)

Find the k minimum values of an array.

mod(→ pdarray)

Returns the element-wise remainder of division.

ones(→ arkouda.pdarrayclass.pdarray)

Create a pdarray filled with ones.

ones_like(→ arkouda.pdarrayclass.pdarray)

Create a one-filled pdarray of the same size and dtype as an existing pdarray.

parity(→ pdarray)

Find the bit parity (XOR of all bits) for each integer in an array.

plot_dist(b, h[, log, xlabel, newfig])

Plot the distribution and cumulative distribution of histogram data.

popcount(→ pdarray)

Find the population (number of bits set) for each integer in an array.

power(→ pdarray)

Raises an array to a power. If where is given, the operation will only take place in the positions where where is True.

power_divergence(f_obs[, f_exp, ddof, lambda_])

Computes the power divergence statistic and p-value.

pretty_print_information(→ None)

Prints verbose information for each object in names in a human readable format

prod(→ numpy.float64)

Return the product of all elements in the array.

rad2deg(→ arkouda.pdarrayclass.pdarray)

Converts angles element-wise from radians to degrees.

randint(→ arkouda.pdarrayclass.pdarray)

Generate a pdarray of randomized int, float, or bool values in a specified range.

random_strings_lognormal(→ arkouda.strings.Strings)

Generate random strings with log-normally distributed lengths and characters drawn from the specified character set.

random_strings_uniform(→ arkouda.strings.Strings)

Generate random strings with lengths uniformly distributed between minlen and maxlen.

read(→ Union[arkouda.pdarrayclass.pdarray, ...)

Read datasets from files.

read_csv(→ Union[arkouda.pdarrayclass.pdarray, ...)

Read CSV file(s) into Arkouda objects. If more than one dataset is found, the objects

read_hdf(→ Union[arkouda.pdarrayclass.pdarray, ...)

Read Arkouda objects from HDF5 file/s

read_parquet(→ Union[arkouda.pdarrayclass.pdarray, ...)

Read Arkouda objects from Parquet file/s

read_tagged_data(filenames[, datasets, strictTypes, ...])

Read datasets from files and tag each record to the file it was read from.

receive(hostname, port)

Receive a pdarray sent by pdarray.transfer().

receive_dataframe(hostname, port)

Receive a pdarray sent by dataframe.transfer().

register_all(data)

Register all objects in the provided dictionary

resolve_scalar_dtype(→ str)

Try to infer what dtype arkouda_server should treat val as.

restore(filename)

Return data saved using ak.snapshot

right_align(left, right)

Map two arrays of sparse values to the 0-up index set implied by the right array.

rotl(→ pdarray)

Rotate bits of <x> to the left by <rot>.

rotr(→ pdarray)

Rotate bits of <x> to the right by <rot>.

round(→ arkouda.pdarrayclass.pdarray)

Return the element-wise rounding of the array.

save_all(→ None)

DEPRECATED

search_intervals(vals, intervals[, tiebreak, hierarchical])

Given an array of query vals and non-overlapping, closed intervals, return the index of the interval containing each query value.

segarray(segments, values[, lengths, grouping])

Alias for the from_parts function. Prevents the user from needing to call the ak.SegArray constructor directly.

setdiff1d(→ Union[arkouda.pdarrayclass.pdarray, ...)

Find the set difference of two arrays.

setxor1d(→ Union[arkouda.pdarrayclass.pdarray, ...)

Find the set exclusive-or (symmetric difference) of two arrays.

sign(→ arkouda.pdarrayclass.pdarray)

Return the element-wise sign of the array.

sin(→ arkouda.pdarrayclass.pdarray)

Return the element-wise sine of the array.

sinh(→ arkouda.pdarrayclass.pdarray)

Return the element-wise hyperbolic sine of the array.

skew(→ numpy.float64)

Computes the sample skewness of an array.

snapshot(filename)

Create a snapshot of the current Arkouda namespace.

sort(→ arkouda.pdarrayclass.pdarray)

Return a sorted copy of the array. Only sorts numeric arrays.

sqrt(→ pdarray)

Takes the square root of the array. If where is given, the operation will only take place in the positions where where is True.

square(→ arkouda.pdarrayclass.pdarray)

Return the element-wise square of the array.

standard_normal(→ arkouda.pdarrayclass.pdarray)

Draw real numbers from the standard normal distribution.

std(→ numpy.float64)

Return the standard deviation of values in the array.

string_operators(cls)

sum(→ numpy.float64)

Return the sum of all elements in the array.

tan(→ arkouda.pdarrayclass.pdarray)

Return the element-wise tangent of the array.

tanh(→ arkouda.pdarrayclass.pdarray)

Return the element-wise hyperbolic tangent of the array.

timedelta_range([start, end, periods, freq, name, closed])

Return a fixed frequency TimedeltaIndex, with day as the default frequency.

to_csv(columns, prefix_path[, names, col_delim, overwrite])

Write Arkouda object(s) to CSV file(s).

to_hdf(→ None)

Save multiple named pdarrays to HDF5 files.

to_parquet(→ None)

Save multiple named pdarrays to Parquet files.

translate_np_dtype(→ Tuple[str, int])

Split numpy dtype dt into its kind and byte size, raising TypeError if the dtype is not supported.

trunc(→ arkouda.pdarrayclass.pdarray)

Return the element-wise truncation of the array.

uniform(size[, low, high, seed])

Generate a pdarray with uniformly distributed random float values in a specified range.

union1d(→ Union[arkouda.pdarrayclass.pdarray, ...)

Find the union of two arrays/List of Arrays.

unique(→ Union[groupable, Tuple[groupable, ...)

Find the unique elements of an array.

unregister(→ str)

unregister_all(names)

Unregister all names provided

unregister_pdarray_by_name(→ None)

Unregister a named pdarray in the arkouda server which was previously registered.

unsqueeze(p)

update_hdf(columns, prefix_path[, names, repack])

Overwrite the datasets with name appearing in names or keys in columns if columns is a dictionary.

value_counts(→ Union[Categorical, ...)

Count the occurrences of the unique values of an array.

var(→ numpy.float64)

Return the variance of values in the array.

where(→ Union[arkouda.pdarrayclass.pdarray, ...)

Returns an array with elements chosen from A and B based upon a conditioning array.

write_log(log_msg[, tag, log_lvl])

Allows the user to write custom logs.

xlogy(x, y)

Computes x * log(y).

zero_up(vals)

Map an array of sparse values to 0-up indices.

zeros(→ arkouda.pdarrayclass.pdarray)

Create a pdarray filled with zeros.

zeros_like(→ arkouda.pdarrayclass.pdarray)

Create a zero-filled pdarray of the same size and dtype as an existing pdarray.

Attributes

arkouda.ARKOUDA_SUPPORTED_DTYPES
arkouda.AllSymbols = '__AllSymbols__'
class arkouda.ArrayView(base: arkouda.pdarrayclass.pdarray, shape, order='row_major')[source]

A multi-dimensional view of a pdarray. Arkouda ArrayView behaves similarly to numpy's ndarray. The base pdarray is stored in one dimension but can be indexed and treated logically as if it were multi-dimensional.

base

The base pdarray that is being viewed as a multi-dimensional object

Type:

pdarray

dtype

The element type of the base pdarray (equivalent to base.dtype)

Type:

dtype

size

The number of elements in the base pdarray (equivalent to base.size)

Type:

int_scalars

shape

A pdarray specifying the sizes of each dimension of the array

Type:

pdarray[int]

ndim

Number of dimensions (equivalent to shape.size)

Type:

int_scalars

itemsize

The size in bytes of each element (equivalent to base.itemsize)

Type:

int_scalars

order

Index order to read and write the elements. By default, or if 'C'/'row_major', read and write data in row-major order. If 'F'/'column_major', read and write data in column-major order.

Type:

str {‘C’/’row_major’ | ‘F’/’column_major’}
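
Examples

A minimal usage sketch: the usual way to obtain an ArrayView is by reshaping a 1-D pdarray (this assumes a running Arkouda server and an active ak.connect() session):

>>> import arkouda as ak
>>> a = ak.arange(6).reshape(2, 3)  # a 2x3 ArrayView over a 1-D base pdarray
>>> a.to_list()
[[0, 1, 2], [3, 4, 5]]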

objType = 'ArrayView'
to_hdf(prefix_path: str, dataset: str = 'ArrayView', mode: str = 'truncate', file_type: str = 'distribute')[source]

Save the current ArrayView object to hdf5 file

Parameters:
  • prefix_path (str) – Path to the file to write the dataset to

  • dataset (str) – Name of the dataset to write

  • mode (str (truncate | append)) – Default: truncate Mode to write the dataset in. Truncate will overwrite any existing files. Append will add the dataset to an existing file.

  • file_type (str (single|distribute)) – Default: distribute Indicates the format to save the file. Single will store in a single file. Distribute will store the data in a file per locale.

to_list() list[source]

Convert the ArrayView to a list, transferring array data from the Arkouda server to client-side Python. Note: if the ArrayView size exceeds client.maxTransferBytes, a RuntimeError is raised.

Returns:

A list with the same data as the ArrayView

Return type:

list

Raises:

RuntimeError – Raised if there is a server-side error thrown, if the ArrayView size exceeds the built-in client.maxTransferBytes size limit, or if the bytes received do not match the expected number of bytes

Notes

The number of bytes in the array cannot exceed client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting client.maxTransferBytes to a larger value, but proceed with caution.

See also

to_ndarray

Examples

>>> a = ak.arange(6).reshape(2,3)
>>> a.to_list()
[[0, 1, 2], [3, 4, 5]]
>>> type(a.to_list())
<class 'list'>
to_ndarray() numpy.ndarray[source]

Convert the ArrayView to a np.ndarray, transferring array data from the Arkouda server to client-side Python. Note: if the ArrayView size exceeds client.maxTransferBytes, a RuntimeError is raised.

Returns:

A numpy ndarray with the same attributes and data as the ArrayView

Return type:

np.ndarray

Raises:

RuntimeError – Raised if there is a server-side error thrown, if the ArrayView size exceeds the built-in client.maxTransferBytes size limit, or if the bytes received do not match the expected number of bytes

Notes

The number of bytes in the array cannot exceed client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting client.maxTransferBytes to a larger value, but proceed with caution.

See also

array, to_list

Examples

>>> a = ak.arange(6).reshape(2,3)
>>> a.to_ndarray()
array([[0, 1, 2],
       [3, 4, 5]])
>>> type(a.to_ndarray())
<class 'numpy.ndarray'>
update_hdf(prefix_path: str, dataset: str = 'ArrayView', repack: bool = True)[source]

Overwrite the dataset with the name provided with this array view object. If the dataset does not exist it is added.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files

  • repack (bool) – Default: True HDF5 does not release memory on delete. When True, the inaccessible data (that was overwritten) is removed. When False, the data remains, but is inaccessible. Setting to false will yield better performance, but will cause file sizes to expand.

Return type:

str - success message if successful

Raises:

RuntimeError – Raised if a server-side error is thrown saving the array view

Notes

  • If the file does not contain a File_Format attribute to indicate how it was saved, the file name is checked for _LOCALE#### to determine if it is distributed.

  • If the dataset provided does not exist, it will be added

  • Because HDF5 deletes do not release memory, this will create a copy of the file with the new data

class arkouda.BitVector(values, width=64, reverse=False)[source]

Bases: arkouda.pdarrayclass.pdarray

Represent integers as bit vectors, e.g. a set of flags.

Parameters:
  • values (pdarray, int64) – The integers to represent as bit vectors

  • width (int) – The number of bit fields in the vector

  • reverse (bool) – If True, display bits from least significant (left) to most significant (right). By default, the most significant bit is the left-most bit.

Returns:

bitvectors – The array of binary vectors

Return type:

BitVector

Notes

This class is a thin wrapper around pdarray that mostly affects how values are displayed to the user. Operators and methods will typically treat this class like a uint64 pdarray.

conserves
special_objType = 'BitVector'
format(x)[source]

Format a single binary vector as a string.

classmethod from_return_msg(rep_msg)[source]
opeq(other, op)[source]
register(user_defined_name)[source]

Register this BitVector object and underlying components with the Arkouda server

Parameters:

user_defined_name (str) – user defined name the BitVector is to be registered under, this will be the root name for underlying components

Returns:

The same BitVector which is now registered with the arkouda server and has an updated name. This is an in-place modification, the original is returned to support a fluid programming style. Please note you cannot register two different BitVectors with the same name.

Return type:

BitVector

Raises:
  • TypeError – Raised if user_defined_name is not a str

  • RegistrationError – If the server was unable to register the BitVector with the user_defined_name

Notes

Objects registered with the server are immune to deletion until they are unregistered.

to_list()[source]

Export data to a list of string-formatted bit vectors.

to_ndarray()[source]

Export data to a numpy array of string-formatted bit vectors.
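
Examples

A minimal construction sketch (this assumes a connected Arkouda client; the exact bit-string formatting of the display may vary by version):

>>> import arkouda as ak
>>> flags = ak.array([1, 2, 5])
>>> bv = ak.BitVector(flags, width=4)  # treat each integer as 4 binary flag fields
>>> bits = bv.to_list()  # list of string-formatted 4-bit vectors, one per element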

arkouda.BitVectorizer(width=64, reverse=False)[source]

Make a callback (i.e. function) that can be called on an array to create a BitVector.

Parameters:
  • width (int) – The number of bit fields in the vector

  • reverse (bool) – If True, display bits from least significant (left) to most significant (right). By default, the most significant bit is the left-most bit.

Returns:

bitvectorizer – A function that takes an array and returns a BitVector instance

Return type:

callable
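
Examples

A short sketch of the callback in use (assumes a connected Arkouda client):

>>> import arkouda as ak
>>> make_bv = ak.BitVectorizer(width=4)  # callback that builds BitVectors with 4 bit fields
>>> bv = make_bv(ak.array([1, 2, 3]))    # returns a BitVector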

class arkouda.CachedAccessor(name: str, accessor)[source]

Custom property-like object. A descriptor for caching accessors.

Parameters:
  • name (str) – Namespace that will be accessed under, e.g. df.foo.

  • accessor (cls) – Class with the extension methods.

Notes

For accessor, the class's __init__ method assumes that one of Series, DataFrame, or Index is passed as the single argument data.

class arkouda.Categorical(values, **kwargs)[source]

Represents an array of values belonging to named categories. Converting a Strings object to Categorical often saves memory and speeds up operations, especially if there are many repeated values, at the cost of some one-time work in initialization.

Parameters:
  • values (Strings) – String values to convert to categories

  • NAvalue (str scalar) – The value to use to represent missing/null data

categories

The set of category labels (determined automatically)

Type:

Strings

codes

The category indices of the values or -1 for N/A

Type:

pdarray, int64

permutation

The permutation that groups the values in the same order as categories

Type:

pdarray, int64

segments

When values are grouped, the starting offset of each group

Type:

pdarray, int64

size

The number of items in the array

Type:

Union[int,np.int64]

nlevels

The number of distinct categories

Type:

Union[int,np.int64]

ndim

The rank of the array (currently only rank 1 arrays supported)

Type:

Union[int,np.int64]

shape

The sizes of each dimension of the array

Type:

tuple

property nbytes

The size of the Categorical in bytes.

Returns:

The size of the Categorical in bytes.

Return type:

int
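
Examples

A minimal construction sketch (assumes a connected Arkouda client):

>>> import arkouda as ak
>>> strings = ak.array(['a', 'b', 'a', 'c'])
>>> cat = ak.Categorical(strings)  # categories are determined automatically
>>> cat.size
4
>>> cat.to_list()
['a', 'b', 'a', 'c']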

BinOps
RegisterablePieces
RequiredPieces
dtype
objType = 'Categorical'
permutation
segments
argsort()[source]
static attach(user_defined_name: str) Categorical[source]

DEPRECATED Function to return a Categorical object attached to the registered name in the arkouda server which was registered using register()

Parameters:

user_defined_name (str) – user defined name which Categorical object was registered under

Returns:

The Categorical object created by re-attaching to the corresponding server components

Return type:

Categorical

Raises:

TypeError – if user_defined_name is not a string

concatenate(others: Sequence[Categorical], ordered: bool = True) Categorical[source]

Merge this Categorical with other Categorical objects in the array, concatenating the arrays and synchronizing the categories.

Parameters:
  • others (Sequence[Categorical]) – The Categorical arrays to concatenate and merge with this one

  • ordered (bool) – If True (default), the arrays will be appended in the order given. If False, array data may be interleaved in blocks, which can greatly improve performance but results in non-deterministic ordering of elements.

Returns:

The merged Categorical object

Return type:

Categorical

Raises:

TypeError – Raised if any others array objects are not Categorical objects

Notes

This operation can be expensive – slower than concatenating Strings.

contains(substr: bytes | arkouda.dtypes.str_scalars, regex: bool = False) arkouda.pdarrayclass.pdarray[source]

Check whether each element contains the given substring.

Parameters:
  • substr (Union[bytes, str_scalars]) – The substring to search for

  • regex (bool) – Indicates whether substr is a regular expression Note: only handles regular expressions supported by re2 (does not support lookaheads/lookbehinds)

Returns:

True for elements that contain substr, False otherwise

Return type:

pdarray, bool

Raises:
  • TypeError – Raised if the substr parameter is not bytes or str_scalars

  • ValueError – Raised if substr is not a valid regex

  • RuntimeError – Raised if there is a server-side error thrown

Notes

This method can be significantly faster than the corresponding method on Strings objects, because it searches the unique category labels instead of the full array.

endswith(substr: bytes | arkouda.dtypes.str_scalars, regex: bool = False) arkouda.pdarrayclass.pdarray[source]

Check whether each element ends with the given substring.

Parameters:
  • substr (Union[bytes, str_scalars]) – The substring to search for

  • regex (bool) – Indicates whether substr is a regular expression Note: only handles regular expressions supported by re2 (does not support lookaheads/lookbehinds)

Returns:

True for elements that end with substr, False otherwise

Return type:

pdarray, bool

Raises:
  • TypeError – Raised if the substr parameter is not bytes or str_scalars

  • ValueError – Raised if substr is not a valid regex

  • RuntimeError – Raised if there is a server-side error thrown

Notes

This method can be significantly faster than the corresponding method on Strings objects, because it searches the unique category labels instead of the full array.

classmethod from_codes(codes: arkouda.pdarrayclass.pdarray, categories: arkouda.strings.Strings, permutation=None, segments=None, **kwargs) Categorical[source]

Make a Categorical from codes and categories arrays. If codes and categories have already been pre-computed, this constructor saves time. If not, please use the normal constructor.

Parameters:
  • codes (pdarray, int64) – Category indices of each value

  • categories (Strings) – Unique category labels

  • permutation (pdarray, int64) – The permutation that groups the values in the same order as categories

  • segments (pdarray, int64) – When values are grouped, the starting offset of each group

Returns:

The Categorical object created from the input parameters

Return type:

Categorical

Raises:

TypeError – Raised if codes is not a pdarray of int64 objects or if categories is not a Strings object
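
Examples

A minimal sketch (assumes a connected Arkouda client; each code is an index into the categories array):

>>> import arkouda as ak
>>> codes = ak.array([0, 1, 1, 0])
>>> categories = ak.array(['apple', 'banana'])  # unique labels as a Strings object
>>> cat = ak.Categorical.from_codes(codes, categories)
>>> cat.to_list()
['apple', 'banana', 'banana', 'apple']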

classmethod from_return_msg(rep_msg) Categorical[source]

Create categorical from return message from server

Notes

This is currently only used when reading a Categorical from HDF5 files.

group() arkouda.pdarrayclass.pdarray[source]

Return the permutation that groups the array, placing equivalent categories together. All instances of the same category are guaranteed to lie in one contiguous block of the permuted array, but the blocks are not necessarily ordered.

Returns:

The permutation that groups the array by value

Return type:

pdarray

See also

GroupBy, unique

Notes

This method is faster than the corresponding Strings method. If the Categorical was created from a Strings object, then this function simply returns the cached permutation. Even if the Categorical was created using from_codes(), this function will be faster than Strings.group() because it sorts dense integer values, rather than 128-bit hash values.

hash() Tuple[arkouda.pdarrayclass.pdarray, arkouda.pdarrayclass.pdarray][source]

Compute a 128-bit hash of each element of the Categorical.

Returns:

A tuple of two int64 pdarrays. The ith hash value is the concatenation of the ith values from each array.

Return type:

Tuple[pdarray,pdarray]

Notes

The implementation uses SipHash128, a fast and balanced hash function (used by Python for dictionaries and sets). For realistic numbers of strings (up to about 10**15), the probability of a collision between two 128-bit hash values is negligible.

in1d(test: arkouda.strings.Strings | Categorical) arkouda.pdarrayclass.pdarray[source]

Test whether each element of the Categorical object is also present in the test Strings or Categorical object.

Returns a boolean array the same length as self that is True where an element of self is in test and False otherwise.

Parameters:

test (Union[Strings,Categorical]) – The values against which to test each value of self.

Returns:

The values self[in1d] are in the test Strings or Categorical object.

Return type:

pdarray, bool

Raises:

TypeError – Raised if test is not a Strings or Categorical object

Notes

in1d can be considered as an element-wise function version of the python keyword in, for 1-D sequences. in1d(a, b) is logically equivalent to ak.array([item in b for item in a]), but is much faster and scales to arbitrarily large a.

Examples

>>> strings = ak.array([f'String {i}' for i in range(0,5)])
>>> cat = ak.Categorical(strings)
>>> ak.in1d(cat,strings)
array([True, True, True, True, True])
>>> strings = ak.array([f'String {i}' for i in range(5,9)])
>>> catTwo = ak.Categorical(strings)
>>> ak.in1d(cat,catTwo)
array([False, False, False, False, False])
info() str[source]

Returns a JSON formatted string containing information about all components of self

Parameters:

None

Returns:

JSON string containing information about all components of self

Return type:

str

is_registered() numpy.bool_[source]

Return True iff the object is contained in the registry or is a component of a registered object.

Returns:

Indicates if the object is contained in the registry

Return type:

numpy.bool

Raises:

RegistrationError – Raised if there’s a server-side error or a mis-match of registered components

Notes

Objects registered with the server are immune to deletion until they are unregistered.

isna()[source]

Find where values are missing or null (as defined by self.NAvalue)
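
Examples

A minimal sketch (assumes a connected Arkouda client; by default NAvalue is the string 'N/A'):

>>> import arkouda as ak
>>> cat = ak.Categorical(ak.array(['a', 'N/A', 'b']))
>>> mask = cat.isna()  # boolean pdarray, True where the value equals cat.NAvalue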

static parse_hdf_categoricals(d: Mapping[str, arkouda.pdarrayclass.pdarray | arkouda.strings.Strings]) Tuple[List[str], Dict[str, Categorical]][source]

This function should be used in conjunction with the load_all function which reads hdf5 files and reconstitutes Categorical objects. Categorical objects use a naming convention and HDF5 structure so they can be identified and constructed for the user.

In general you should not call this method directly

Parameters:

d (Dictionary of String to either Pdarray or Strings object)

Returns:

A 2-tuple containing a list of strings with the key names which should be removed, and a dictionary of base name to Categorical object

pretty_print_info() None[source]

Prints information about all components of self in a human readable format

Parameters:

None

Return type:

None

register(user_defined_name: str) Categorical[source]

Register this Categorical object and underlying components with the Arkouda server

Parameters:

user_defined_name (str) – user defined name the Categorical is to be registered under, this will be the root name for underlying components

Returns:

The same Categorical which is now registered with the arkouda server and has an updated name. This is an in-place modification, the original is returned to support a fluid programming style. Please note you cannot register two different Categoricals with the same name.

Return type:

Categorical

Raises:
  • TypeError – Raised if user_defined_name is not a str

  • RegistrationError – If the server was unable to register the Categorical with the user_defined_name

Notes

Objects registered with the server are immune to deletion until they are unregistered.

reset_categories() Categorical[source]

Recompute the category labels, discarding any unused labels. This method is often useful after slicing or indexing a Categorical array, when the resulting array only contains a subset of the original categories. In this case, eliminating unused categories can speed up other operations.

Returns:

A Categorical object generated from the current instance

Return type:

Categorical
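
Examples

A minimal sketch (assumes a connected Arkouda client): after indexing, unused labels can be discarded.

>>> import arkouda as ak
>>> cat = ak.Categorical(ak.array(['a', 'b', 'c', 'a']))
>>> sub = cat[0:2]                    # values 'a' and 'b'; categories may still include 'c'
>>> trimmed = sub.reset_categories()  # recompute labels, dropping unused categories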

save(prefix_path: str, dataset: str = 'categorical_array', file_format: str = 'HDF5', mode: str = 'truncate', file_type: str = 'distribute', compression: str | None = None) str[source]

DEPRECATED Save the Categorical object to HDF5 or Parquet. The result is a collection of HDF5/Parquet files, one file per locale of the arkouda server, where each filename starts with prefix_path and dataset. Each locale saves its chunk of the array to its corresponding file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in HDF5 files (must not already exist)

  • file_format (str {'HDF5' | 'Parquet'}) – The format to save the file to.

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If 'append', create a new Categorical dataset within existing files.

  • file_type (str ("single" | "distribute")) – Default: "distribute" When set to single, dataset is written to a single file. When distribute, dataset is written on a file per locale. This is only supported by HDF5 files and will have no impact on Parquet files.

  • compression (str (Optional)) – {None | 'snappy' | 'gzip' | 'brotli' | 'zstd' | 'lz4'} The compression type to use when writing. This is only supported for Parquet files and will not be used with HDF5.

Return type:

String message indicating result of save operation

Raises:
  • ValueError – Raised if the lengths of columns and values differ, or the mode is neither ‘truncate’ nor ‘append’

  • TypeError – Raised if prefix_path, dataset, or mode is not a str

Notes

Important implementation notes: (1) Strings state is saved as two datasets within an hdf5 group: one for the string characters and one for the segments corresponding to the start of each string, (2) the hdf5 group is named via the dataset parameter.

set_categories(new_categories, NAvalue=None)[source]

Set categories to user-defined values.

Parameters:
  • new_categories (Strings) – The array of new categories to use. Must be unique.

  • NAvalue (str scalar) – The value to use to represent missing/null data

Returns:

A new Categorical with the user-defined categories. Old values present in new categories will appear unchanged. Old values not present will be assigned the NA value.

Return type:

Categorical
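
Examples

A minimal sketch (assumes a connected Arkouda client); values absent from the new categories map to the NA value:

>>> import arkouda as ak
>>> cat = ak.Categorical(ak.array(['a', 'b', 'a']))
>>> remapped = cat.set_categories(ak.array(['a', 'c']), NAvalue='N/A')  # 'b' maps to 'N/A'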

sort()[source]
classmethod standardize_categories(arrays, NAvalue='N/A')[source]

Standardize an array of Categoricals so that they share the same categories.

Parameters:
  • arrays (sequence of Categoricals) – The Categoricals to standardize

  • NAvalue (str scalar) – The value to use to represent missing/null data

Returns:

A list of the original Categoricals remapped to the shared categories.

Return type:

List of Categoricals
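
Examples

A minimal sketch (assumes a connected Arkouda client):

>>> import arkouda as ak
>>> c1 = ak.Categorical(ak.array(['a', 'b']))
>>> c2 = ak.Categorical(ak.array(['b', 'c']))
>>> s1, s2 = ak.Categorical.standardize_categories([c1, c2])  # s1 and s2 now share one category set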

startswith(substr: bytes | arkouda.dtypes.str_scalars, regex: bool = False) arkouda.pdarrayclass.pdarray[source]

Check whether each element starts with the given substring.

Parameters:
  • substr (Union[bytes, str_scalars]) – The substring to search for

  • regex (bool) – Indicates whether substr is a regular expression Note: only handles regular expressions supported by re2 (does not support lookaheads/lookbehinds)

Returns:

True for elements that start with substr, False otherwise

Return type:

pdarray, bool

Raises:
  • TypeError – Raised if the substr parameter is not bytes or str_scalars

  • ValueError – Raised if substr is not a valid regex

  • RuntimeError – Raised if there is a server-side error thrown

Notes

This method can be significantly faster than the corresponding method on Strings objects, because it searches the unique category labels instead of the full array.

to_hdf(prefix_path, dataset='categorical_array', mode='truncate', file_type='distribute')[source]

Save the Categorical to HDF5. The result is a collection of HDF5 files, one file per locale of the arkouda server, where each filename starts with prefix_path.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files will share

  • dataset (str) – Name prefix for saved data within the HDF5 file

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, add data as a new column to existing files.

  • file_type (str ("single" | "distribute")) – Default: “distribute” When set to single, dataset is written to a single file. When distribute, dataset is written on a file per locale.

Return type:

None

See also

load

to_list() List[source]

Convert the Categorical to a list, transferring data from the arkouda server to Python. This conversion discards category information and produces a list of strings. If the array exceeds a built-in size limit, a RuntimeError is raised.

Returns:

A list of strings corresponding to the values in this Categorical

Return type:

list

Notes

The number of bytes in the Categorical cannot exceed ak.client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting ak.client.maxTransferBytes to a larger value, but proceed with caution.

to_ndarray() numpy.ndarray[source]

Convert the array to a np.ndarray, transferring array data from the arkouda server to Python. This conversion discards category information and produces an ndarray of strings. If the array exceeds a built-in size limit, a RuntimeError is raised.

Returns:

A numpy ndarray of strings corresponding to the values in this array

Return type:

np.ndarray

Notes

The number of bytes in the array cannot exceed ak.client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting ak.client.maxTransferBytes to a larger value, but proceed with caution.

to_parquet(prefix_path: str, dataset: str = 'categorical_array', mode: str = 'truncate', compression: str | None = None) str[source]

This functionality is currently not supported and will raise a RuntimeError; support is in development. Save the Categorical to Parquet. The result is a collection of files, one file per locale of the arkouda server, where each filename starts with prefix_path. Each locale saves its chunk of the array to its corresponding file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in HDF5 files (must not already exist)

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, create a new Categorical dataset within existing files.

  • compression (str (Optional)) – Default None Provide the compression type to use when writing the file. Supported values: snappy, gzip, brotli, zstd, lz4

Return type:

String message indicating result of save operation

Raises:

RuntimeError – Raised at runtime due to compatibility issues of Categorical with Parquet.

Notes

  • The prefix_path must be visible to the arkouda server and the user must have write permission.

  • Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales for file_type='distribute'.

  • 'append' write mode is supported, but is not efficient.

  • If any of the output files already exist and the mode is 'truncate', they will be overwritten. If the mode is 'append' and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result.

  • Any file extension can be used. The file I/O does not rely on the extension to determine the file format.

See also

to_hdf

to_strings() arkouda.strings.Strings[source]

Convert the Categorical to Strings.

Returns:

A Strings object corresponding to the values in this Categorical.

Return type:

arkouda.strings.Strings

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> a = ak.array(["a", "b", "c"])
>>> c = ak.Categorical(a)
>>> strings = c.to_strings()  # a Strings object with the same values
>>> isinstance(strings, ak.Strings)
True
transfer(hostname: str, port: arkouda.dtypes.int_scalars)[source]

Sends a Categorical object to a different Arkouda server

Parameters:
  • hostname (str) – The hostname where the Arkouda server intended to receive the Categorical is running.

  • port (int_scalars) – The port to send the array over. This needs to be an open port (i.e., not one that the Arkouda server is running on). This will open up numLocales ports, each used in succession, so ports in the range {port..(port+numLocales)} will be used (e.g., running an Arkouda server of 4 nodes with port 1234 passed as port, Arkouda will use ports 1234, 1235, 1236, and 1237 to send the array data). This port must match the port passed to the call to ak.receive_array().

Return type:

A message indicating a complete transfer

Raises:
  • ValueError – Raised if the op is not within the pdarray.BinOps set

  • TypeError – Raised if other is not a pdarray or the pdarray.dtype is not a supported dtype

unique() Categorical[source]
unregister() None[source]

Unregister this Categorical object in the arkouda server which was previously registered using register() and/or attached to using attach()

Raises:

RegistrationError – If the object is already unregistered or if there is a server error when attempting to unregister

Notes

Objects registered with the server are immune to deletion until they are unregistered.

static unregister_categorical_by_name(user_defined_name: str) None[source]

Function to unregister Categorical object by name which was registered with the arkouda server via register()

Parameters:

user_defined_name (str) – Name under which the Categorical object was registered

Raises:
  • TypeError – if user_defined_name is not a string

  • RegistrationError – if there is an issue attempting to unregister any underlying components

update_hdf(prefix_path, dataset='categorical_array', repack=True)[source]

Overwrite the dataset with the name provided with this Categorical object. If the dataset does not exist it is added.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files

  • repack (bool) – Default: True HDF5 does not release memory on delete. When True, the inaccessible data (that was overwritten) is removed. When False, the data remains, but is inaccessible. Setting to false will yield better performance, but will cause file sizes to expand.

Return type:

None

Raises:

RuntimeError – Raised if a server-side error is thrown saving the Categorical

Notes

  • If file does not contain File_Format attribute to indicate how it was saved, the file name is checked for _LOCALE#### to determine if it is distributed.

  • If the dataset provided does not exist, it will be added

  • Because HDF5 deletes do not release memory, the repack option allows for automatic creation of a file without the inaccessible data.

class arkouda.Categorical(values, **kwargs)[source]

Represents an array of values belonging to named categories. Converting a Strings object to Categorical often saves memory and speeds up operations, especially if there are many repeated values, at the cost of some one-time work in initialization.

Parameters:
  • values (Strings) – String values to convert to categories

  • NAvalue (str scalar) – The value to use to represent missing/null data

categories

The set of category labels (determined automatically)

Type:

Strings

codes

The category indices of the values or -1 for N/A

Type:

pdarray, int64

permutation

The permutation that groups the values in the same order as categories

Type:

pdarray, int64

segments

When values are grouped, the starting offset of each group

Type:

pdarray, int64

size

The number of items in the array

Type:

Union[int,np.int64]

nlevels

The number of distinct categories

Type:

Union[int,np.int64]

ndim

The rank of the array (currently only rank 1 arrays supported)

Type:

Union[int,np.int64]

shape

The sizes of each dimension of the array

Type:

tuple

property nbytes

The size of the Categorical in bytes.

Returns:

The size of the Categorical in bytes.

Return type:

int

BinOps
RegisterablePieces
RequiredPieces
dtype
objType = 'Categorical'
permutation
segments
argsort()[source]
static attach(user_defined_name: str) Categorical[source]

DEPRECATED Function to return a Categorical object attached to the registered name in the arkouda server which was registered using register()

Parameters:

user_defined_name (str) – user defined name which Categorical object was registered under

Returns:

The Categorical object created by re-attaching to the corresponding server components

Return type:

Categorical

Raises:

TypeError – if user_defined_name is not a string

concatenate(others: Sequence[Categorical], ordered: bool = True) Categorical[source]

Merge this Categorical with other Categorical objects in the array, concatenating the arrays and synchronizing the categories.

Parameters:
  • others (Sequence[Categorical]) – The Categorical arrays to concatenate and merge with this one

  • ordered (bool) – If True (default), the arrays will be appended in the order given. If False, array data may be interleaved in blocks, which can greatly improve performance but results in non-deterministic ordering of elements.

Returns:

The merged Categorical object

Return type:

Categorical

Raises:

TypeError – Raised if any others array objects are not Categorical objects

Notes

This operation can be expensive – slower than concatenating Strings.
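A hedged usage sketch (illustrative inputs, not from the upstream docstring):

>>> c1 = ak.Categorical(ak.array(['a', 'b']))
>>> c2 = ak.Categorical(ak.array(['b', 'c']))
>>> merged = c1.concatenate([c2])  # categories are synchronized across inputs
>>> merged.size
4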

contains(substr: bytes | arkouda.dtypes.str_scalars, regex: bool = False) arkouda.pdarrayclass.pdarray[source]

Check whether each element contains the given substring.

Parameters:
  • substr (Union[bytes, str_scalars]) – The substring to search for

  • regex (bool) – Indicates whether substr is a regular expression. Note: only handles regular expressions supported by re2 (does not support lookaheads/lookbehinds)

Returns:

True for elements that contain substr, False otherwise

Return type:

pdarray, bool

Raises:
  • TypeError – Raised if the substr parameter is not bytes or str_scalars

  • ValueError – Raised if substr is not a valid regex

  • RuntimeError – Raised if there is a server-side error thrown

Notes

This method can be significantly faster than the corresponding method on Strings objects, because it searches the unique category labels instead of the full array.
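A short sketch under assumed inputs (boolean output shown in the array format used elsewhere in these docs):

>>> cat = ak.Categorical(ak.array(['apple', 'banana', 'apple']))
>>> cat.contains('an')  # only 'banana' contains the substring 'an'
array([False, True, False])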

endswith(substr: bytes | arkouda.dtypes.str_scalars, regex: bool = False) arkouda.pdarrayclass.pdarray[source]

Check whether each element ends with the given substring.

Parameters:
  • substr (Union[bytes, str_scalars]) – The substring to search for

  • regex (bool) – Indicates whether substr is a regular expression. Note: only handles regular expressions supported by re2 (does not support lookaheads/lookbehinds)

Returns:

True for elements that end with substr, False otherwise

Return type:

pdarray, bool

Raises:
  • TypeError – Raised if the substr parameter is not bytes or str_scalars

  • ValueError – Raised if substr is not a valid regex

  • RuntimeError – Raised if there is a server-side error thrown

Notes

This method can be significantly faster than the corresponding method on Strings objects, because it searches the unique category labels instead of the full array.

classmethod from_codes(codes: arkouda.pdarrayclass.pdarray, categories: arkouda.strings.Strings, permutation=None, segments=None, **kwargs) Categorical[source]

Make a Categorical from codes and categories arrays. If codes and categories have already been pre-computed, this constructor saves time. If not, please use the normal constructor.

Parameters:
  • codes (pdarray, int64) – Category indices of each value

  • categories (Strings) – Unique category labels

  • permutation (pdarray, int64) – The permutation that groups the values in the same order as categories

  • segments (pdarray, int64) – When values are grouped, the starting offset of each group

Returns:

The Categorical object created from the input parameters

Return type:

Categorical

Raises:

TypeError – Raised if codes is not a pdarray of int64 objects or if categories is not a Strings object
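A minimal sketch, assuming codes and categories were computed elsewhere:

>>> categories = ak.array(['a', 'b'])  # unique labels (illustrative)
>>> codes = ak.array([0, 1, 1, 0])     # int64 indices into categories
>>> cat = ak.Categorical.from_codes(codes, categories)
>>> cat.size
4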

classmethod from_return_msg(rep_msg) Categorical[source]

Create a Categorical from a return message from the server

Notes

This is currently only used when reading a Categorical from HDF5 files.

group() arkouda.pdarrayclass.pdarray[source]

Return the permutation that groups the array, placing equivalent categories together. All instances of the same category are guaranteed to lie in one contiguous block of the permuted array, but the blocks are not necessarily ordered.

Returns:

The permutation that groups the array by value

Return type:

pdarray

See also

GroupBy, unique

Notes

This method is faster than the corresponding Strings method. If the Categorical was created from a Strings object, then this function simply returns the cached permutation. Even if the Categorical was created using from_codes(), this function will be faster than Strings.group() because it sorts dense integer values, rather than 128-bit hash values.
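A hedged sketch (illustrative inputs): applying the returned permutation gathers equal categories into contiguous blocks.

>>> cat = ak.Categorical(ak.array(['b', 'a', 'b']))
>>> perm = cat.group()
>>> grouped = cat[perm]  # equal categories are now adjacent; block order unspecified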

hash() Tuple[arkouda.pdarrayclass.pdarray, arkouda.pdarrayclass.pdarray][source]

Compute a 128-bit hash of each element of the Categorical.

Returns:

A tuple of two int64 pdarrays. The ith hash value is the concatenation of the ith values from each array.

Return type:

Tuple[pdarray,pdarray]

Notes

The implementation uses SipHash128, a fast and balanced hash function (used by Python for dictionaries and sets). For realistic numbers of strings (up to about 10**15), the probability of a collision between two 128-bit hash values is negligible.
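A minimal sketch (illustrative inputs; hash values are not reproducible here, so no output is shown):

>>> cat = ak.Categorical(ak.array(['a', 'b']))
>>> upper, lower = cat.hash()  # two int64 pdarrays, one per 64-bit half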

in1d(test: arkouda.strings.Strings | Categorical) arkouda.pdarrayclass.pdarray[source]

Test whether each element of the Categorical object is also present in the test Strings or Categorical object.

Returns a boolean array the same length as self that is True where an element of self is in test and False otherwise.

Parameters:

test (Union[Strings, Categorical]) – The values against which to test each value of self.

Returns:

The values self[in1d] are in the test Strings or Categorical object.

Return type:

pdarray, bool

Raises:

TypeError – Raised if test is not a Strings or Categorical object

Notes

in1d can be considered as an element-wise function version of the python keyword in, for 1-D sequences. in1d(a, b) is logically equivalent to ak.array([item in b for item in a]), but is much faster and scales to arbitrarily large a.

Examples

>>> strings = ak.array([f'String {i}' for i in range(0,5)])
>>> cat = ak.Categorical(strings)
>>> ak.in1d(cat,strings)
array([True, True, True, True, True])
>>> strings = ak.array([f'String {i}' for i in range(5,9)])
>>> catTwo = ak.Categorical(strings)
>>> ak.in1d(cat,catTwo)
array([False, False, False, False, False])
info() str[source]

Returns a JSON formatted string containing information about all components of self

Parameters:

None

Returns:

JSON string containing information about all components of self

Return type:

str

is_registered() numpy.bool_[source]

Return True iff the object is contained in the registry or is a component of a registered object.

Returns:

Indicates if the object is contained in the registry

Return type:

numpy.bool_

Raises:

RegistrationError – Raised if there’s a server-side error or a mismatch of registered components

Notes

Objects registered with the server are immune to deletion until they are unregistered.

isna()[source]

Find where values are missing or null (as defined by self.NAvalue)
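A hedged sketch, assuming 'N/A' is used as the NAvalue:

>>> cat = ak.Categorical(ak.array(['a', 'N/A', 'b']), NAvalue='N/A')
>>> cat.isna()  # True where the value equals the NA value
array([False, True, False])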

static parse_hdf_categoricals(d: Mapping[str, arkouda.pdarrayclass.pdarray | arkouda.strings.Strings]) Tuple[List[str], Dict[str, Categorical]][source]

This function should be used in conjunction with the load_all function, which reads HDF5 files and reconstitutes Categorical objects. Categorical objects use a naming convention and HDF5 structure so they can be identified and constructed for the user.

In general, you should not call this method directly.

Parameters:

d (Dictionary of String to either Pdarray or Strings object)

Returns:

A 2-tuple: (1) a list of key names that should be removed, and (2) a dictionary mapping base name to Categorical object.

pretty_print_info() None[source]

Prints information about all components of self in a human readable format

Parameters:

None

Return type:

None

register(user_defined_name: str) Categorical[source]

Register this Categorical object and underlying components with the Arkouda server

Parameters:

user_defined_name (str) – user defined name the Categorical is to be registered under, this will be the root name for underlying components

Returns:

The same Categorical which is now registered with the arkouda server and has an updated name. This is an in-place modification, the original is returned to support a fluid programming style. Please note you cannot register two different Categoricals with the same name.

Return type:

Categorical

Raises:
  • TypeError – Raised if user_defined_name is not a str

  • RegistrationError – If the server was unable to register the Categorical with the user_defined_name

Notes

Objects registered with the server are immune to deletion until they are unregistered.
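A hedged round-trip sketch; the name 'my_categorical' is hypothetical and requires a running server:

>>> cat = ak.Categorical(ak.array(['a', 'b']))
>>> cat = cat.register('my_categorical')  # hypothetical registration name
>>> cat.is_registered()
True
>>> cat.unregister()
>>> cat.is_registered()
False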

reset_categories() Categorical[source]

Recompute the category labels, discarding any unused labels. This method is often useful after slicing or indexing a Categorical array, when the resulting array only contains a subset of the original categories. In this case, eliminating unused categories can speed up other operations.

Returns:

A Categorical object generated from the current instance

Return type:

Categorical
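A minimal sketch (illustrative inputs): an indexed subset keeps all original categories until reset_categories() recomputes them.

>>> cat = ak.Categorical(ak.array(['a', 'b', 'c']))
>>> head = cat[ak.array([0, 1])]       # still carries the unused category 'c'
>>> trimmed = head.reset_categories()  # unused labels discarded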

save(prefix_path: str, dataset: str = 'categorical_array', file_format: str = 'HDF5', mode: str = 'truncate', file_type: str = 'distribute', compression: str | None = None) str[source]

DEPRECATED Save the Categorical object to HDF5 or Parquet. The result is a collection of HDF5/Parquet files, one file per locale of the arkouda server, where each filename starts with prefix_path and dataset. Each locale saves its chunk of the Strings array to its corresponding file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in HDF5 files (must not already exist)

  • file_format (str {'HDF5' | 'Parquet'}) – The format to save the file to.

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, create a new Categorical dataset within existing files.

  • file_type (str ("single" | "distribute")) – Default: “distribute”. When set to single, the dataset is written to a single file. When distribute, the dataset is written to one file per locale. This is only supported by HDF5 files and has no impact on Parquet files.

  • compression (str (Optional)) – {None | ‘snappy’ | ‘gzip’ | ‘brotli’ | ‘zstd’ | ‘lz4’} The compression type to use when writing. This is only supported for Parquet files and will not be used with HDF5.

Return type:

String message indicating result of save operation

Raises:
  • ValueError – Raised if the lengths of columns and values differ, or the mode is neither ‘truncate’ nor ‘append’

  • TypeError – Raised if prefix_path, dataset, or mode is not a str

Notes

Important implementation notes: (1) Strings state is saved as two datasets within an hdf5 group: one for the string characters and one for the segments corresponding to the start of each string, (2) the hdf5 group is named via the dataset parameter.


set_categories(new_categories, NAvalue=None)[source]

Set categories to user-defined values.

Parameters:
  • new_categories (Strings) – The array of new categories to use. Must be unique.

  • NAvalue (str scalar) – The value to use to represent missing/null data

Returns:

A new Categorical with the user-defined categories. Old values present in new categories will appear unchanged. Old values not present will be assigned the NA value.

Return type:

Categorical
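A hedged sketch (illustrative inputs): values absent from the new categories map to the NA value.

>>> cat = ak.Categorical(ak.array(['a', 'b', 'a']))
>>> remapped = cat.set_categories(ak.array(['a', 'c']))  # 'b' becomes NA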

sort()[source]
classmethod standardize_categories(arrays, NAvalue='N/A')[source]

Standardize an array of Categoricals so that they share the same categories.

Parameters:
  • arrays (sequence of Categoricals) – The Categoricals to standardize

  • NAvalue (str scalar) – The value to use to represent missing/null data

Returns:

A list of the original Categoricals remapped to the shared categories.

Return type:

List of Categoricals
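A minimal sketch (illustrative inputs): after standardization both results share one category set.

>>> c1 = ak.Categorical(ak.array(['a', 'b']))
>>> c2 = ak.Categorical(ak.array(['b', 'c']))
>>> s1, s2 = ak.Categorical.standardize_categories([c1, c2])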

startswith(substr: bytes | arkouda.dtypes.str_scalars, regex: bool = False) arkouda.pdarrayclass.pdarray[source]

Check whether each element starts with the given substring.

Parameters:
  • substr (Union[bytes, str_scalars]) – The substring to search for

  • regex (bool) – Indicates whether substr is a regular expression. Note: only handles regular expressions supported by re2 (does not support lookaheads/lookbehinds)

Returns:

True for elements that start with substr, False otherwise

Return type:

pdarray, bool

Raises:
  • TypeError – Raised if the substr parameter is not bytes or str_scalars

  • ValueError – Raised if substr is not a valid regex

  • RuntimeError – Raised if there is a server-side error thrown

Notes

This method can be significantly faster than the corresponding method on Strings objects, because it searches the unique category labels instead of the full array.

to_hdf(prefix_path, dataset='categorical_array', mode='truncate', file_type='distribute')[source]

Save the Categorical to HDF5. The result is a collection of HDF5 files, one file per locale of the arkouda server, where each filename starts with prefix_path.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files will share

  • dataset (str) – Name prefix for saved data within the HDF5 file

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, add data as a new column to existing files.

  • file_type (str ("single" | "distribute")) – Default: “distribute” When set to single, dataset is written to a single file. When distribute, dataset is written on a file per locale.

Return type:

None

See also

load
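A hedged sketch; the prefix path below is hypothetical and must be writable by the arkouda server:

>>> cat = ak.Categorical(ak.array(['a', 'b']))
>>> cat.to_hdf('/tmp/categorical_example')      # hypothetical path
>>> cat.update_hdf('/tmp/categorical_example')  # overwrite/extend the same dataset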

to_list() List[source]

Convert the Categorical to a list, transferring data from the arkouda server to Python. This conversion discards category information and produces a list of strings. If the array exceeds a built-in size limit, a RuntimeError is raised.

Returns:

A list of strings corresponding to the values in this Categorical

Return type:

list

Notes

The number of bytes in the Categorical cannot exceed ak.client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting ak.client.maxTransferBytes to a larger value, but proceed with caution.
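A minimal sketch (illustrative inputs):

>>> cat = ak.Categorical(ak.array(['a', 'b']))
>>> cat.to_list()
['a', 'b']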

to_ndarray() numpy.ndarray[source]

Convert the array to a np.ndarray, transferring array data from the arkouda server to Python. This conversion discards category information and produces an ndarray of strings. If the array exceeds a built-in size limit, a RuntimeError is raised.

Returns:

A numpy ndarray of strings corresponding to the values in this array

Return type:

np.ndarray

Notes

The number of bytes in the array cannot exceed ak.client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting ak.client.maxTransferBytes to a larger value, but proceed with caution.

to_parquet(prefix_path: str, dataset: str = 'categorical_array', mode: str = 'truncate', compression: str | None = None) str[source]

This functionality is currently not supported and will raise a RuntimeError. Support is in development. Save the Categorical to Parquet. The result is a collection of files, one file per locale of the arkouda server, where each filename starts with prefix_path. Each locale saves its chunk of the array to its corresponding file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in HDF5 files (must not already exist)

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, create a new Categorical dataset within existing files.

  • compression (str (Optional)) – Default None Provide the compression type to use when writing the file. Supported values: snappy, gzip, brotli, zstd, lz4

Return type:

String message indicating result of save operation

Raises:

RuntimeError – Raised on invocation due to compatibility issues of Categorical with Parquet.

Notes

  • The prefix_path must be visible to the arkouda server and the user must have write permission.

  • Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales for file_type=’distribute’.

  • ‘append’ write mode is supported, but is not efficient.

  • If any of the output files already exist and the mode is ‘truncate’, they will be overwritten. If the mode is ‘append’ and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result.

  • Any file extension can be used. The file I/O does not rely on the extension to determine the file format.

See also

to_hdf

to_strings() List[source]

Convert the Categorical to Strings.

Returns:

A Strings object corresponding to the values in this Categorical.

Return type:

arkouda.strings.Strings

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> a = ak.array(["a","b","c"])
>>> c = ak.Categorical(a)
>>> c.to_strings()
array(['a', 'b', 'c'])
>>> isinstance(c.to_strings(), ak.Strings)
True
transfer(hostname: str, port: arkouda.dtypes.int_scalars)[source]

Sends a Categorical object to a different Arkouda server

Parameters:
  • hostname (str) – The hostname where the Arkouda server intended to receive the Categorical is running.

  • port (int_scalars) – The port to send the array over. This needs to be an open port (i.e., not one that the Arkouda server is running on). This will open numLocales ports in succession, using ports in the range {port..(port+numLocales)} (e.g., when an Arkouda server of 4 nodes is passed port=1234, Arkouda will use ports 1234, 1235, 1236, and 1237 to send the array data). This port must match the port passed to the call to ak.receive_array().

Return type:

A message indicating a complete transfer

Raises:
  • ValueError – Raised if the op is not within the pdarray.BinOps set

  • TypeError – Raised if other is not a pdarray or the pdarray.dtype is not a supported dtype

unique() Categorical[source]
unregister() None[source]

Unregister this Categorical object in the arkouda server which was previously registered using register() and/or attached to using attach()

Raises:

RegistrationError – If the object is already unregistered or if there is a server error when attempting to unregister

Notes

Objects registered with the server are immune to deletion until they are unregistered.

static unregister_categorical_by_name(user_defined_name: str) None[source]

Unregister a Categorical object by name that was previously registered with the arkouda server via register()

Parameters:

user_defined_name (str) – Name under which the Categorical object was registered

Raises:
  • TypeError – if user_defined_name is not a string

  • RegistrationError – if there is an issue attempting to unregister any underlying components

update_hdf(prefix_path, dataset='categorical_array', repack=True)[source]

Overwrite the dataset with the provided name using this Categorical object's data. If the dataset does not exist, it is added.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files

  • repack (bool) – Default: True. HDF5 does not release memory on delete. When True, the inaccessible data (that was overwritten) is removed. When False, the data remains, but is inaccessible. Setting repack to False will yield better performance but will cause file sizes to grow.

Return type:

None

Raises:

RuntimeError – Raised if a server-side error is thrown saving the Categorical

Notes

  • If the file does not contain a File_Format attribute indicating how it was saved, the file name is checked for _LOCALE#### to determine whether it is distributed.

  • If the dataset provided does not exist, it will be added

  • Because HDF5 deletes do not release memory, the repack option allows for automatic creation of a file without the inaccessible data.

arkouda.DTypeObjects
arkouda.DTypes
class arkouda.DataFrame(initialdata=None, index=None, columns=None)[source]

Bases: collections.UserDict

A DataFrame structure based on arkouda arrays.

Parameters:
  • initialdata (List or dictionary of lists, tuples, or pdarrays) – Each list/dictionary entry corresponds to one column of the data and should be a homogeneous type. Different columns may have different types. If using a dictionary, keys should be strings.

  • index (Index, pdarray, or Strings) – Index for the resulting frame. Defaults to an integer range.

  • columns (List, tuple, pdarray, or Strings) – Column labels to use if the data does not include them. Elements must be strings. Defaults to a stringified integer range.

Examples

Create an empty DataFrame and add a column of data:

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame()
>>> df['a'] = ak.array([1,2,3])
>>> display(df)

   a
0  1
1  2
2  3

Create a new DataFrame using a dictionary of data:

>>> userName = ak.array(['Alice', 'Bob', 'Alice', 'Carol', 'Bob', 'Alice'])
>>> userID = ak.array([111, 222, 111, 333, 222, 111])
>>> item = ak.array([0, 0, 1, 1, 2, 0])
>>> day = ak.array([5, 5, 6, 5, 6, 6])
>>> amount = ak.array([0.5, 0.6, 1.1, 1.2, 4.3, 0.6])
>>> df = ak.DataFrame({'userName': userName, 'userID': userID,
...                    'item': item, 'day': day, 'amount': amount})
>>> display(df)

  userName  userID  item  day  amount
0    Alice     111     0    5     0.5
1      Bob     222     0    5     0.6
2    Alice     111     1    6     1.1
3    Carol     333     1    5     1.2
4      Bob     222     2    6     4.3
5    Alice     111     0    6     0.6

Indexing works slightly differently than with pandas:

>>> df[0]

keys      values
userName  Alice
userID    111
item      0
day       5
amount    0.5

>>> df['userID']
array([111, 222, 111, 333, 222, 111])
>>> df['userName']
array(['Alice', 'Bob', 'Alice', 'Carol', 'Bob', 'Alice'])
>>> df[ak.array([1,3,5])]

  userName  userID  item  day  amount
0      Bob     222     0    5     0.6
1    Carol     333     1    5     1.2
2    Alice     111     0    6     0.6

Compute the stride:

>>> df[1:5:1]

  userName  userID  item  day  amount
0      Bob     222     0    5     0.6
1    Alice     111     1    6     1.1
2    Carol     333     1    5     1.2
3      Bob     222     2    6     4.3

>>> df[ak.array([1,2,3])]

  userName  userID  item  day  amount
0      Bob     222     0    5     0.6
1    Alice     111     1    6     1.1
2    Carol     333     1    5     1.2

>>> df[['userID', 'day']]

   userID  day
0     111    5
1     222    5
2     111    6
3     333    5
4     222    6
5     111    6

property columns

An Index where the values are the column names of the dataframe.

Returns:

The values of the index are the column names of the dataframe.

Return type:

arkouda.index.Index

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> df

   col1  col2
0     1     3
1     2     4

>>> df.columns
Index(array(['col1', 'col2']), dtype='<U0')
property dtypes

The dtypes of the dataframe.

Returns:

dtypes – The dtypes of the dataframe.

Return type:

arkouda.row.Row

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': ["a", "b"]})
>>> df

   col1 col2
0     1    a
1     2    b

>>> df.dtypes

keys  values
col1  int64
col2  str

property empty

Whether the dataframe is empty.

Returns:

True if the dataframe is empty, otherwise False.

Return type:

bool

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({})
>>> df
 0 rows x 0 columns
>>> df.empty
True
property index

The index of the dataframe.

Returns:

The index of the dataframe.

Return type:

arkouda.index.Index or arkouda.index.MultiIndex

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> df

   col1  col2
0     1     3
1     2     4

>>> df.index
Index(array([0 1]), dtype='int64')
property info

Returns a summary string of this dataframe.

Returns:

A summary string of this dataframe.

Return type:

str

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': ["a", "b"]})
>>> df

   col1 col2
0     1    a
1     2    b

>>> df.info
"DataFrame(['col1', 'col2'], 2 rows, 20 B)"
property shape

The shape of the dataframe.

Returns:

Tuple of array dimensions.

Return type:

tuple of int

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df

   col1  col2
0     1     4
1     2     5
2     3     6

>>> df.shape
(3, 2)
property size

Returns the number of bytes on the arkouda server.

Returns:

The number of bytes on the arkouda server.

Return type:

int

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df

   col1  col2
0     1     4
1     2     5
2     3     6

>>> df.size
6
objType = 'DataFrame'
GroupBy(keys, use_series=False, as_index=True, dropna=True)[source]

Group the dataframe by a column or a list of columns.

Parameters:
  • keys (str or list of str) – An (ordered) list of column names or a single string to group by.

  • use_series (bool, default=False) – If True, returns an arkouda.dataframe.GroupBy object. Otherwise an arkouda.groupbyclass.GroupBy object.

  • as_index (bool, default=True) – If True, groupby columns will be set as index; otherwise, the groupby columns will be treated as DataFrame columns.

  • dropna (bool, default=True) – If True, and the groupby keys contain NaN values, the NaN values together with the corresponding row will be dropped. Otherwise, the rows corresponding to NaN values will be kept.

Returns:

If use_series = True, returns an arkouda.dataframe.GroupBy object. Otherwise returns an arkouda.groupbyclass.GroupBy object.

Return type:

arkouda.dataframe.GroupBy or arkouda.groupbyclass.GroupBy

See also

arkouda.GroupBy

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame({'col1': [1.0, 1.0, 2.0, np.nan], 'col2': [4, 5, 6, 7]})
>>> df

   col1  col2
0     1     4
1     1     5
2     2     6
3   nan     7

>>> df.GroupBy("col1")
<arkouda.groupbyclass.GroupBy at 0x7f2cf23e10c0>
>>> df.GroupBy("col1").size()
(array([1.00000000000000000 2.00000000000000000]), array([2 1]))
>>> df.GroupBy("col1",use_series=True)
col1
1.0    2
2.0    1
dtype: int64
>>> df.GroupBy("col1",use_series=True, as_index = False).size()

   col1  size
0     1     2
1     2     1

all(axis=0) arkouda.series.Series | bool[source]

Return whether all elements are True, potentially over an axis.

Returns True unless there is at least one element along a DataFrame axis that is False.

Currently, will ignore any columns that are not type bool. This is equivalent to the pandas option bool_only=True.

Parameters:

axis ({0 or ‘index’, 1 or ‘columns’, None}, default = 0) –

Indicate which axis or axes should be reduced.

0 / ‘index’ : reduce the index, return a Series whose index is the original column labels.

1 / ‘columns’ : reduce the columns, return a Series whose index is the original index.

None : reduce all axes, return a scalar.

Return type:

arkouda.series.Series or bool

Raises:

ValueError – Raised if axis does not have a value in {0 or ‘index’, 1 or ‘columns’, None}.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({"A":[True,True,True,False],"B":[True,True,True,False],
...          "C":[True,False,True,False],"D":[True,True,True,True]})

       A      B      C     D
0   True   True   True  True
1   True   True  False  True
2   True   True   True  True
3  False  False  False  True

>>> df.all(axis=0)
A    False
B    False
C    False
D     True
dtype: bool
>>> df.all(axis=1)
0     True
1    False
2     True
3    False
dtype: bool
>>> df.all(axis=None)
False
any(axis=0) arkouda.series.Series | bool[source]

Return whether any element is True, potentially over an axis.

Returns False unless there is at least one element along a DataFrame axis that is True.

Currently, will ignore any columns that are not type bool. This is equivalent to the pandas option bool_only=True.

Parameters:

axis ({0 or ‘index’, 1 or ‘columns’, None}, default = 0) –

Indicate which axis or axes should be reduced.

0 / ‘index’ : reduce the index, return a Series whose index is the original column labels.

1 / ‘columns’ : reduce the columns, return a Series whose index is the original index.

None : reduce all axes, return a scalar.

Return type:

arkouda.series.Series or bool

Raises:

ValueError – Raised if axis does not have a value in {0 or ‘index’, 1 or ‘columns’, None}.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({"A":[True,True,True,False],"B":[True,True,True,False],
...          "C":[True,False,True,False],"D":[False,False,False,False]})

       A      B      C      D
0   True   True   True  False
1   True   True  False  False
2   True   True   True  False
3  False  False  False  False

>>> df.any(axis=0)
A     True
B     True
C     True
D    False
dtype: bool
>>> df.any(axis=1)
0     True
1     True
2     True
3    False
dtype: bool
>>> df.any(axis=None)
True
append(other, ordered=True)[source]

Concatenate data from ‘other’ onto the end of this DataFrame, in place.

Explicitly, use the arkouda concatenate function to append the data from each column in other to the end of self. This operation is done in place, in the sense that the underlying pdarrays are updated from the result of the arkouda concatenate function, rather than returning a new DataFrame object containing the result.

Parameters:
  • other (DataFrame) – The DataFrame object whose data will be appended to this DataFrame.

  • ordered (bool, default=True) – If False, allow rows to be interleaved for better performance (but data within a row remains together). By default, append all rows to the end, in input order.

Returns:

Appending occurs in-place, but result is returned for compatibility.

Return type:

self

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df1 = ak.DataFrame({'col1': [1, 2], 'col2': [3, 4]})

   col1  col2
0     1     3
1     2     4

>>> df2 = ak.DataFrame({'col1': [3], 'col2': [5]})

   col1  col2
0     3     5

>>> df1.append(df2)
>>> df1

   col1  col2
0     1     3
1     2     4
2     3     5

apply_permutation(perm)[source]

Apply a permutation to an entire DataFrame. The operation is done in place and the original DataFrame will be modified.

This may be useful if you want to unsort a DataFrame, or even to apply an arbitrary permutation such as the inverse of a sorting permutation.

Parameters:

perm (pdarray) – A permutation array. Should be the same size as the data arrays, and should consist of the integers [0,size-1] in some order. Very minimal testing is done to ensure this is a permutation.

Return type:

None

See also

sort

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})

   col1  col2
0     1     4
1     2     5
2     3     6

>>> perm_arry = ak.array([0, 2, 1])
>>> df.apply_permutation(perm_arry)
>>> display(df)

   col1  col2
0     1     4
1     3     6
2     2     5

argsort(key, ascending=True)[source]

Return the permutation that sorts the dataframe by key.

Parameters:
  • key (str) – The key to sort on.

  • ascending (bool, default = True) – If true, sort the key in ascending order. Otherwise, sort the key in descending order.

Returns:

The permutation array that sorts the data on key.

Return type:

arkouda.pdarrayclass.pdarray

See also

coargsort

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1.1, 3.1, 2.1], 'col2': [6, 5, 4]})
>>> display(df)

   col1  col2
0   1.1     6
1   3.1     5
2   2.1     4

>>> df.argsort('col1')
array([0 2 1])
>>> sorted_df1 = df[df.argsort('col1')]
>>> display(sorted_df1)

   col1  col2
0   1.1     6
1   2.1     4
2   3.1     5

>>> df.argsort('col2')
array([2 1 0])
>>> sorted_df2 = df[df.argsort('col2')]
>>> display(sorted_df2)

   col1  col2
0   2.1     4
1   3.1     5
2   1.1     6

static attach(user_defined_name: str) DataFrame[source]

Function to return a DataFrame object attached to the registered name in the arkouda server which was registered using register().

Parameters:

user_defined_name (str) – user defined name which DataFrame object was registered under.

Returns:

The DataFrame object created by re-attaching to the corresponding server components.

Return type:

arkouda.dataframe.DataFrame

Raises:

RegistrationError – if user_defined_name is not registered

Example

>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df.register("my_table_name")
>>> df.attach("my_table_name")
>>> df.is_registered()
True
>>> df.unregister()
>>> df.is_registered()
False
coargsort(keys, ascending=True)[source]

Return the permutation that sorts the dataframe by keys.

Note: Sorting using Strings may not yield correct sort order.

Parameters:

keys (list of str) – The keys to sort on.

Returns:

The permutation array that sorts the data on keys.

Return type:

arkouda.pdarrayclass.pdarray

Example

>>> df = ak.DataFrame({'col1': [2, 2, 1], 'col2': [3, 4, 3], 'col3':[5, 6, 7]})
>>> display(df)

   col1  col2  col3
0     2     3     5
1     2     4     6
2     1     3     7

>>> df.coargsort(['col1', 'col2'])
array([2 0 1])
classmethod concat(items, ordered=True)[source]

Essentially an append, but different formatting.
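A minimal sketch (illustrative inputs; this example is not from the upstream docstring):

>>> df1 = ak.DataFrame({'col1': [1, 2]})
>>> df2 = ak.DataFrame({'col1': [3]})
>>> df3 = ak.DataFrame.concat([df1, df2])  # builds one DataFrame from the list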

copy(deep=True)[source]

Make a copy of this object’s data.

When deep = True (default), a new object will be created with a copy of the calling object’s data. Modifications to the data of the copy will not be reflected in the original object.

When deep = False a new object will be created without copying the calling object’s data. Any changes to the data of the original object will be reflected in the shallow copy, and vice versa.

Parameters:

deep (bool, default=True) – When True, return a deep copy. Otherwise, return a shallow copy.

Returns:

A deep or shallow copy according to caller specification.

Return type:

arkouda.dataframe.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> display(df)

   col1  col2
0     1     3
1     2     4

>>> df_deep = df.copy(deep=True)
>>> df_deep['col1'] +=1
>>> display(df)

   col1  col2
0     1     3
1     2     4

>>> df_shallow = df.copy(deep=False)
>>> df_shallow['col1'] +=1
>>> display(df)

   col1  col2
0     2     3
1     3     4

corr() DataFrame[source]

Return new DataFrame with pairwise correlation of columns.

Returns:

Arkouda DataFrame containing correlation matrix of all columns.

Return type:

arkouda.dataframe.DataFrame

Raises:

RuntimeError – Raised if there’s a server-side error thrown.

See also

pdarray.corr

Notes

Generates the correlation matrix using Pearson R for all columns.

Attempts to convert to numeric values where possible for inclusion in the matrix.

Example

>>> df = ak.DataFrame({'col1': [1, 2], 'col2': [-1, -2]})
>>> display(df)

   col1  col2
0     1    -1
1     2    -2

>>> corr = df.corr()
>>> display(corr)

      col1  col2
col1     1    -1
col2    -1     1

count(axis: int | str = 0, numeric_only=False) arkouda.series.Series[source]

Count non-NA cells for each column or row.

The values np.NaN are considered NA.

Parameters:
  • axis ({0 or 'index', 1 or 'columns'}, default 0) – If 0 or ‘index’ counts are generated for each column. If 1 or ‘columns’ counts are generated for each row.

  • numeric_only (bool = False) – Include only float, int or boolean data.

Returns:

For each column/row the number of non-NA/null entries.

Return type:

arkouda.series.Series

Raises:

ValueError – Raised if axis is not 0, 1, ‘index’, or ‘columns’.

See also

GroupBy.count

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame({'col_A': ak.array([7, np.nan]), 'col_B':ak.array([1, 9])})
>>> display(df)

   col_A  col_B
0      7      1
1    nan      9

>>> df.count()
col_A    1
col_B    2
dtype: int64
>>> df = ak.DataFrame({'col_A': ak.array(["a","b","c"]), 'col_B':ak.array([1, np.nan, np.nan])})
>>> display(df)

   col_A  col_B
0      a      1
1      b    nan
2      c    nan

>>> df.count()
col_A    3
col_B    1
dtype: int64
>>> df.count(numeric_only=True)
col_B    1
dtype: int64
>>> df.count(axis=1)
0    2
1    1
2    1
dtype: int64
drop(keys: str | int | List[str | int], axis: str | int = 0, inplace: bool = False) None | DataFrame[source]

Drop column/s or row/s from the dataframe.

Parameters:
  • keys (str, int or list) – The labels to be dropped on the given axis.

  • axis (int or str) – The axis on which to drop from. 0/’index’ - drop rows, 1/’columns’ - drop columns.

  • inplace (bool, default=False) – When True, perform the operation on the calling object. When False, return a new object.

Returns:

DataFrame when inplace=False; None when inplace=True.

Return type:

arkouda.dataframe.DataFrame or None

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> display(df)

   col1  col2
0     1     3
1     2     4

Drop column

>>> df.drop('col1', axis = 1)

   col2
0     3
1     4

Drop row

>>> df.drop(0, axis = 0)

   col1  col2
0     2     4
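
With inplace=True the calling object is modified and None is returned; a sketch continuing from the dataframe above:

>>> df.drop('col1', axis=1, inplace=True)
>>> display(df)

   col2
0     3
1     4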

drop_duplicates(subset=None, keep='first')[source]

Drop duplicated rows and return the resulting DataFrame.

If a subset of the columns are provided then only one instance of each duplicated row will be returned (keep determines which row).

Parameters:
  • subset (Iterable) – Iterable of column names to use to dedupe.

  • keep ({'first', 'last'}, default='first') – Determines which duplicates (if any) to keep.

Returns:

DataFrame with duplicates removed.

Return type:

arkouda.dataframe.DataFrame

Example

>>> df = ak.DataFrame({'col1': [1, 2, 2, 3], 'col2': [4, 5, 5, 6]})
>>> display(df)

   col1  col2
0     1     4
1     2     5
2     2     5
3     3     6

>>> df.drop_duplicates()

   col1  col2
0     1     4
1     2     5
2     3     6
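
A subset restricts which columns are compared; a sketch on the same dataframe (here the result matches the full dedupe, since the duplicated rows agree on both columns):

>>> df.drop_duplicates(subset=['col1'])

   col1  col2
0     1     4
1     2     5
2     3     6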

dropna(axis: int | str = 0, how: str | None = None, thresh: int | None = None, ignore_index: bool = False) DataFrame[source]

Remove missing values.

Parameters:
  • axis ({0 or 'index', 1 or 'columns'}, default = 0) –

    Determine if rows or columns which contain missing values are removed.

    0, or ‘index’: Drop rows which contain missing values.

    1, or ‘columns’: Drop columns which contain missing value.

    Only a single axis is allowed.

  • how ({'any', 'all'}, default='any') –

    Determine if row or column is removed from DataFrame, when we have at least one NA or all NA.

    ’any’: If any NA values are present, drop that row or column.

    ’all’: If all values are NA, drop that row or column.

  • thresh (int, optional) – Require that many non-NA values. Cannot be combined with how.

  • ignore_index (bool, default False) – If True, the resulting axis will be labeled 0, 1, …, n - 1.

Returns:

DataFrame with NA entries dropped from it.

Return type:

arkouda.dataframe.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame(
...     {
...         "A": [True, True, True, True],
...         "B": [1, np.nan, 2, np.nan],
...         "C": [1, 2, 3, np.nan],
...         "D": [False, False, False, False],
...         "E": [1, 2, 3, 4],
...         "F": ["a", "b", "c", "d"],
...         "G": [1, 2, 3, 4],
...     }
... )
>>> display(df)

      A    B    C      D  E  F  G
0  True    1    1  False  1  a  1
1  True  nan    2  False  2  b  2
2  True    2    3  False  3  c  3
3  True  nan  nan  False  4  d  4

>>> df.dropna()

      A  B  C      D  E  F  G
0  True  1  1  False  1  a  1
1  True  2  3  False  3  c  3

>>> df.dropna(axis=1)

      A      D  E  F  G
0  True  False  1  a  1
1  True  False  2  b  2
2  True  False  3  c  3
3  True  False  4  d  4

>>> df.dropna(axis=1, thresh=3)

      A    C      D  E  F  G
0  True    1  False  1  a  1
1  True    2  False  2  b  2
2  True    3  False  3  c  3
3  True  nan  False  4  d  4

>>> df.dropna(axis=1, how="all")

      A    B    C      D  E  F  G
0  True    1    1  False  1  a  1
1  True  nan    2  False  2  b  2
2  True    2    3  False  3  c  3
3  True  nan  nan  False  4  d  4

filter_by_range(keys, low=1, high=None)[source]

Find all rows where the value count of the items in a given set of columns (keys) is within the range [low, high].

To filter by a specific value, set low == high.

Parameters:
  • keys (str or list of str) – The names of the columns to group by.

  • low (int, default=1) – The lowest value count.

  • high (int, default=None) – The highest value count, default to unlimited.

Returns:

An array of boolean values for qualified rows in this DataFrame.

Return type:

arkouda.pdarrayclass.pdarray

Example

>>> df = ak.DataFrame({'col1': [1, 2, 2, 2, 3, 3], 'col2': [4, 5, 6, 7, 8, 9]})
>>> display(df)

   col1  col2
0     1     4
1     2     5
2     2     6
3     2     7
4     3     8
5     3     9

>>> df.filter_by_range("col1", low=1, high=2)
array([True False False False True True])
>>> filtered_df = df[df.filter_by_range("col1", low=1, high=2)]
>>> display(filtered_df)

   col1  col2
0     1     4
1     3     8
2     3     9

classmethod from_pandas(pd_df)[source]

Copy the data from a pandas DataFrame into a new arkouda.dataframe.DataFrame.

Parameters:

pd_df (pandas.DataFrame) – A pandas DataFrame to convert.

Return type:

arkouda.dataframe.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import pandas as pd
>>> pd_df = pd.DataFrame({"A":[1,2],"B":[3,4]})
>>> type(pd_df)
pandas.core.frame.DataFrame
>>> display(pd_df)

   A  B
0  1  3
1  2  4

>>> ak_df = ak.DataFrame.from_pandas(pd_df)
>>> type(ak_df)
arkouda.dataframe.DataFrame
>>> display(ak_df)

   A  B
0  1  3
1  2  4

classmethod from_return_msg(rep_msg)[source]

Creates a DataFrame object from an arkouda server response message.

Parameters:

rep_msg (string) – Server response message used to create a DataFrame.

Return type:

arkouda.dataframe.DataFrame

groupby(keys, use_series=True, as_index=True, dropna=True)[source]

Group the dataframe by a column or a list of columns. Alias for GroupBy.

Parameters:
  • keys (str or list of str) – An (ordered) list of column names or a single string to group by.

  • use_series (bool, default=True) – If True, returns an arkouda.dataframe.GroupBy object. Otherwise an arkouda.groupbyclass.GroupBy object.

  • as_index (bool, default=True) – If True, groupby columns will be set as index otherwise, the groupby columns will be treated as DataFrame columns.

  • dropna (bool, default=True) – If True, and the groupby keys contain NaN values, the NaN values together with the corresponding row will be dropped. Otherwise, the rows corresponding to NaN values will be kept.

Returns:

If use_series = True, returns an arkouda.dataframe.GroupBy object. Otherwise returns an arkouda.groupbyclass.GroupBy object.

Return type:

arkouda.dataframe.GroupBy or arkouda.groupbyclass.GroupBy

See also

arkouda.GroupBy

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame({'col1': [1.0, 1.0, 2.0, np.nan], 'col2': [4, 5, 6, 7]})
>>> df

   col1  col2
0     1     4
1     1     5
2     2     6
3   nan     7

>>> df.GroupBy("col1")
<arkouda.groupbyclass.GroupBy at 0x7f2cf23e10c0>
>>> df.GroupBy("col1").size()
(array([1.00000000000000000 2.00000000000000000]), array([2 1]))
>>> df.GroupBy("col1",use_series=True)
col1
1.0    2
2.0    1
dtype: int64
>>> df.GroupBy("col1",use_series=True, as_index = False).size()

   col1  size
0     1     2
1     2     1

head(n=5)[source]

Return the first n rows.

This function returns the first n rows of the dataframe. It is useful for quickly verifying data, for example, after sorting or appending rows.

Parameters:

n (int, default = 5) – Number of rows to select.

Returns:

The first n rows of the DataFrame.

Return type:

arkouda.dataframe.DataFrame

See also

tail

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': ak.arange(10), 'col2': -1 * ak.arange(10)})
>>> display(df)

   col1  col2
0     0     0
1     1    -1
2     2    -2
3     3    -3
4     4    -4
5     5    -5
6     6    -6
7     7    -7
8     8    -8
9     9    -9

>>> df.head()

   col1  col2
0     0     0
1     1    -1
2     2    -2
3     3    -3
4     4    -4

>>> df.head(n=2)

   col1  col2
0     0     0
1     1    -1

is_registered() bool[source]

Return True if the object is contained in the registry.

Returns:

Indicates if the object is contained in the registry.

Return type:

bool

Raises:

RegistrationError – Raised if there’s a server-side error or a mismatch of registered components.

Notes

Objects registered with the server are immune to deletion until they are unregistered.

Example

>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df.register("my_table_name")
>>> df.attach("my_table_name")
>>> df.is_registered()
True
>>> df.unregister()
>>> df.is_registered()
False
isin(values: arkouda.pdarrayclass.pdarray | Dict | arkouda.series.Series | DataFrame) DataFrame[source]

Determine whether each element in the DataFrame is contained in values.

Parameters:

values (pdarray, dict, Series, or DataFrame) – The values to check for in DataFrame. Series can only have a single index.

Returns:

Arkouda DataFrame of booleans showing whether each element in the DataFrame is contained in values.

Return type:

arkouda.dataframe.DataFrame

See also

ak.Series.isin

Notes

  • Pandas supports values being an iterable type. In arkouda, we replace this with pdarray.

  • Pandas supports ~ operations. Currently, ak.DataFrame does not support this.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col_A': ak.array([7, 3]), 'col_B':ak.array([1, 9])})
>>> display(df)

   col_A  col_B
0      7      1
1      3      9

When values is a pdarray, check every value in the DataFrame to determine if it exists in values.

>>> df.isin(ak.array([0, 1]))

   col_A  col_B
0      0      1
1      0      0

When values is a dict, the values in the dict are passed to check the column indicated by the key.

>>> df.isin({'col_A': ak.array([0, 3])})

   col_A  col_B
0      0      0
1      1      0

When values is a Series, each column is checked positionally against the Series values. This means that for True to be returned, the indexes must be the same.

>>> i = ak.Index(ak.arange(2))
>>> s = ak.Series(data=[3, 9], index=i)
>>> df.isin(s)

   col_A  col_B
0      0      0
1      0      1

When values is a DataFrame, the index and column must match. Note that 9 is not found because the column name does not match.

>>> other_df = ak.DataFrame({'col_A':ak.array([7, 3]), 'col_C':ak.array([0, 9])})
>>> df.isin(other_df)

   col_A  col_B
0      1      0
1      1      0

isna() DataFrame[source]

Detect missing values.

Return a boolean same-sized object indicating if the values are NA. numpy.NaN values get mapped to True values. Everything else gets mapped to False values.

Returns:

Mask of bool values for each element in DataFrame that indicates whether an element is an NA value.

Return type:

arkouda.dataframe.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame({"A": [np.nan, 2, 2, 3], "B": [3, np.nan, 5, 6],
...          "C": [1, np.nan, 2, np.nan], "D":["a","b","c","d"]})
>>> display(df)

     A    B    C  D
0  nan    3    1  a
1    2  nan  nan  b
2    2    5    2  c
3    3    6  nan  d

>>> df.isna()
       A      B      C      D
0   True  False  False  False
1  False   True   True  False
2  False  False  False  False
3  False  False   True  False (4 rows x 4 columns)
classmethod load(prefix_path, file_format='INFER')[source]

Load a dataframe from file. The file_format argument is included for consistency with other load functions.

Parameters:
  • prefix_path (str) – The prefix path for the data.

  • file_format (string, default = "INFER")

Returns:

A dataframe loaded from the prefix_path.

Return type:

arkouda.dataframe.DataFrame

Examples

To store data in <my_dir>/my_data_LOCALE0000, use “<my_dir>/my_data” as the prefix.

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'hdf5_output','my_data')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A": ak.arange(5), "B": -1 * ak.arange(5)})
>>> df.save(my_path, file_type="distribute")
>>> df.load(my_path)

   A   B
0  0   0
1  1  -1
2  2  -2
3  3  -3
4  4  -4

memory_usage(index=True, unit='B') arkouda.series.Series[source]

Return the memory usage of each column in bytes.

The memory usage can optionally include the contribution of the index.

Parameters:
  • index (bool, default True) – Specifies whether to include the memory usage of the DataFrame’s index in returned Series. If index=True, the memory usage of the index is the first item in the output.

  • unit (str, default = "B") – Unit to return. One of {‘B’, ‘KB’, ‘MB’, ‘GB’}.

Returns:

A Series whose index is the original column names and whose values are the memory usage of each column in bytes.

Return type:

Series

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> dtypes = [ak.int64, ak.float64,  ak.bool]
>>> data = dict([(str(t), ak.ones(5000, dtype=ak.int64).astype(t)) for t in dtypes])
>>> df = ak.DataFrame(data)
>>> display(df.head())

   int64  float64  bool
0      1        1  True
1      1        1  True
2      1        1  True
3      1        1  True
4      1        1  True

>>> df.memory_usage()

Index      40000
int64      40000
float64    40000
bool        5000

>>> df.memory_usage(index=False)

int64      40000
float64    40000
bool        5000

>>> df.memory_usage(unit="KB")

Index      39.0625
int64      39.0625
float64    39.0625
bool       4.88281

To get the approximate total memory usage:

>>> df.memory_usage(index=True).sum()
125000

memory_usage_info(unit='GB')[source]

A formatted string representation of the size of this DataFrame.

Parameters:

unit (str, default = "GB") – Unit to return. One of {‘KB’, ‘MB’, ‘GB’}.

Returns:

A string representation of the number of bytes used by this DataFrame in [unit]s.

Return type:

str

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': ak.arange(1000), 'col2': ak.arange(1000)})
>>> df.memory_usage_info()
'0.00 GB'
>>> df.memory_usage_info(unit="KB")
'15 KB'
merge(right: DataFrame, on: str | List[str] | None = None, how: str = 'inner', left_suffix: str = '_x', right_suffix: str = '_y', convert_ints: bool = True, sort: bool = True) DataFrame[source]

Merge Arkouda DataFrames with a database-style join. The resulting dataframe contains rows from both DataFrames as specified by the merge condition (based on the “how” and “on” parameters).

Based on pandas merge functionality. https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html

Parameters:
  • right (DataFrame) – The Right DataFrame to be joined.

  • on (Optional[Union[str, List[str]]] = None) – The name or list of names of the DataFrame column(s) to join on. If on is None, this defaults to the intersection of the columns in both DataFrames.

  • how ({"inner", "left", "right"}, default = "inner") – The merge condition. Must be "inner", "left", or "right".

  • left_suffix (str, default = "_x") – A string indicating the suffix to add to columns from the left dataframe for overlapping column names in both left and right. Defaults to “_x”. Only used when how is “inner”.

  • right_suffix (str, default = "_y") – A string indicating the suffix to add to columns from the right dataframe for overlapping column names in both left and right. Defaults to “_y”. Only used when how is “inner”.

  • convert_ints (bool = True) – If True, convert columns with missing int values (due to the join) to float64. This is to match pandas. If False, do not convert the column dtypes. This has no effect when how = “inner”.

  • sort (bool = True) – If True, DataFrame is returned sorted by “on”. Otherwise, the DataFrame is not sorted.

Returns:

Joined Arkouda DataFrame.

Return type:

arkouda.dataframe.DataFrame

Note

Multiple column joins are only supported for integer columns.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> left_df = ak.DataFrame({'col1': ak.arange(5), 'col2': -1 * ak.arange(5)})
>>> display(left_df)

   col1  col2
0     0     0
1     1    -1
2     2    -2
3     3    -3
4     4    -4

>>> right_df = ak.DataFrame({'col1': 2 * ak.arange(5), 'col2': 2 * ak.arange(5)})
>>> display(right_df)

   col1  col2
0     0     0
1     2     2
2     4     4
3     6     6
4     8     8

>>> left_df.merge(right_df, on = "col1")

   col1  col2_x  col2_y
0     0       0       0
1     2      -2       2
2     4      -4       4

>>> left_df.merge(right_df, on = "col1", how = "left")

   col1  col2_y  col2_x
0     0       0       0
1     1     nan      -1
2     2       2      -2
3     3     nan      -3
4     4       4      -4

>>> left_df.merge(right_df, on = "col1", how = "right")

   col1  col2_x  col2_y
0     0       0       0
1     2      -2       2
2     4      -4       4
3     6     nan       6
4     8     nan       8

>>> left_df.merge(right_df, on = "col1", how = "outer")

   col1  col2_y  col2_x
0     0       0       0
1     1     nan      -1
2     2       2      -2
3     3     nan      -3
4     4       4      -4
5     6       6     nan
6     8       8     nan

notna() DataFrame[source]

Detect existing (non-missing) values.

Return a boolean same-sized object indicating if the values are not NA. numpy.NaN values get mapped to False values.

Returns:

Mask of bool values for each element in DataFrame that indicates whether an element is not an NA value.

Return type:

arkouda.dataframe.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame({"A": [np.nan, 2, 2, 3], "B": [3, np.nan, 5, 6],
...          "C": [1, np.nan, 2, np.nan], "D":["a","b","c","d"]})
>>> display(df)

     A    B    C  D
0  nan    3    1  a
1    2  nan  nan  b
2    2    5    2  c
3    3    6  nan  d

>>> df.notna()
       A      B      C     D
0  False   True   True  True
1   True  False  False  True
2   True   True   True  True
3   True   True  False  True (4 rows x 4 columns)
classmethod read_csv(filename: str, col_delim: str = ',')[source]

Read the columns of a CSV file into an Arkouda DataFrame. If the file contains the appropriately formatted header, typed data will be returned. Otherwise, all data will be returned as Strings objects.

Parameters:
  • filename (str) – Filename to read data from.

  • col_delim (str, default=",") – The delimiter for columns within the data.

Returns:

Arkouda DataFrame containing the columns from the CSV file.

Return type:

arkouda.dataframe.DataFrame

Raises:
  • ValueError – Raised if all datasets are not present in all parquet files or if one or more of the specified files do not exist.

  • RuntimeError – Raised if one or more of the specified files cannot be opened. If allow_errors is true this may be raised if no values are returned from the server.

  • TypeError – Raised if we receive an unknown arkouda_type returned from the server.

See also

to_csv

Notes

  • CSV format is not currently supported by load/load_all operations.

  • The column delimiter is expected to be the same for column names and data.

  • Be sure that column delimiters are not found within your data.

  • All CSV files must delimit rows using newline (”\n”) at this time.

  • Unlike other file formats, CSV files store Strings in their UTF-8 format instead of storing bytes as uint(8).

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'csv_output','my_data')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A":[1,2],"B":[3,4]})
>>> df.to_csv(my_path)
>>> df2 = ak.DataFrame.read_csv(my_path + "_LOCALE0000")
>>> display(df2)

   A  B
0  1  3
1  2  4

register(user_defined_name: str) DataFrame[source]

Register this DataFrame object and underlying components with the Arkouda server.

Parameters:

user_defined_name (str) – User defined name the DataFrame is to be registered under. This will be the root name for underlying components.

Returns:

The same DataFrame which is now registered with the arkouda server and has an updated name. This is an in-place modification, the original is returned to support a fluid programming style. Please note you cannot register two different DataFrames with the same name.

Return type:

arkouda.dataframe.DataFrame

Raises:
  • TypeError – Raised if user_defined_name is not a str.

  • RegistrationError – If the server was unable to register the DataFrame with the user_defined_name.

Notes

Objects registered with the server are immune to deletion until they are unregistered.

Any changes made to a DataFrame object after registering with the server may not be reflected in attached copies.

Example

>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df.register("my_table_name")
>>> df.attach("my_table_name")
>>> df.is_registered()
True
>>> df.unregister()
>>> df.is_registered()
False
rename(mapper: Callable | Dict | None = None, index: Callable | Dict | None = None, column: Callable | Dict | None = None, axis: str | int = 0, inplace: bool = False) DataFrame | None[source]

Rename indexes or columns according to a mapping.

Parameters:
  • mapper (callable or dict-like, Optional) – Function or dictionary mapping existing values to new values. Nonexistent names will not raise an error. Uses the value of axis to determine whether to rename columns or indexes.

  • column (callable or dict-like, Optional) – Function or dictionary mapping existing column names to new column names. Nonexistent names will not raise an error. When this is set, axis is ignored.

  • index (callable or dict-like, Optional) – Function or dictionary mapping existing index names to new index names. Nonexistent names will not raise an error. When this is set, axis is ignored.

  • axis (int or str, default=0) – Indicates which axis to perform the rename. 0/”index” - Indexes 1/”column” - Columns

  • inplace (bool, default=False) – When True, perform the operation on the calling object. When False, return a new object.

Returns:

DataFrame when inplace=False; None when inplace=True.

Return type:

arkouda.dataframe.DataFrame or None

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({"A": ak.array([1, 2, 3]), "B": ak.array([4, 5, 6])})
>>> display(df)

   A  B
0  1  4
1  2  5
2  3  6

Rename columns using a mapping:

>>> df.rename(column={'A':'a', 'B':'c'})

   a  c
0  1  4
1  2  5
2  3  6

Rename indexes using a mapping:

>>> df.rename(index={0:99, 2:11})

    A  B
99  1  4
1   2  5
11  3  6

Rename using an axis style parameter:

>>> df.rename(str.lower, axis='column')

   a  b
0  1  4
1  2  5
2  3  6
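
With inplace=True the calling object is modified and None is returned; a sketch continuing from the dataframe above:

>>> df.rename(column={'A': 'a'}, inplace=True)
>>> display(df)

   a  B
0  1  4
1  2  5
2  3  6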

reset_index(size: int | None = None, inplace: bool = False) None | DataFrame[source]

Set the index to an integer range.

Useful if this dataframe is the result of a slice operation from another dataframe, or if you have permuted the rows and no longer need to keep that ordering on the rows.

Parameters:
  • size (int, optional) – If size is passed, do not attempt to determine size based on existing column sizes. Assume caller handles consistency correctly.

  • inplace (bool, default=False) – When True, perform the operation on the calling object. When False, return a new object.

Returns:

DataFrame when inplace=False; None when inplace=True.

Return type:

arkouda.dataframe.DataFrame or None

Note

Pandas adds a column ‘index’ to indicate the original index. Arkouda does not currently support this behavior.

Example

>>> df = ak.DataFrame({"A": ak.array([1, 2, 3]), "B": ak.array([4, 5, 6])})
>>> display(df)

   A  B
0  1  4
1  2  5
2  3  6

>>> perm_df = df[ak.array([0,2,1])]
>>> display(perm_df)

   A  B
0  1  4
1  3  6
2  2  5

>>> perm_df.reset_index()

   A  B
0  1  4
1  3  6
2  2  5

sample(n=5)[source]

Return a random sample of n rows.

Parameters:

n (int, default=5) – Number of rows to return.

Returns:

The sampled n rows of the DataFrame.

Return type:

arkouda.dataframe.DataFrame

Example

>>> df = ak.DataFrame({"A": ak.arange(5), "B": -1 * ak.arange(5)})
>>> display(df)

   A   B
0  0   0
1  1  -1
2  2  -2
3  3  -3
4  4  -4

Random output of size 3:

>>> df.sample(n=3)

   A   B
0  0   0
1  1  -1
2  4  -4

save(path, index=False, columns=None, file_format='HDF5', file_type='distribute', compression: str | None = None)[source]

DEPRECATED: Save DataFrame to disk, preserving column names.

Parameters:
  • path (str) – File path to save data.

  • index (bool, default=False) – If True, save the index column. By default, do not save the index.

  • columns (list, default=None) – List of columns to include in the file. If None, writes out all columns.

  • file_format (str, default='HDF5') – ‘HDF5’ or ‘Parquet’. Defaults to ‘HDF5’

  • file_type (str, default=distribute) – "single" or "distribute". If single, will write a single file on locale 0.

  • compression (str (Optional)) – (None | “snappy” | “gzip” | “brotli” | “zstd” | “lz4”) Compression type. Only used for Parquet

Notes

This method saves one file per locale of the arkouda server. All files are prefixed by the path argument and suffixed by their locale number.

See also

to_parquet, to_hdf

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'hdf5_output')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A": ak.arange(5), "B": -1 * ak.arange(5)})
>>> df.save(my_path + '/my_data', file_type="single")
>>> df.load(my_path + '/my_data')

   A   B
0  0   0
1  1  -1
2  2  -2
3  3  -3
4  4  -4

sort_index(ascending=True)[source]

Sort the DataFrame by indexed columns.

Note: May fail to produce the correct sort order for arkouda.strings.Strings columns when multiple columns are being sorted.

Parameters:

ascending (bool, default = True) – Sort values in ascending (default) or descending order.

Example

>>> df = ak.DataFrame({'col1': [1.1, 3.1, 2.1], 'col2': [6, 5, 4]},
...          index = ak.Index(ak.array([2,0,1]), name="idx"))
>>> display(df)

idx  col1  col2
  2   1.1     6
  0   3.1     5
  1   2.1     4

>>> df.sort_index()

idx  col1  col2
  0   3.1     5
  1   2.1     4
  2   1.1     6
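
A descending sort is requested via ascending=False; a sketch on the same dataframe, with the expected output derived from the values above:

>>> df.sort_index(ascending=False)

idx  col1  col2
  2   1.1     6
  1   2.1     4
  0   3.1     5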

sort_values(by=None, ascending=True)[source]

Sort the DataFrame by one or more columns.

If no column is specified, all columns are used.

Note: May fail to produce the correct sort order for arkouda.strings.Strings columns when multiple columns are being sorted.

Parameters:
  • by (str or list/tuple of str, default = None) – The name(s) of the column(s) to sort by.

  • ascending (bool, default = True) – Sort values in ascending (default) or descending order.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [2, 2, 1], 'col2': [3, 4, 3], 'col3':[5, 6, 7]})
>>> display(df)

   col1  col2  col3
0     2     3     5
1     2     4     6
2     1     3     7

>>> df.sort_values()

   col1  col2  col3
0     1     3     7
1     2     3     5
2     2     4     6

>>> df.sort_values("col3")

   col1  col2  col3
0     2     3     5
1     2     4     6
2     1     3     7
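
Descending order is requested via ascending=False; a sketch, with the expected output derived from the values above:

>>> df.sort_values("col3", ascending=False)

   col1  col2  col3
0     1     3     7
1     2     4     6
2     2     3     5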

tail(n=5)[source]

Return the last n rows.

This function returns the last n rows of the dataframe. It is useful for quickly testing if your object has the right type of data in it.

Parameters:

n (int, default=5) – Number of rows to select.

Returns:

The last n rows of the DataFrame.

Return type:

arkouda.dataframe.DataFrame

See also

arkouda.dataframe.head

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': ak.arange(10), 'col2': -1 * ak.arange(10)})
>>> display(df)

   col1  col2
0     0     0
1     1    -1
2     2    -2
3     3    -3
4     4    -4
5     5    -5
6     6    -6
7     7    -7
8     8    -8
9     9    -9

>>> df.tail()

   col1  col2
0     5    -5
1     6    -6
2     7    -7
3     8    -8
4     9    -9

>>> df.tail(n=2)

   col1  col2
0     8    -8
1     9    -9

to_csv(path: str, index: bool = False, columns: List[str] | None = None, col_delim: str = ',', overwrite: bool = False)[source]

Writes DataFrame to CSV file(s). File will contain a column for each column in the DataFrame. All CSV Files written by Arkouda include a header denoting data types of the columns. Unlike other file formats, CSV files store Strings as their UTF-8 format instead of storing bytes as uint(8).

Parameters:
  • path (str) – The filename prefix to be used for saving files. Files will have _LOCALE#### appended when they are written to disk.

  • index (bool, default=False) – If True, the index of the DataFrame will be written to the file as a column.

  • columns (list of str (Optional)) – Column names to assign when writing data.

  • col_delim (str, default=",") – Value to be used to separate columns within the file. Please be sure that the value used DOES NOT appear in your dataset.

  • overwrite (bool, default=False) – If True, any existing files matching your provided prefix_path will be overwritten. If False, an error will be returned if existing files are found.

Return type:

None

Raises:
  • ValueError – Raised if all datasets are not present in all parquet files or if one or more of the specified files do not exist.

  • RuntimeError – Raised if one or more of the specified files cannot be opened. If allow_errors is true this may be raised if no values are returned from the server.

  • TypeError – Raised if we receive an unknown arkouda_type returned from the server.

Notes

  • CSV format is not currently supported by load/load_all operations.

  • The column delimiter is expected to be the same for column names and data.

  • Be sure that column delimiters are not found within your data.

  • All CSV files must delimit rows using newline (”\n”) at this time.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'csv_output')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A":[1,2],"B":[3,4]})
>>> df.to_csv(my_path + "/my_data")
>>> df2 = ak.DataFrame.read_csv(my_path + "/my_data" + "_LOCALE0000")
>>> display(df2)

   A  B
0  1  3
1  2  4

to_hdf(path, index=False, columns=None, file_type='distribute')[source]

Save DataFrame to disk as hdf5, preserving column names.

Parameters:
  • path (str) – File path to save data.

  • index (bool, default=False) – If True, save the index column. By default, do not save the index.

  • columns (List, default = None) – List of columns to include in the file. If None, writes out all columns.

  • file_type (str (single | distribute), default=distribute) – Whether to save to a single file or distribute across Locales.

Return type:

None

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray.

Notes

This method saves one file per locale of the arkouda server. All files are prefixed by the path argument and suffixed by their locale number.

See also

to_parquet, load

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'hdf_output')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A":[1,2],"B":[3,4]})
>>> df.to_hdf(my_path + "/my_data")
>>> df.load(my_path + "/my_data")

   A  B
0  1  3
1  2  4

to_markdown(mode='wt', index=True, tablefmt='grid', storage_options=None, **kwargs)[source]

Print DataFrame in Markdown-friendly format.

Parameters:
  • mode (str, optional) – Mode in which file is opened, “wt” by default.

  • index (bool, optional, default True) – Add index (row) labels.

  • tablefmt (str = "grid") – Table format to call from tabulate: https://pypi.org/project/tabulate/

  • storage_options (dict, optional) – Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc., if using a URL that will be parsed by fsspec, e.g., starting “s3://”, “gcs://”. An error will be raised if providing this argument with a non-fsspec URL. See the fsspec and backend storage implementation docs for the set of allowed keys and values.

  • **kwargs – These parameters will be passed to tabulate.

Note

This function should only be called on small DataFrames as it calls pandas.DataFrame.to_markdown: https://pandas.pydata.org/pandas-docs/version/1.2.4/reference/api/pandas.DataFrame.to_markdown.html

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({"animal_1": ["elk", "pig"], "animal_2": ["dog", "quetzal"]})
>>> print(df.to_markdown())
+----+------------+------------+
|    | animal_1   | animal_2   |
+====+============+============+
|  0 | elk        | dog        |
+----+------------+------------+
|  1 | pig        | quetzal    |
+----+------------+------------+

Suppress the index:

>>> print(df.to_markdown(index = False))
+------------+------------+
| animal_1   | animal_2   |
+============+============+
| elk        | dog        |
+------------+------------+
| pig        | quetzal    |
+------------+------------+
to_pandas(datalimit=maxTransferBytes, retain_index=False)[source]

Send this DataFrame to a pandas DataFrame.

Parameters:
  • datalimit (int, default=arkouda.client.maxTransferBytes) – The maximum size, in megabytes, to transfer. The requested DataFrame will be converted to a pandas DataFrame only if the estimated size of the DataFrame does not exceed this value.

  • retain_index (bool, default=False) – Normally, to_pandas() creates a new range index object. If you want to keep the index column, set this to True.

Returns:

The result of converting this DataFrame to a pandas DataFrame.

Return type:

pandas.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> ak_df = ak.DataFrame({"A": ak.arange(2), "B": -1 * ak.arange(2)})
>>> type(ak_df)
arkouda.dataframe.DataFrame
>>> display(ak_df)

   A   B
0  0   0
1  1  -1

>>> import pandas as pd
>>> pd_df = ak_df.to_pandas()
>>> type(pd_df)
pandas.core.frame.DataFrame
>>> display(pd_df)

   A   B
0  0   0
1  1  -1

to_parquet(path, index=False, columns=None, compression: str | None = None, convert_categoricals: bool = False)[source]

Save DataFrame to disk as parquet, preserving column names.

Parameters:
  • path (str) – File path to save data.

  • index (bool, default=False) – If True, save the index column. By default, do not save the index.

  • columns (list) – List of columns to include in the file. If None, writes out all columns.

  • compression (str (Optional), default=None) – Provide the compression type to use when writing the file. Supported values: snappy, gzip, brotli, zstd, lz4

  • convert_categoricals (bool, default=False) – Parquet requires all columns to be the same size and Categoricals don’t satisfy that requirement. If set, write the equivalent Strings in place of any Categorical columns.

Return type:

None

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray

Notes

This method saves one file per locale of the arkouda server. All files are prefixed by the path argument and suffixed by their locale number.

See also

to_hdf, load

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'parquet_output')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A":[1,2],"B":[3,4]})
>>> df.to_parquet(my_path + "/my_data")
>>> df.load(my_path + "/my_data")

   B  A
0  3  1
1  4  2

transfer(hostname, port)[source]

Sends a DataFrame to a different Arkouda server.

Parameters:
  • hostname (str) – The hostname where the Arkouda server intended to receive the DataFrame is running.

  • port (int_scalars) – The port to send the array over. This needs to be an open port (i.e., not one that the Arkouda server is running on). This will open up numLocales ports, used in succession, so ports in the range {port..(port+numLocales)} will be used (e.g., running an Arkouda server of 4 nodes with port 1234 passed as port, Arkouda will use ports 1234, 1235, 1236, and 1237 to send the array data). This port must match the port passed to the call to ak.receive_array().

Returns:

A message indicating a complete transfer.

Return type:

str

Raises:
  • ValueError – Raised if the op is not within the pdarray.BinOps set

  • TypeError – Raised if other is not a pdarray or the pdarray.dtype is not a supported dtype
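
Example

A sketch only, not runnable standalone: it assumes a second Arkouda server is reachable at the placeholder hostname and is ready to receive on the matching port (see the port description above):

>>> df = ak.DataFrame({'col1': ak.arange(3), 'col2': -1 * ak.arange(3)})
>>> df.transfer('n052', 1234)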

unregister()[source]

Unregister this DataFrame object in the arkouda server which was previously registered using register() and/or attached to using attach().

Raises:

RegistrationError – If the object is already unregistered or if there is a server error when attempting to unregister.

Notes

Objects registered with the server are immune to deletion until they are unregistered.

Example

>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df.register("my_table_name")
>>> df.attach("my_table_name")
>>> df.is_registered()
True
>>> df.unregister()
>>> df.is_registered()
False
static unregister_dataframe_by_name(user_defined_name: str) str[source]

Unregister a DataFrame object by name which was registered with the arkouda server via register().

Parameters:

user_defined_name (str) – Name under which the DataFrame object was registered.

Raises:
  • TypeError – If user_defined_name is not a string.

  • RegistrationError – If there is an issue attempting to unregister any underlying components.

Example

>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df.register("my_table_name")
>>> df.attach("my_table_name")
>>> df.is_registered()
True
>>> df.unregister_dataframe_by_name("my_table_name")
>>> df.is_registered()
False
update_hdf(prefix_path: str, index=False, columns=None, repack: bool = True)[source]

Overwrite the dataset with the provided name with the data from this dataframe. If the dataset does not exist, it is added.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share.

  • index (bool, default=False) – If True, save the index column. By default, do not save the index.

  • columns (List, default=None) – List of columns to include in the file. If None, writes out all columns.

  • repack (bool, default=True) – HDF5 does not release memory on delete. When True, the inaccessible data (that was overwritten) is removed. When False, the data remains, but is inaccessible. Setting to false will yield better performance, but will cause file sizes to expand.

Returns:

Success message if successful.

Return type:

str

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray.

Notes

If the file does not contain a File_Format attribute to indicate how it was saved, the file name is checked for _LOCALE#### to determine if it is distributed.

If the dataset provided does not exist, it will be added.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'hdf_output')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A":[1,2],"B":[3,4]})
>>> df.to_hdf(my_path + "/my_data")
>>> df.load(my_path + "/my_data")

   A  B
0  1  3
1  2  4

>>> df2 = ak.DataFrame({"A":[5,6],"B":[7,8]})
>>> df2.update_hdf(my_path + "/my_data")
>>> df.load(my_path + "/my_data")

   A  B
0  5  7
1  6  8

update_nrows()[source]

Computes the number of rows on the arkouda server and updates the size parameter.
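
Example

A minimal sketch; the shape property is used here only to confirm the row count after the update:

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': ak.arange(5)})
>>> df.update_nrows()
>>> df.shape
(5, 1)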

class arkouda.DataFrame(initialdata=None, index=None, columns=None)[source]

Bases: collections.UserDict

A DataFrame structure based on arkouda arrays.

Parameters:
  • initialdata (List or dictionary of lists, tuples, or pdarrays) – Each list/dictionary entry corresponds to one column of the data and should be a homogeneous type. Different columns may have different types. If using a dictionary, keys should be strings.

  • index (Index, pdarray, or Strings) – Index for the resulting frame. Defaults to an integer range.

  • columns (List, tuple, pdarray, or Strings) – Column labels to use if the data does not include them. Elements must be strings. Defaults to a stringified integer range.

Examples

Create an empty DataFrame and add a column of data:

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame()
>>> df['a'] = ak.array([1,2,3])
>>> display(df)

   a
0  1
1  2
2  3

Create a new DataFrame using a dictionary of data:

>>> userName = ak.array(['Alice', 'Bob', 'Alice', 'Carol', 'Bob', 'Alice'])
>>> userID = ak.array([111, 222, 111, 333, 222, 111])
>>> item = ak.array([0, 0, 1, 1, 2, 0])
>>> day = ak.array([5, 5, 6, 5, 6, 6])
>>> amount = ak.array([0.5, 0.6, 1.1, 1.2, 4.3, 0.6])
>>> df = ak.DataFrame({'userName': userName, 'userID': userID,
...                    'item': item, 'day': day, 'amount': amount})
>>> display(df)

  userName  userID  item  day  amount
0    Alice     111     0    5     0.5
1      Bob     222     0    5     0.6
2    Alice     111     1    6     1.1
3    Carol     333     1    5     1.2
4      Bob     222     2    6     4.3
5    Alice     111     0    6     0.6

Indexing works slightly differently than with pandas:

>>> df[0]

keys      values
userName   Alice
userID       111
item           0
day            5
amount       0.5

>>> df['userID']
array([111, 222, 111, 333, 222, 111])
>>> df['userName']
array(['Alice', 'Bob', 'Alice', 'Carol', 'Bob', 'Alice'])
>>> df[ak.array([1,3,5])]

  userName  userID  item  day  amount
0      Bob     222     0    5     0.6
1    Carol     333     1    5     1.2
2    Alice     111     0    6     0.6

Slice with a stride:

>>> df[1:5:1]

  userName  userID  item  day  amount
0      Bob     222     0    5     0.6
1    Alice     111     1    6     1.1
2    Carol     333     1    5     1.2
3      Bob     222     2    6     4.3

>>> df[ak.array([1,2,3])]

  userName  userID  item  day  amount
0      Bob     222     0    5     0.6
1    Alice     111     1    6     1.1
2    Carol     333     1    5     1.2

>>> df[['userID', 'day']]

   userID  day
0     111    5
1     222    5
2     111    6
3     333    5
4     222    6
5     111    6

property columns

An Index where the values are the column names of the dataframe.

Returns:

The values of the index are the column names of the dataframe.

Return type:

arkouda.index.Index

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> df

   col1  col2
0     1     3
1     2     4

>>> df.columns
Index(array(['col1', 'col2']), dtype='<U0')
property dtypes

The dtypes of the dataframe.

Returns:

dtypes – The dtypes of the dataframe.

Return type:

arkouda.row.Row

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': ["a", "b"]})
>>> df

   col1 col2
0     1    a
1     2    b

>>> df.dtypes

keys  values
col1   int64
col2     str

property empty

Whether the dataframe is empty.

Returns:

True if the dataframe is empty, otherwise False.

Return type:

bool

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({})
>>> df
 0 rows x 0 columns
>>> df.empty
True
property index

The index of the dataframe.

Returns:

The index of the dataframe.

Return type:

arkouda.index.Index or arkouda.index.MultiIndex

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> df

   col1  col2
0     1     3
1     2     4

>>> df.index
Index(array([0 1]), dtype='int64')
property info

Returns a summary string of this dataframe.

Returns:

A summary string of this dataframe.

Return type:

str

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': ["a", "b"]})
>>> df

   col1 col2
0     1    a
1     2    b

>>> df.info
"DataFrame(['col1', 'col2'], 2 rows, 20 B)"
property shape

The shape of the dataframe.

Returns:

Tuple of array dimensions.

Return type:

tuple of int

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df

   col1  col2
0     1     4
1     2     5
2     3     6

>>> df.shape
(3, 2)
property size

Returns the number of elements in the dataframe.

Returns:

The number of elements in the dataframe (number of rows times number of columns).

Return type:

int

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df

   col1  col2
0     1     4
1     2     5
2     3     6

>>> df.size
6
objType = 'DataFrame'
GroupBy(keys, use_series=False, as_index=True, dropna=True)[source]

Group the dataframe by a column or a list of columns.

Parameters:
  • keys (str or list of str) – An (ordered) list of column names or a single string to group by.

  • use_series (bool, default=False) – If True, returns an arkouda.dataframe.GroupBy object. Otherwise an arkouda.groupbyclass.GroupBy object.

  • as_index (bool, default=True) – If True, groupby columns will be set as index otherwise, the groupby columns will be treated as DataFrame columns.

  • dropna (bool, default=True) – If True, and the groupby keys contain NaN values, the NaN values together with the corresponding row will be dropped. Otherwise, the rows corresponding to NaN values will be kept.

Returns:

If use_series = True, returns an arkouda.dataframe.GroupBy object. Otherwise returns an arkouda.groupbyclass.GroupBy object.

Return type:

arkouda.dataframe.GroupBy or arkouda.groupbyclass.GroupBy

See also

arkouda.GroupBy

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame({'col1': [1.0, 1.0, 2.0, np.nan], 'col2': [4, 5, 6, 7]})
>>> df

   col1  col2
0     1     4
1     1     5
2     2     6
3   nan     7

>>> df.GroupBy("col1")
<arkouda.groupbyclass.GroupBy at 0x7f2cf23e10c0>
>>> df.GroupBy("col1").size()
(array([1.00000000000000000 2.00000000000000000]), array([2 1]))
>>> df.GroupBy("col1",use_series=True)
col1
1.0    2
2.0    1
dtype: int64
>>> df.GroupBy("col1",use_series=True, as_index = False).size()

   col1  size
0     1     2
1     2     1

all(axis=0) arkouda.series.Series | bool[source]

Return whether all elements are True, potentially over an axis.

Returns True unless there is at least one element along a DataFrame axis that is False.

Currently, will ignore any columns that are not type bool. This is equivalent to the pandas option bool_only=True.

Parameters:

axis ({0 or ‘index’, 1 or ‘columns’, None}, default = 0) –

Indicate which axis or axes should be reduced.

0 / ‘index’ : reduce the index, return a Series whose index is the original column labels.

1 / ‘columns’ : reduce the columns, return a Series whose index is the original index.

None : reduce all axes, return a scalar.

Return type:

arkouda.series.Series or bool

Raises:

ValueError – Raised if axis does not have a value in {0 or ‘index’, 1 or ‘columns’, None}.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({"A":[True,True,True,False],"B":[True,True,True,False],
...          "C":[True,False,True,False],"D":[True,True,True,True]})

>>> display(df)

       A      B      C     D
0   True   True   True  True
1   True   True  False  True
2   True   True   True  True
3  False  False  False  True

>>> df.all(axis=0)
A    False
B    False
C    False
D     True
dtype: bool
>>> df.all(axis=1)
0     True
1    False
2     True
3    False
dtype: bool
>>> df.all(axis=None)
False
any(axis=0) arkouda.series.Series | bool[source]

Return whether any element is True, potentially over an axis.

Returns False unless there is at least one element along a Dataframe axis that is True.

Currently, will ignore any columns that are not type bool. This is equivalent to the pandas option bool_only=True.

Parameters:

axis ({0 or ‘index’, 1 or ‘columns’, None}, default = 0) –

Indicate which axis or axes should be reduced.

0 / ‘index’ : reduce the index, return a Series whose index is the original column labels.

1 / ‘columns’ : reduce the columns, return a Series whose index is the original index.

None : reduce all axes, return a scalar.

Return type:

arkouda.series.Series or bool

Raises:

ValueError – Raised if axis does not have a value in {0 or ‘index’, 1 or ‘columns’, None}.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({"A":[True,True,True,False],"B":[True,True,True,False],
...          "C":[True,False,True,False],"D":[False,False,False,False]})

>>> display(df)

       A      B      C      D
0   True   True   True  False
1   True   True  False  False
2   True   True   True  False
3  False  False  False  False

>>> df.any(axis=0)
A     True
B     True
C     True
D    False
dtype: bool
>>> df.any(axis=1)
0     True
1     True
2     True
3    False
dtype: bool
>>> df.any(axis=None)
True
append(other, ordered=True)[source]

Concatenate data from ‘other’ onto the end of this DataFrame, in place.

Explicitly, use the arkouda concatenate function to append the data from each column in other to the end of self. This operation is done in place, in the sense that the underlying pdarrays are updated from the result of the arkouda concatenate function, rather than returning a new DataFrame object containing the result.

Parameters:
  • other (DataFrame) – The DataFrame object whose data will be appended to this DataFrame.

  • ordered (bool, default=True) – If False, allow rows to be interleaved for better performance (but data within a row remains together). By default, append all rows to the end, in input order.

Returns:

Appending occurs in-place, but result is returned for compatibility.

Return type:

self

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df1 = ak.DataFrame({'col1': [1, 2], 'col2': [3, 4]})

>>> display(df1)

   col1  col2
0     1     3
1     2     4

>>> df2 = ak.DataFrame({'col1': [3], 'col2': [5]})

>>> display(df2)

   col1  col2
0     3     5

>>> df1.append(df2)
>>> df1

   col1  col2
0     1     3
1     2     4
2     3     5

apply_permutation(perm)[source]

Apply a permutation to an entire DataFrame. The operation is done in place and the original DataFrame will be modified.

This may be useful if you want to unsort a DataFrame, or even to apply an arbitrary permutation such as the inverse of a sorting permutation.

Parameters:

perm (pdarray) – A permutation array. Should be the same size as the data arrays, and should consist of the integers [0,size-1] in some order. Very minimal testing is done to ensure this is a permutation.

Return type:

None

See also

sort

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})

>>> display(df)

   col1  col2
0     1     4
1     2     5
2     3     6

>>> perm_arry = ak.array([0, 2, 1])
>>> df.apply_permutation(perm_arry)
>>> display(df)

   col1  col2
0     1     4
1     3     6
2     2     5
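
Since ak.argsort of a permutation yields its inverse, applying it restores the original order; a sketch continuing from above:

>>> df.apply_permutation(ak.argsort(perm_arry))
>>> display(df)

   col1  col2
0     1     4
1     2     5
2     3     6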

argsort(key, ascending=True)[source]

Return the permutation that sorts the dataframe by key.

Parameters:
  • key (str) – The key to sort on.

  • ascending (bool, default = True) – If true, sort the key in ascending order. Otherwise, sort the key in descending order.

Returns:

The permutation array that sorts the data on key.

Return type:

arkouda.pdarrayclass.pdarray

See also

coargsort

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1.1, 3.1, 2.1], 'col2': [6, 5, 4]})
>>> display(df)

   col1  col2
0   1.1     6
1   3.1     5
2   2.1     4

>>> df.argsort('col1')
array([0 2 1])
>>> sorted_df1 = df[df.argsort('col1')]
>>> display(sorted_df1)

   col1  col2
0   1.1     6
1   2.1     4
2   3.1     5

>>> df.argsort('col2')
array([2 1 0])
>>> sorted_df2 = df[df.argsort('col2')]
>>> display(sorted_df2)

   col1  col2
0   2.1     4
1   3.1     5
2   1.1     6

static attach(user_defined_name: str) DataFrame[source]

Return a DataFrame object attached to a name previously registered with the arkouda server via register().

Parameters:

user_defined_name (str) – User defined name under which the DataFrame object was registered.

Returns:

The DataFrame object created by re-attaching to the corresponding server components.

Return type:

arkouda.dataframe.DataFrame

Raises:

RegistrationError – if user_defined_name is not registered

Example

>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df.register("my_table_name")
>>> df.attach("my_table_name")
>>> df.is_registered()
True
>>> df.unregister()
>>> df.is_registered()
False
coargsort(keys, ascending=True)[source]

Return the permutation that sorts the dataframe by keys.

Note: Sorting using Strings may not yield correct sort order.

Parameters:

keys (list of str) – The keys to sort on.

Returns:

The permutation array that sorts the data on keys.

Return type:

arkouda.pdarrayclass.pdarray

Example

>>> df = ak.DataFrame({'col1': [2, 2, 1], 'col2': [3, 4, 3], 'col3':[5, 6, 7]})
>>> display(df)

   col1  col2  col3
0     2     3     5
1     2     4     6
2     1     3     7

>>> df.coargsort(['col1', 'col2'])
array([2 0 1])
classmethod concat(items, ordered=True)[source]

Essentially an append, but with different formatting: concatenate a list of DataFrames into a single DataFrame.

copy(deep=True)[source]

Make a copy of this object’s data.

When deep = True (default), a new object will be created with a copy of the calling object’s data. Modifications to the data of the copy will not be reflected in the original object.

When deep = False a new object will be created without copying the calling object’s data. Any changes to the data of the original object will be reflected in the shallow copy, and vice versa.

Parameters:

deep (bool, default=True) – When True, return a deep copy. Otherwise, return a shallow copy.

Returns:

A deep or shallow copy according to caller specification.

Return type:

arkouda.dataframe.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> display(df)

   col1  col2
0     1     3
1     2     4

>>> df_deep = df.copy(deep=True)
>>> df_deep['col1'] +=1
>>> display(df)

   col1  col2
0     1     3
1     2     4

>>> df_shallow = df.copy(deep=False)
>>> df_shallow['col1'] +=1
>>> display(df)

   col1  col2
0     2     3
1     3     4

corr() DataFrame[source]

Return new DataFrame with pairwise correlation of columns.

Returns:

Arkouda DataFrame containing correlation matrix of all columns.

Return type:

arkouda.dataframe.DataFrame

Raises:

RuntimeError – Raised if there’s a server-side error thrown.

See also

pdarray.corr

Notes

Generates the correlation matrix using Pearson R for all columns.

Attempts to convert to numeric values where possible for inclusion in the matrix.

Example

>>> df = ak.DataFrame({'col1': [1, 2], 'col2': [-1, -2]})
>>> display(df)

   col1  col2
0     1    -1
1     2    -2

>>> corr = df.corr()
>>> display(corr)

      col1  col2
col1     1    -1
col2    -1     1

count(axis: int | str = 0, numeric_only=False) arkouda.series.Series[source]

Count non-NA cells for each column or row.

The values np.NaN are considered NA.

Parameters:
  • axis ({0 or 'index', 1 or 'columns'}, default 0) – If 0 or ‘index’ counts are generated for each column. If 1 or ‘columns’ counts are generated for each row.

  • numeric_only (bool = False) – Include only float, int or boolean data.

Returns:

For each column/row the number of non-NA/null entries.

Return type:

arkouda.series.Series

Raises:

ValueError – Raised if axis is not 0, 1, ‘index’, or ‘columns’.

See also

GroupBy.count

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame({'col_A': ak.array([7, np.nan]), 'col_B':ak.array([1, 9])})
>>> display(df)

   col_A  col_B
0      7      1
1    nan      9

>>> df.count()
col_A    1
col_B    2
dtype: int64
>>> df = ak.DataFrame({'col_A': ak.array(["a","b","c"]), 'col_B':ak.array([1, np.nan, np.nan])})
>>> display(df)

   col_A  col_B
0      a      1
1      b    nan
2      c    nan

>>> df.count()
col_A    3
col_B    1
dtype: int64
>>> df.count(numeric_only=True)
col_B    1
dtype: int64
>>> df.count(axis=1)
0    2
1    1
2    1
dtype: int64
drop(keys: str | int | List[str | int], axis: str | int = 0, inplace: bool = False) None | DataFrame[source]

Drop column/s or row/s from the dataframe.

Parameters:
  • keys (str, int or list) – The labels to be dropped on the given axis.

  • axis (int or str) – The axis on which to drop from. 0/’index’ - drop rows, 1/’columns’ - drop columns.

  • inplace (bool, default=False) – When True, perform the operation on the calling object. When False, return a new object.

Returns:

DataFrame when inplace=False; None when inplace=True.

Return type:

arkouda.dataframe.DataFrame or None

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> display(df)

   col1  col2
0     1     3
1     2     4

Drop column

>>> df.drop('col1', axis = 1)

   col2
0     3
1     4

Drop row

>>> df.drop(0, axis = 0)

   col1  col2
0     2     4

drop_duplicates(subset=None, keep='first')[source]

Drop duplicated rows and return the resulting DataFrame.

If a subset of the columns are provided then only one instance of each duplicated row will be returned (keep determines which row).

Parameters:
  • subset (Iterable) – Iterable of column names to use to dedupe.

  • keep ({'first', 'last'}, default='first') – Determines which duplicates (if any) to keep.

Returns:

DataFrame with duplicates removed.

Return type:

arkouda.dataframe.DataFrame

Example

>>> df = ak.DataFrame({'col1': [1, 2, 2, 3], 'col2': [4, 5, 5, 6]})
>>> display(df)

   col1  col2
0     1     4
1     2     5
2     2     5
3     3     6

>>> df.drop_duplicates()

   col1  col2
0     1     4
1     2     5
2     3     6
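
A hedged sketch of the subset and keep parameters, deduplicating on col1 only and keeping the last occurrence of each duplicate (output omitted):

>>> df.drop_duplicates(subset=['col1'], keep='last')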

dropna(axis: int | str = 0, how: str | None = None, thresh: int | None = None, ignore_index: bool = False) DataFrame[source]

Remove missing values.

Parameters:
  • axis ({0 or 'index', 1 or 'columns'}, default = 0) –

    Determine if rows or columns which contain missing values are removed.

    0, or ‘index’: Drop rows which contain missing values.

    1, or ‘columns’: Drop columns which contain missing value.

    Only a single axis is allowed.

  • how ({'any', 'all'}, default='any') –

    Determine whether a row or column is removed from the DataFrame when it has at least one NA or all NA values.

    ’any’: If any NA values are present, drop that row or column.

    ’all’: If all values are NA, drop that row or column.

  • thresh (int, optional) – Require that many non-NA values. Cannot be combined with how.

  • ignore_index (bool, default False) – If True, the resulting axis will be labeled 0, 1, …, n - 1.

Returns:

DataFrame with NA entries dropped from it.

Return type:

arkouda.dataframe.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame(
    {
        "A": [True, True, True, True],
        "B": [1, np.nan, 2, np.nan],
        "C": [1, 2, 3, np.nan],
        "D": [False, False, False, False],
        "E": [1, 2, 3, 4],
        "F": ["a", "b", "c", "d"],
        "G": [1, 2, 3, 4],
    }
   )
>>> display(df)

      A    B    C      D  E  F  G
0  True    1    1  False  1  a  1
1  True  nan    2  False  2  b  2
2  True    2    3  False  3  c  3
3  True  nan  nan  False  4  d  4

>>> df.dropna()

      A  B  C      D  E  F  G
0  True  1  1  False  1  a  1
1  True  2  3  False  3  c  3

>>> df.dropna(axis=1)

      A      D  E  F  G
0  True  False  1  a  1
1  True  False  2  b  2
2  True  False  3  c  3
3  True  False  4  d  4

>>> df.dropna(axis=1, thresh=3)

      A    C      D  E  F  G
0  True    1  False  1  a  1
1  True    2  False  2  b  2
2  True    3  False  3  c  3
3  True  nan  False  4  d  4

>>> df.dropna(axis=1, how="all")

      A    B    C      D  E  F  G
0  True    1    1  False  1  a  1
1  True  nan    2  False  2  b  2
2  True    2    3  False  3  c  3
3  True  nan  nan  False  4  d  4

filter_by_range(keys, low=1, high=None)[source]

Find all rows where the value count of the items in a given set of columns (keys) is within the range [low, high].

To filter by a specific value, set low == high.

Parameters:
  • keys (str or list of str) – The names of the columns to group by.

  • low (int, default=1) – The lowest value count.

  • high (int, default=None) – The highest value count; defaults to unlimited.

Returns:

An array of boolean values for qualified rows in this DataFrame.

Return type:

arkouda.pdarrayclass.pdarray

Example

>>> df = ak.DataFrame({'col1': [1, 2, 2, 2, 3, 3], 'col2': [4, 5, 6, 7, 8, 9]})
>>> display(df)

   col1  col2
0     1     4
1     2     5
2     2     6
3     2     7
4     3     8
5     3     9

>>> df.filter_by_range("col1", low=1, high=2)
array([True False False False True True])
>>> filtered_df = df[df.filter_by_range("col1", low=1, high=2)]
>>> display(filtered_df)

   col1  col2
0     1     4
1     3     8
2     3     9
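
Setting low == high filters by an exact value count; a hedged sketch using the df above, where only the value 3 occurs exactly twice:

>>> df.filter_by_range("col1", low=2, high=2)
array([False False False False True True])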

classmethod from_pandas(pd_df)[source]

Copy the data from a pandas DataFrame into a new arkouda.dataframe.DataFrame.

Parameters:

pd_df (pandas.DataFrame) – A pandas DataFrame to convert.

Return type:

arkouda.dataframe.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import pandas as pd
>>> pd_df = pd.DataFrame({"A":[1,2],"B":[3,4]})
>>> type(pd_df)
pandas.core.frame.DataFrame
>>> display(pd_df)

   A  B
0  1  3
1  2  4

>>> ak_df = DataFrame.from_pandas(pd_df)
>>> type(ak_df)
arkouda.dataframe.DataFrame
>>> display(ak_df)

   A  B
0  1  3
1  2  4

classmethod from_return_msg(rep_msg)[source]

Creates a DataFrame object from an arkouda server response message.

Parameters:

rep_msg (string) – Server response message used to create a DataFrame.

Return type:

arkouda.dataframe.DataFrame
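
Example

A hedged sketch only: rep_msg is produced by the Arkouda server, so this method is normally invoked internally by client code rather than by hand; the rep_msg below is a placeholder for a real server response string.

>>> df = DataFrame.from_return_msg(rep_msg)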

groupby(keys, use_series=True, as_index=True, dropna=True)[source]

Group the dataframe by a column or a list of columns. Alias for GroupBy.

Parameters:
  • keys (str or list of str) – An (ordered) list of column names or a single string to group by.

  • use_series (bool, default=True) – If True, returns an arkouda.dataframe.GroupBy object. Otherwise an arkouda.groupbyclass.GroupBy object.

  • as_index (bool, default=True) – If True, groupby columns will be set as index otherwise, the groupby columns will be treated as DataFrame columns.

  • dropna (bool, default=True) – If True, and the groupby keys contain NaN values, the NaN values together with the corresponding row will be dropped. Otherwise, the rows corresponding to NaN values will be kept.

Returns:

If use_series = True, returns an arkouda.dataframe.GroupBy object. Otherwise returns an arkouda.groupbyclass.GroupBy object.

Return type:

arkouda.dataframe.GroupBy or arkouda.groupbyclass.GroupBy

See also

arkouda.GroupBy

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame({'col1': [1.0, 1.0, 2.0, np.nan], 'col2': [4, 5, 6, 7]})
>>> df

   col1  col2
0     1     4
1     1     5
2     2     6
3   nan     7

>>> df.GroupBy("col1")
<arkouda.groupbyclass.GroupBy at 0x7f2cf23e10c0>
>>> df.GroupBy("col1").size()
(array([1.00000000000000000 2.00000000000000000]), array([2 1]))
>>> df.GroupBy("col1",use_series=True)
col1
1.0    2
2.0    1
dtype: int64
>>> df.GroupBy("col1",use_series=True, as_index = False).size()

col1

size

0

1

2

1

2

1
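
The dropna parameter controls whether rows whose group keys contain NaN are dropped; a hedged sketch keeping them (output omitted):

>>> df.GroupBy("col1", use_series=True, dropna=False).size()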

head(n=5)[source]

Return the first n rows.

This function returns the first n rows of the dataframe. It is useful for quickly verifying data, for example, after sorting or appending rows.

Parameters:

n (int, default = 5) – Number of rows to select.

Returns:

The first n rows of the DataFrame.

Return type:

arkouda.dataframe.DataFrame

See also

tail

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': ak.arange(10), 'col2': -1 * ak.arange(10)})
>>> display(df)

   col1  col2
0     0     0
1     1    -1
2     2    -2
3     3    -3
4     4    -4
5     5    -5
6     6    -6
7     7    -7
8     8    -8
9     9    -9

>>> df.head()

   col1  col2
0     0     0
1     1    -1
2     2    -2
3     3    -3
4     4    -4

>>> df.head(n=2)

   col1  col2
0     0     0
1     1    -1

is_registered() bool[source]

Return True if the object is contained in the registry.

Returns:

Indicates if the object is contained in the registry.

Return type:

bool

Raises:

RegistrationError – Raised if there’s a server-side error or a mismatch of registered components.

Notes

Objects registered with the server are immune to deletion until they are unregistered.

Example

>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df.register("my_table_name")
>>> df.attach("my_table_name")
>>> df.is_registered()
True
>>> df.unregister()
>>> df.is_registered()
False
isin(values: arkouda.pdarrayclass.pdarray | Dict | arkouda.series.Series | DataFrame) DataFrame[source]

Determine whether each element in the DataFrame is contained in values.

Parameters:

values (pdarray, dict, Series, or DataFrame) – The values to check for in DataFrame. Series can only have a single index.

Returns:

Arkouda DataFrame of booleans showing whether each element in the DataFrame is contained in values.

Return type:

arkouda.dataframe.DataFrame

See also

ak.Series.isin

Notes

  • Pandas supports values being an iterable type. In arkouda, we replace this with pdarray.

  • Pandas supports ~ operations. Currently, ak.DataFrame does not support this.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col_A': ak.array([7, 3]), 'col_B':ak.array([1, 9])})
>>> display(df)

   col_A  col_B
0      7      1
1      3      9

When values is a pdarray, check every value in the DataFrame to determine if it exists in values.

>>> df.isin(ak.array([0, 1]))

   col_A  col_B
0      0      1
1      0      0

When values is a dict, the values in the dict are passed to check the column indicated by the key.

>>> df.isin({'col_A': ak.array([0, 3])})

   col_A  col_B
0      0      0
1      1      0

When values is a Series, each column is checked positionally against values. This means that for True to be returned, the indexes must be the same.

>>> i = ak.Index(ak.arange(2))
>>> s = ak.Series(data=[3, 9], index=i)
>>> df.isin(s)

   col_A  col_B
0      0      0
1      0      1

When values is a DataFrame, the index and column must match. Note that 9 is not found because the column name does not match.

>>> other_df = ak.DataFrame({'col_A':ak.array([7, 3]), 'col_C':ak.array([0, 9])})
>>> df.isin(other_df)

   col_A  col_B
0      1      0
1      1      0

isna() DataFrame[source]

Detect missing values.

Return a boolean same-sized object indicating if the values are NA. numpy.NaN values get mapped to True values. Everything else gets mapped to False values.

Returns:

Mask of bool values for each element in DataFrame that indicates whether an element is an NA value.

Return type:

arkouda.dataframe.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame({"A": [np.nan, 2, 2, 3], "B": [3, np.nan, 5, 6],
...          "C": [1, np.nan, 2, np.nan], "D":["a","b","c","d"]})
>>> display(df)

     A    B    C  D
0  nan    3    1  a
1    2  nan  nan  b
2    2    5    2  c
3    3    6  nan  d

>>> df.isna()
       A      B      C      D
0   True  False  False  False
1  False   True   True  False
2  False  False  False  False
3  False  False   True  False (4 rows x 4 columns)

classmethod load(prefix_path, file_format='INFER')[source]

Load a dataframe from a file. The file_format parameter exists for consistency with other load functions.

Parameters:
  • prefix_path (str) – The prefix path for the data.

  • file_format (string, default = "INFER")

Returns:

A dataframe loaded from the prefix_path.

Return type:

arkouda.dataframe.DataFrame

Examples

To store data in <my_dir>/my_data_LOCALE0000, use “<my_dir>/my_data” as the prefix.

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'hdf5_output','my_data')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A": ak.arange(5), "B": -1 * ak.arange(5)})
>>> df.save(my_path, file_type="distribute")
>>> df.load(my_path)

   A   B
0  0   0
1  1  -1
2  2  -2
3  3  -3
4  4  -4

memory_usage(index=True, unit='B') arkouda.series.Series[source]

Return the memory usage of each column in bytes.

The memory usage can optionally include the contribution of the index.

Parameters:
  • index (bool, default True) – Specifies whether to include the memory usage of the DataFrame’s index in returned Series. If index=True, the memory usage of the index is the first item in the output.

  • unit (str, default = "B") – Unit to return. One of {‘B’, ‘KB’, ‘MB’, ‘GB’}.

Returns:

A Series whose index is the original column names and whose values are the memory usage of each column in bytes.

Return type:

Series

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> dtypes = [ak.int64, ak.float64,  ak.bool]
>>> data = dict([(str(t), ak.ones(5000, dtype=ak.int64).astype(t)) for t in dtypes])
>>> df = ak.DataFrame(data)
>>> display(df.head())

   int64  float64  bool
0      1        1  True
1      1        1  True
2      1        1  True
3      1        1  True
4      1        1  True

>>> df.memory_usage()

             0
Index    40000
int64    40000
float64  40000
bool      5000

>>> df.memory_usage(index=False)

             0
int64    40000
float64  40000
bool      5000

>>> df.memory_usage(unit="KB")

               0
Index    39.0625
int64    39.0625
float64  39.0625
bool     4.88281

To get the approximate total memory usage:

>>> df.memory_usage(index=True).sum()

memory_usage_info(unit='GB')[source]

A formatted string representation of the size of this DataFrame.

Parameters:

unit (str, default = "GB") – Unit to return. One of {‘KB’, ‘MB’, ‘GB’}.

Returns:

A string representation of the number of bytes used by this DataFrame in [unit]s.

Return type:

str

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': ak.arange(1000), 'col2': ak.arange(1000)})
>>> df.memory_usage_info()
'0.00 GB'
>>> df.memory_usage_info(unit="KB")
'15 KB'
merge(right: DataFrame, on: str | List[str] | None = None, how: str = 'inner', left_suffix: str = '_x', right_suffix: str = '_y', convert_ints: bool = True, sort: bool = True) DataFrame[source]

Merge Arkouda DataFrames with a database-style join. The resulting dataframe contains rows from both DataFrames as specified by the merge condition (based on the “how” and “on” parameters).

Based on pandas merge functionality. https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html

Parameters:
  • right (DataFrame) – The Right DataFrame to be joined.

  • on (Optional[Union[str, List[str]]] = None) – The name or list of names of the DataFrame column(s) to join on. If on is None, this defaults to the intersection of the columns in both DataFrames.

  • how ({"inner", "left", "right"}, default = "inner") – The merge condition. Must be “inner”, “left”, or “right”.

  • left_suffix (str, default = "_x") – A string indicating the suffix to add to columns from the left dataframe for overlapping column names in both left and right. Defaults to “_x”. Only used when how is “inner”.

  • right_suffix (str, default = "_y") – A string indicating the suffix to add to columns from the right dataframe for overlapping column names in both left and right. Defaults to “_y”. Only used when how is “inner”.

  • convert_ints (bool = True) – If True, convert columns with missing int values (due to the join) to float64. This is to match pandas. If False, do not convert the column dtypes. This has no effect when how = “inner”.

  • sort (bool = True) – If True, DataFrame is returned sorted by “on”. Otherwise, the DataFrame is not sorted.

Returns:

Joined Arkouda DataFrame.

Return type:

arkouda.dataframe.DataFrame

Note

Multiple column joins are only supported for integer columns.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> left_df = ak.DataFrame({'col1': ak.arange(5), 'col2': -1 * ak.arange(5)})
>>> display(left_df)

   col1  col2
0     0     0
1     1    -1
2     2    -2
3     3    -3
4     4    -4

>>> right_df = ak.DataFrame({'col1': 2 * ak.arange(5), 'col2': 2 * ak.arange(5)})
>>> display(right_df)

   col1  col2
0     0     0
1     2     2
2     4     4
3     6     6
4     8     8

>>> left_df.merge(right_df, on="col1")

   col1  col2_x  col2_y
0     0       0       0
1     2      -2       2
2     4      -4       4

>>> left_df.merge(right_df, on="col1", how="left")

   col1  col2_y  col2_x
0     0       0       0
1     1     nan      -1
2     2       2      -2
3     3     nan      -3
4     4       4      -4

>>> left_df.merge(right_df, on="col1", how="right")

   col1  col2_x  col2_y
0     0       0       0
1     2      -2       2
2     4      -4       4
3     6     nan       6
4     8     nan       8

>>> left_df.merge(right_df, on="col1", how="outer")

   col1  col2_y  col2_x
0     0       0       0
1     1     nan      -1
2     2       2      -2
3     3     nan      -3
4     4       4      -4
5     6       6     nan
6     8       8     nan

notna() DataFrame[source]

Detect existing (non-missing) values.

Return a boolean same-sized object indicating if the values are not NA. numpy.NaN values get mapped to False values.

Returns:

Mask of bool values for each element in DataFrame that indicates whether an element is not an NA value.

Return type:

arkouda.dataframe.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame({"A": [np.nan, 2, 2, 3], "B": [3, np.nan, 5, 6],
...          "C": [1, np.nan, 2, np.nan], "D":["a","b","c","d"]})
>>> display(df)

     A    B    C  D
0  nan    3    1  a
1    2  nan  nan  b
2    2    5    2  c
3    3    6  nan  d

>>> df.notna()
       A      B      C     D
0  False   True   True  True
1   True  False  False  True
2   True   True   True  True
3   True   True  False  True (4 rows x 4 columns)

classmethod read_csv(filename: str, col_delim: str = ',')[source]

Read the columns of a CSV file into an Arkouda DataFrame. If the file contains the appropriately formatted header, typed data will be returned. Otherwise, all data will be returned as Strings objects.

Parameters:
  • filename (str) – Filename to read data from.

  • col_delim (str, default=",") – The delimiter for columns within the data.

Returns:

Arkouda DataFrame containing the columns from the CSV file.

Return type:

arkouda.dataframe.DataFrame

Raises:
  • ValueError – Raised if all datasets are not present in all parquet files or if one or more of the specified files do not exist.

  • RuntimeError – Raised if one or more of the specified files cannot be opened. If allow_errors is true this may be raised if no values are returned from the server.

  • TypeError – Raised if we receive an unknown arkouda_type returned from the server.

See also

to_csv

Notes

  • CSV format is not currently supported by load/load_all operations.

  • The column delimiter is expected to be the same for column names and data.

  • Be sure that column delimiters are not found within your data.

  • All CSV files must delimit rows using newline ("\n") at this time.

  • Unlike other file formats, CSV files store Strings as their UTF-8 format instead of storing bytes as uint(8).

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'csv_output','my_data')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A":[1,2],"B":[3,4]})
>>> df.to_csv(my_path)
>>> df2 = DataFrame.read_csv(my_path + "_LOCALE0000")
>>> display(df2)

   A  B
0  1  3
1  2  4

register(user_defined_name: str) DataFrame[source]

Register this DataFrame object and underlying components with the Arkouda server.

Parameters:

user_defined_name (str) – User defined name the DataFrame is to be registered under. This will be the root name for underlying components.

Returns:

The same DataFrame which is now registered with the arkouda server and has an updated name. This is an in-place modification; the original is returned to support a fluid programming style. Please note you cannot register two different DataFrames with the same name.

Return type:

arkouda.dataframe.DataFrame

Raises:
  • TypeError – Raised if user_defined_name is not a str.

  • RegistrationError – If the server was unable to register the DataFrame with the user_defined_name.

Notes

Objects registered with the server are immune to deletion until they are unregistered.

Any changes made to a DataFrame object after registering with the server may not be reflected in attached copies.

Example

>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df.register("my_table_name")
>>> df.attach("my_table_name")
>>> df.is_registered()
True
>>> df.unregister()
>>> df.is_registered()
False
rename(mapper: Callable | Dict | None = None, index: Callable | Dict | None = None, column: Callable | Dict | None = None, axis: str | int = 0, inplace: bool = False) DataFrame | None[source]

Rename indexes or columns according to a mapping.

Parameters:
  • mapper (callable or dict-like, Optional) – Function or dictionary mapping existing values to new values. Nonexistent names will not raise an error. Uses the value of axis to determine whether to rename columns or the index.

  • column (callable or dict-like, Optional) – Function or dictionary mapping existing column names to new column names. Nonexistent names will not raise an error. When this is set, axis is ignored.

  • index (callable or dict-like, Optional) – Function or dictionary mapping existing index names to new index names. Nonexistent names will not raise an error. When this is set, axis is ignored.

  • axis (int or str, default=0) – Indicates which axis to perform the rename on: 0/“index” for the index, 1/“column” for columns.

  • inplace (bool, default=False) – When True, perform the operation on the calling object. When False, return a new object.

Returns:

DateFrame when inplace=False; None when inplace=True.

Return type:

arkouda.dataframe.DataFrame or None

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({"A": ak.array([1, 2, 3]), "B": ak.array([4, 5, 6])})
>>> display(df)

   A  B
0  1  4
1  2  5
2  3  6

Rename columns using a mapping:

>>> df.rename(column={'A':'a', 'B':'c'})

   a  c
0  1  4
1  2  5
2  3  6

Rename indexes using a mapping:

>>> df.rename(index={0:99, 2:11})

    A  B
99  1  4
1   2  5
11  3  6

Rename using an axis style parameter:

>>> df.rename(str.lower, axis='column')

   a  b
0  1  4
1  2  5
2  3  6
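
With inplace=True, rename modifies the calling DataFrame and returns None; a hedged sketch:

>>> df.rename(column={'A':'a', 'B':'c'}, inplace=True)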

reset_index(size: int | None = None, inplace: bool = False) None | DataFrame[source]

Set the index to an integer range.

Useful if this dataframe is the result of a slice operation from another dataframe, or if you have permuted the rows and no longer need to keep that ordering on the rows.

Parameters:
  • size (int, optional) – If size is passed, do not attempt to determine size based on existing column sizes. Assume caller handles consistency correctly.

  • inplace (bool, default=False) – When True, perform the operation on the calling object. When False, return a new object.

Returns:

DataFrame when inplace=False; None when inplace=True.

Return type:

arkouda.dataframe.DataFrame or None

Note

Pandas adds a column ‘index’ to indicate the original index. Arkouda does not currently support this behavior.

Example

>>> df = ak.DataFrame({"A": ak.array([1, 2, 3]), "B": ak.array([4, 5, 6])})
>>> display(df)

   A  B
0  1  4
1  2  5
2  3  6

>>> perm_df = df[ak.array([0,2,1])]
>>> display(perm_df)

   A  B
0  1  4
2  3  6
1  2  5

>>> perm_df.reset_index()

   A  B
0  1  4
1  3  6
2  2  5

sample(n=5)[source]

Return a random sample of n rows.

Parameters:

n (int, default=5) – Number of rows to return.

Returns:

The sampled n rows of the DataFrame.

Return type:

arkouda.dataframe.DataFrame

Example

>>> df = ak.DataFrame({"A": ak.arange(5), "B": -1 * ak.arange(5)})
>>> display(df)

   A   B
0  0   0
1  1  -1
2  2  -2
3  3  -3
4  4  -4

Random output of size 3:

>>> df.sample(n=3)

   A   B
0  0   0
1  1  -1
2  4  -4

save(path, index=False, columns=None, file_format='HDF5', file_type='distribute', compression: str | None = None)[source]

DEPRECATED: Save DataFrame to disk, preserving column names. Use to_parquet or to_hdf instead.

Parameters:
  • path (str) – File path to save data.

  • index (bool, default=False) – If True, save the index column. By default, do not save the index.

  • columns (list, default=None) – List of columns to include in the file. If None, writes out all columns.

  • file_format (str, default='HDF5') – ‘HDF5’ or ‘Parquet’. Defaults to ‘HDF5’

  • file_type (str, default=distribute) – “single” or “distribute”. If single, will write a single file to locale 0.

  • compression (str (Optional)) – (None | “snappy” | “gzip” | “brotli” | “zstd” | “lz4”) Compression type. Only used for Parquet

Notes

This method saves one file per locale of the arkouda server. All files are prefixed by the path argument and suffixed by their locale number.

See also

to_parquet, to_hdf

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'hdf5_output')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A": ak.arange(5), "B": -1 * ak.arange(5)})
>>> df.save(my_path + '/my_data', file_type="single")
>>> df.load(my_path + '/my_data')

   A   B
0  0   0
1  1  -1
2  2  -2
3  3  -3
4  4  -4
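
Since save is deprecated, an equivalent non-deprecated write (a sketch; see to_hdf below) would be:

>>> df.to_hdf(my_path + '/my_data', file_type="single")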

sort_index(ascending=True)[source]

Sort the DataFrame by indexed columns.

Note: Fails on the sort order of arkouda.strings.Strings columns when multiple columns are being sorted.

Parameters:

ascending (bool, default = True) – Sort values in ascending (default) or descending order.

Example

>>> df = ak.DataFrame({'col1': [1.1, 3.1, 2.1], 'col2': [6, 5, 4]},
...          index = ak.Index(ak.array([2,0,1]), name="idx"))
>>> display(df)

idx  col1  col2
  2   1.1     6
  0   3.1     5
  1   2.1     4

>>> df.sort_index()

idx  col1  col2
  0   3.1     5
  1   2.1     4
  2   1.1     6

sort_values(by=None, ascending=True)[source]

Sort the DataFrame by one or more columns.

If no column is specified, all columns are used.

Note: Fails on the order of arkouda.strings.Strings columns when multiple columns are being sorted.

Parameters:
  • by (str or list/tuple of str, default = None) – The name(s) of the column(s) to sort by.

  • ascending (bool, default = True) – Sort values in ascending (default) or descending order.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [2, 2, 1], 'col2': [3, 4, 3], 'col3':[5, 6, 7]})
>>> display(df)

   col1  col2  col3
0     2     3     5
1     2     4     6
2     1     3     7

>>> df.sort_values()

   col1  col2  col3
0     1     3     7
1     2     3     5
2     2     4     6

>>> df.sort_values("col3")

col1

col2

col3

0

1

3

7

1

2

3

5

2

2

4

6

tail(n=5)[source]

Return the last n rows.

This function returns the last n rows of the dataframe. It is useful for quickly testing if your object has the right type of data in it.

Parameters:

n (int, default=5) – Number of rows to select.

Returns:

The last n rows of the DataFrame.

Return type:

arkouda.dataframe.DataFrame

See also

arkouda.dataframe.head

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': ak.arange(10), 'col2': -1 * ak.arange(10)})
>>> display(df)

   col1  col2
0     0     0
1     1    -1
2     2    -2
3     3    -3
4     4    -4
5     5    -5
6     6    -6
7     7    -7
8     8    -8
9     9    -9

>>> df.tail()

   col1  col2
0     5    -5
1     6    -6
2     7    -7
3     8    -8
4     9    -9

>>> df.tail(n=2)

   col1  col2
0     8    -8
1     9    -9

to_csv(path: str, index: bool = False, columns: List[str] | None = None, col_delim: str = ',', overwrite: bool = False)[source]

Writes DataFrame to CSV file(s). File will contain a column for each column in the DataFrame. All CSV Files written by Arkouda include a header denoting data types of the columns. Unlike other file formats, CSV files store Strings as their UTF-8 format instead of storing bytes as uint(8).

Parameters:
  • path (str) – The filename prefix to be used for saving files. Files will have _LOCALE#### appended when they are written to disk.

  • index (bool, default=False) – If True, the index of the DataFrame will be written to the file as a column.

  • columns (list of str (Optional)) – Column names to assign when writing data.

  • col_delim (str, default=",") – Value to be used to separate columns within the file. Please be sure that the value used DOES NOT appear in your dataset.

  • overwrite (bool, default=False) – If True, any existing files matching your provided prefix_path will be overwritten. If False, an error will be returned if existing files are found.

Return type:

None

Raises:
  • ValueError – Raised if all datasets are not present in all parquet files or if one or more of the specified files do not exist.

  • RuntimeError – Raised if one or more of the specified files cannot be opened. If allow_errors is true this may be raised if no values are returned from the server.

  • TypeError – Raised if we receive an unknown arkouda_type returned from the server.

Notes

  • CSV format is not currently supported by load/load_all operations.

  • The column delimiter is expected to be the same for column names and data.

  • Be sure that column delimiters are not found within your data.

  • All CSV files must delimit rows using newline ("\n") at this time.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'csv_output')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A":[1,2],"B":[3,4]})
>>> df.to_csv(my_path + "/my_data")
>>> df2 = DataFrame.read_csv(my_path + "/my_data" + "_LOCALE0000")
>>> display(df2)

   A  B
0  1  3
1  2  4

to_hdf(path, index=False, columns=None, file_type='distribute')[source]

Save DataFrame to disk as hdf5, preserving column names.

Parameters:
  • path (str) – File path to save data.

  • index (bool, default=False) – If True, save the index column. By default, do not save the index.

  • columns (List, default = None) – List of columns to include in the file. If None, writes out all columns.

  • file_type (str (single | distribute), default=distribute) – Whether to save to a single file or distribute across Locales.

Return type:

None

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray.

Notes

This method saves one file per locale of the arkouda server. All files are prefixed by the path argument and suffixed by their locale number.

See also

to_parquet, load

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'hdf_output')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A":[1,2],"B":[3,4]})
>>> df.to_hdf(my_path + "/my_data")
>>> df.load(my_path + "/my_data")

   A  B
0  1  3
1  2  4

to_markdown(mode='wt', index=True, tablefmt='grid', storage_options=None, **kwargs)[source]

Print DataFrame in Markdown-friendly format.

Parameters:
  • mode (str, optional) – Mode in which file is opened, “wt” by default.

  • index (bool, optional, default True) – Add index (row) labels.

  • tablefmt (str = "grid") – Table format to call from tabulate: https://pypi.org/project/tabulate/

  • storage_options (dict, optional) – Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc., if using a URL that will be parsed by fsspec, e.g., starting “s3://”, “gcs://”. An error will be raised if providing this argument with a non-fsspec URL. See the fsspec and backend storage implementation docs for the set of allowed keys and values.

  • **kwargs – These parameters will be passed to tabulate.

Note

This function should only be called on small DataFrames as it calls pandas.DataFrame.to_markdown: https://pandas.pydata.org/pandas-docs/version/1.2.4/reference/api/pandas.DataFrame.to_markdown.html

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({"animal_1": ["elk", "pig"], "animal_2": ["dog", "quetzal"]})
>>> print(df.to_markdown())
+----+------------+------------+
|    | animal_1   | animal_2   |
+====+============+============+
|  0 | elk        | dog        |
+----+------------+------------+
|  1 | pig        | quetzal    |
+----+------------+------------+

Suppress the index:

>>> print(df.to_markdown(index = False))
+------------+------------+
| animal_1   | animal_2   |
+============+============+
| elk        | dog        |
+------------+------------+
| pig        | quetzal    |
+------------+------------+
to_pandas(datalimit=maxTransferBytes, retain_index=False)[source]

Send this DataFrame to a pandas DataFrame.

Parameters:
  • datalimit (int, default=arkouda.client.maxTransferBytes) – The maximum size, in megabytes, to transfer. The requested DataFrame will be converted to a pandas DataFrame only if the estimated size of the DataFrame does not exceed this value.

  • retain_index (bool, default=False) – Normally, to_pandas() creates a new range index object. If you want to keep the index column, set this to True.

Returns:

The result of converting this DataFrame to a pandas DataFrame.

Return type:

pandas.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> ak_df = ak.DataFrame({"A": ak.arange(2), "B": -1 * ak.arange(2)})
>>> type(ak_df)
arkouda.dataframe.DataFrame
>>> display(ak_df)

   A   B
0  0   0
1  1  -1

>>> import pandas as pd
>>> pd_df = ak_df.to_pandas()
>>> type(pd_df)
pandas.core.frame.DataFrame
>>> display(pd_df)

   A   B
0  0   0
1  1  -1

to_parquet(path, index=False, columns=None, compression: str | None = None, convert_categoricals: bool = False)[source]

Save DataFrame to disk as parquet, preserving column names.

Parameters:
  • path (str) – File path to save data.

  • index (bool, default=False) – If True, save the index column. By default, do not save the index.

  • columns (list) – List of columns to include in the file. If None, writes out all columns.

  • compression (str (Optional), default=None) – Provide the compression type to use when writing the file. Supported values: snappy, gzip, brotli, zstd, lz4

  • convert_categoricals (bool, default=False) – Parquet requires all columns to be the same size and Categoricals don’t satisfy that requirement. If set, write the equivalent Strings in place of any Categorical columns.

Return type:

None

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray

Notes

This method saves one file per locale of the arkouda server. All files are prefixed by the path argument and suffixed by their locale number.

See also

to_hdf, load

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'parquet_output')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A":[1,2],"B":[3,4]})
>>> df.to_parquet(my_path + "/my_data")
>>> df.load(my_path + "/my_data")

   B  A
0  3  1
1  4  2

transfer(hostname, port)[source]

Sends a DataFrame to a different Arkouda server.

Parameters:
  • hostname (str) – The hostname where the Arkouda server intended to receive the DataFrame is running.

  • port (int_scalars) – The port to send the array over. This needs to be an open port (i.e., not one that the Arkouda server is running on). The transfer will open numLocales ports in succession, using the range {port..(port+numLocales)} (e.g., for an Arkouda server of 4 nodes with port 1234 passed, Arkouda will use ports 1234, 1235, 1236, and 1237 to send the array data). This port must match the port passed to the call to ak.receive_array().

Returns:

A message indicating a complete transfer.

Return type:

str

Raises:
  • ValueError – Raised if the op is not within the pdarray.BinOps set

  • TypeError – Raised if other is not a pdarray or the pdarray.dtype is not a supported dtype
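
Example

A hedged sketch: the hostname and port below are placeholders, and a matching ak.receive_array() call must be waiting on the destination server.

>>> df = ak.DataFrame({'col1': ak.arange(5), 'col2': -1 * ak.arange(5)})
>>> df.transfer("destination-host", 1234)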

unregister()[source]

Unregister this DataFrame object in the arkouda server which was previously registered using register() and/or attached to using attach().

Raises:

RegistrationError – If the object is already unregistered or if there is a server error when attempting to unregister.

Notes

Objects registered with the server are immune to deletion until they are unregistered.

Example

>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df.register("my_table_name")
>>> df.attach("my_table_name")
>>> df.is_registered()
True
>>> df.unregister()
>>> df.is_registered()
False
static unregister_dataframe_by_name(user_defined_name: str) str[source]

Function to unregister DataFrame object by name which was registered with the arkouda server via register().

Parameters:

user_defined_name (str) – Name under which the DataFrame object was registered.

Raises:
  • TypeError – If user_defined_name is not a string.

  • RegistrationError – If there is an issue attempting to unregister any underlying components.

Example

>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df.register("my_table_name")
>>> df.attach("my_table_name")
>>> df.is_registered()
True
>>> df.unregister_dataframe_by_name(