arkouda

Subpackages

Submodules

Package Contents

Classes

ArrayView

A multi-dimensional view of a pdarray. Arkouda ArrayView behaves similarly to numpy's ndarray.

BitVector

Represent integers as bit vectors, e.g. a set of flags.

CachedAccessor

Custom property-like object.

Categorical

Represents an array of values belonging to named categories. Converting a Strings object to Categorical often saves memory and speeds up operations.

DataFrame

A DataFrame structure based on arkouda arrays.

Datetime

Represents a date and/or time.

DatetimeAccessor

DiffAggregate

A column in a GroupBy that has been differenced.

ErrorMode

Generic enumeration.

Fields

An integer-backed representation of a set of named binary fields, e.g. flags.

Generator

Generator exposes a number of methods for generating random

GroupBy

Group an array or list of arrays by value, usually in preparation for aggregating the within-group values of another array.

IPv4

Represent integers as IPv4 addresses.

Index

LogLevel

Generic enumeration.

MultiIndex

Power_divergenceResult

The results of a power divergence statistical test.

Properties

Row

This class is useful for printing and working with individual rows of a

SegArray

Series

One-dimensional arkouda array with axis labels.

StringAccessor

Timedelta

Represents a duration, the difference between two dates or times.

pdarray

The basic arkouda array class. This class contains only the attributes of the array; the data resides on the arkouda server.

Functions

BitVectorizer([width, reverse])

Make a callback (i.e. function) that can be called on an array to create a BitVector.

abs(→ arkouda.pdarrayclass.pdarray)

Return the element-wise absolute value of the array.

akabs(→ arkouda.pdarrayclass.pdarray)

Return the element-wise absolute value of the array.

akcast(→ Union[Union[arkouda.pdarrayclass.pdarray, ...)

Cast an array to another dtype.

align(*args)

Map multiple arrays of sparse identifiers to a common 0-up index.

all(→ numpy.bool_)

Return True iff all elements of the array evaluate to True.

any(→ numpy.bool_)

Return True iff any element of the array evaluates to True.

arange(→ arkouda.pdarrayclass.pdarray)

arange([start,] stop[, stride,] dtype=int64)

arccos(→ arkouda.pdarrayclass.pdarray)

Return the element-wise inverse cosine of the array. The result is between 0 and pi.

arccosh(→ arkouda.pdarrayclass.pdarray)

Return the element-wise inverse hyperbolic cosine of the array.

arcsin(→ arkouda.pdarrayclass.pdarray)

Return the element-wise inverse sine of the array. The result is between -pi/2 and pi/2.

arcsinh(→ arkouda.pdarrayclass.pdarray)

Return the element-wise inverse hyperbolic sine of the array.

arctan(→ arkouda.pdarrayclass.pdarray)

Return the element-wise inverse tangent of the array. The result is between -pi/2 and pi/2.

arctan2(→ arkouda.pdarrayclass.pdarray)

Return the element-wise inverse tangent of the array pair. The result chosen is the

arctanh(→ arkouda.pdarrayclass.pdarray)

Return the element-wise inverse hyperbolic tangent of the array.

argmax(→ Union[numpy.int64, numpy.uint64])

Return the index of the first occurrence of the array max value.

argmaxk(→ pdarray)

Find the indices corresponding to the k maximum values of an array.

argmin(→ Union[numpy.int64, numpy.uint64])

Return the index of the first occurrence of the array min value.

argmink(→ pdarray)

Find the indices corresponding to the k minimum values of an array.

argsort(→ arkouda.pdarrayclass.pdarray)

Return the permutation that sorts the array.

array(→ Union[arkouda.pdarrayclass.pdarray, ...)

Convert a Python or Numpy Iterable to a pdarray or Strings object, sending

attach(name)

attach_all(names)

Attach to all objects registered with the names provided

attach_pdarray(→ pdarray)

class method to return a pdarray attached to the registered name in the arkouda

bigint_from_uint_arrays(arrays[, max_bits])

Create a bigint pdarray from an iterable of uint pdarrays.

broadcast(segments, values[, size, permutation])

Broadcast a dense column vector to the rows of a sparse matrix or grouped array.

broadcast_dims(→ Tuple[int, Ellipsis])

Algorithm to determine shape of broadcasted PD array given two array shapes

broadcast_to_shape(→ pdarray)

expand an array's rank to the specified shape using broadcasting

cast(→ Union[Union[arkouda.pdarrayclass.pdarray, ...)

Cast an array to another dtype.

ceil(→ arkouda.pdarrayclass.pdarray)

Return the element-wise ceiling of the array.

check_np_dtype(→ None)

Assert that numpy dtype dt is one of the dtypes supported

chisquare(f_obs[, f_exp, ddof])

Computes the chi square statistic and p-value.

clear(→ None)

Send a clear message to clear all unregistered data from the server symbol table

clip(→ arkouda.pdarrayclass.pdarray)

Clip (limit) the values in an array to a given range [lo,hi]

clz(→ pdarray)

Count leading zeros for each integer in an array.

coargsort(→ arkouda.pdarrayclass.pdarray)

Return the permutation that groups the rows (left-to-right), if the

compute_join_size(→ Tuple[int, int])

Compute the internal size of a hypothetical join between a and b. Returns

concatenate(→ Union[arkouda.pdarrayclass.pdarray, ...)

Concatenate a list or tuple of pdarray or Strings objects into

convert_if_categorical(values)

Convert a Categorical array to Strings for display

corr(→ numpy.float64)

Return the correlation between x and y

cos(→ arkouda.pdarrayclass.pdarray)

Return the element-wise cosine of the array.

cosh(→ arkouda.pdarrayclass.pdarray)

Return the element-wise hyperbolic cosine of the array.

cov(→ numpy.float64)

Return the covariance of x and y

create_pdarray(→ pdarray)

Return a pdarray instance pointing to an array created by the arkouda server.

ctz(→ pdarray)

Count trailing zeros for each integer in an array.

cumprod(→ arkouda.pdarrayclass.pdarray)

Return the cumulative product over the array.

cumsum(→ arkouda.pdarrayclass.pdarray)

Return the cumulative sum over the array.

date_operators(cls)

date_range([start, end, periods, freq, tz, normalize, ...])

Creates a fixed frequency Datetime range. Alias for

deg2rad(→ arkouda.pdarrayclass.pdarray)

Converts angles element-wise from degrees to radians.

disableVerbose(→ None)

Disables verbose logging (DEBUG log level) for all ArkoudaLoggers, setting

divmod(→ Tuple[pdarray, pdarray])

Return the element-wise quotient and remainder, (x // y, x % y), for a dividend array x and divisor y.

dot(→ Union[numpy.int64, numpy.float64, numpy.uint64, ...)

Returns the sum of the elementwise product of two arrays of the same size (the dot product) or

dtype(x)

enableVerbose(→ None)

Enables verbose logging (DEBUG log level) for all ArkoudaLoggers

exp(→ arkouda.pdarrayclass.pdarray)

Return the element-wise exponential of the array.

expm1(→ arkouda.pdarrayclass.pdarray)

Return the element-wise exponential of the array minus one.

export(read_path[, dataset_name, write_file, ...])

Export data from Arkouda file (Parquet/HDF5) to Pandas object or file formatted to be

find(query, space)

Return indices of query items in a search list of items (-1 if not found).

floor(→ arkouda.pdarrayclass.pdarray)

Return the element-wise floor of the array.

fmod(→ pdarray)

Returns the element-wise remainder of division.

from_series(→ Union[arkouda.pdarrayclass.pdarray, ...)

Converts a Pandas Series to an Arkouda pdarray or Strings object. If

full(→ Union[arkouda.pdarrayclass.pdarray, ...)

Create a pdarray filled with fill_value.

full_like(→ arkouda.pdarrayclass.pdarray)

Create a pdarray filled with fill_value of the same size and dtype as an existing

gen_ranges(starts, ends[, stride, return_lengths])

Generate a segmented array of variable-length, contiguous ranges between pairs of

generic_concat(items[, ordered])

getArkoudaLogger(→ ArkoudaLogger)

A convenience method for instantiating an ArkoudaLogger that retrieves the

get_byteorder(→ str)

Get a concrete byteorder (turns '=' into '<' or '>')

get_callback(x)

get_columns(→ List[str])

Get a list of column names from CSV file(s).

get_datasets(→ List[str])

Get the names of the datasets in the provided files

get_filetype(→ str)

Get the type of a file accessible to the server. Supported

get_null_indices(→ Union[arkouda.pdarrayclass.pdarray, ...)

Get null indices of a string column in a Parquet file.

get_server_byteorder(→ str)

Get the server's byteorder

hash(→ Union[Tuple[arkouda.pdarrayclass.pdarray, ...)

Return an element-wise hash of the array or list of arrays.

hist_all(ak_df[, cols])

Create a grid plot histogramming all numeric columns in ak dataframe

histogram(→ Tuple[arkouda.pdarrayclass.pdarray, ...)

Compute a histogram of evenly spaced bins over the range of an array.

histogram2d(→ Tuple[arkouda.pdarrayclass.pdarray, ...)

Compute the bi-dimensional histogram of two data samples with evenly spaced bins

histogramdd(→ Tuple[arkouda.pdarrayclass.pdarray, ...)

Compute the multidimensional histogram of data in sample with evenly spaced bins.

import_data(read_path[, write_file, return_obj, index])

Import data from a file saved by Pandas (HDF5/Parquet) to Arkouda object and/or

in1d(→ Union[arkouda.pdarrayclass.pdarray, ...)

Test whether each element of a 1-D array is also present in a second array.

in1d_intervals(vals, intervals[, symmetric])

Test each value for membership in any of a set of half-open (pythonic)

indexof1d(→ Union[arkouda.pdarrayclass.pdarray, ...)

Returns an integer array of the index values where the values of the first

information(→ str)

Returns JSON formatted string containing information about the objects in names

intersect(a, b[, positions, unique])

Find the intersection of two arkouda arrays.

intersect1d(→ Union[arkouda.pdarrayclass.pdarray, ...)

Find the intersection of two arrays.

interval_lookup(keys, values, arguments[, fillvalue, ...])

Apply a function defined over intervals to an array of arguments.

intx(a, b)

Find all the rows that are in both dataframes.

invert_permutation(perm)

Find the inverse of a permutation array.

ip_address(values)

Convert values to an Arkouda array of IP addresses.

isSupportedInt(num)

isSupportedNumber(num)

is_cosorted(arrays)

Return True iff the arrays are cosorted, i.e., if the arrays were columns in a table

is_ipv4(→ arkouda.pdarrayclass.pdarray)

Indicate which values are ipv4 when passed data containing IPv4 and IPv6 values.

is_ipv6(→ arkouda.pdarrayclass.pdarray)

Indicate which values are ipv6 when passed data containing IPv4 and IPv6 values.

is_registered(→ bool)

Determine if the name provided is associated with a registered Object

is_sorted(→ numpy.bool_)

Return True iff the array is monotonically non-decreasing.

isfinite(→ arkouda.pdarrayclass.pdarray)

Return the element-wise isfinite check applied to the array.

isinf(→ arkouda.pdarrayclass.pdarray)

Return the element-wise isinf check applied to the array.

isnan(→ arkouda.pdarrayclass.pdarray)

Return the element-wise isnan check applied to the array.

join_on_eq_with_dt(...)

Performs an inner-join on equality between two integer arrays where

left_align(left, right)

Map two arrays of sparse identifiers to the 0-up index set implied by the left array,

linspace(→ arkouda.pdarrayclass.pdarray)

Create a pdarray of linearly-spaced floats in a closed interval.

list_registry([detailed])

Return a list containing the names of all registered objects

list_symbol_table(→ List[str])

Return a list containing the names of all objects in the symbol table

load(→ Union[arkouda.pdarrayclass.pdarray, ...)

Load a pdarray previously saved with pdarray.save().

load_all(→ Mapping[str, ...)

Load multiple pdarrays, Strings, SegArrays, or Categoricals previously

log(→ arkouda.pdarrayclass.pdarray)

Return the element-wise natural log of the array.

log10(→ arkouda.pdarrayclass.pdarray)

Return the element-wise base 10 log of the array.

log1p(→ arkouda.pdarrayclass.pdarray)

Return the element-wise natural log of one plus the array.

log2(→ arkouda.pdarrayclass.pdarray)

Return the element-wise base 2 log of the array.

lookup(keys, values, arguments[, fillvalue])

Apply the function defined by the mapping keys --> values to arguments.

ls(→ List[str])

This function calls the h5ls utility on an HDF5 file visible to the

ls_csv(→ List[str])

Used for identifying the datasets within a file when a CSV does not

max(→ arkouda.dtypes.numpy_scalars)

Return the maximum value of the array.

maxk(→ pdarray)

Find the k maximum values of an array.

mean(→ numpy.float64)

Return the mean of the array.

merge(→ DataFrame)

Merge Arkouda DataFrames with a database-style join.

min(→ arkouda.dtypes.numpy_scalars)

Return the minimum value of the array.

mink(→ pdarray)

Find the k minimum values of an array.

mod(→ pdarray)

Returns the element-wise remainder of division.

ones(→ arkouda.pdarrayclass.pdarray)

Create a pdarray filled with ones.

ones_like(→ arkouda.pdarrayclass.pdarray)

Create a one-filled pdarray of the same size and dtype as an existing

parity(→ pdarray)

Find the bit parity (XOR of all bits) for each integer in an array.

plot_dist(b, h[, log, xlabel, newfig])

Plot the distribution and cumulative distribution of histogram Data

popcount(→ pdarray)

Find the population (number of bits set) for each integer in an array.

power(→ pdarray)

Raises an array to a power. If where is given, the operation will only take place in the positions

power_divergence(f_obs[, f_exp, ddof, lambda_])

Computes the power divergence statistic and p-value.

pretty_print_information(→ None)

Prints verbose information for each object in names in a human readable format

prod(→ numpy.float64)

Return the product of all elements in the array. Return value is

rad2deg(→ arkouda.pdarrayclass.pdarray)

Converts angles element-wise from radians to degrees.

randint(→ arkouda.pdarrayclass.pdarray)

Generate a pdarray of randomized int, float, or bool values in a

random_strings_lognormal(→ arkouda.strings.Strings)

Generate random strings with log-normally distributed lengths and

random_strings_uniform(→ arkouda.strings.Strings)

Generate random strings with lengths uniformly distributed between

read(→ Union[arkouda.pdarrayclass.pdarray, ...)

Read datasets from files.

read_csv(→ Union[arkouda.pdarrayclass.pdarray, ...)

Read CSV file(s) into Arkouda objects. If more than one dataset is found, the objects

read_hdf(→ Union[arkouda.pdarrayclass.pdarray, ...)

Read Arkouda objects from HDF5 file/s

read_parquet(→ Union[arkouda.pdarrayclass.pdarray, ...)

Read Arkouda objects from Parquet file/s

read_tagged_data(filenames[, datasets, strictTypes, ...])

Read datasets from files and tag each record to the file it was read from.

receive(hostname, port)

Receive a pdarray sent by pdarray.transfer().

receive_dataframe(hostname, port)

Receive a pdarray sent by dataframe.transfer().

register_all(data)

Register all objects in the provided dictionary

resolve_scalar_dtype(→ str)

Try to infer what dtype arkouda_server should treat val as.

restore(filename)

Return data saved using ak.snapshot

right_align(left, right)

Map two arrays of sparse values to the 0-up index set implied by the right array,

rotl(→ pdarray)

Rotate bits of <x> to the left by <rot>.

rotr(→ pdarray)

Rotate bits of <x> to the right by <rot>.

round(→ arkouda.pdarrayclass.pdarray)

Return the element-wise rounding of the array.

save_all(→ None)

DEPRECATED

search_intervals(vals, intervals[, tiebreak, hierarchical])

Given an array of query vals and non-overlapping, closed intervals, return

segarray(segments, values[, lengths, grouping])

Alias for the from_parts function. Prevents user from needing to call ak.SegArray constructor

setdiff1d(→ Union[arkouda.pdarrayclass.pdarray, ...)

Find the set difference of two arrays.

setxor1d(→ Union[arkouda.pdarrayclass.pdarray, ...)

Find the set exclusive-or (symmetric difference) of two arrays.

sign(→ arkouda.pdarrayclass.pdarray)

Return the element-wise sign of the array.

sin(→ arkouda.pdarrayclass.pdarray)

Return the element-wise sine of the array.

sinh(→ arkouda.pdarrayclass.pdarray)

Return the element-wise hyperbolic sine of the array.

skew(→ numpy.float64)

Computes the sample skewness of an array.

snapshot(filename)

Create a snapshot of the current Arkouda namespace. All currently accessible variables containing

sort(→ arkouda.pdarrayclass.pdarray)

Return a sorted copy of the array. Only sorts numeric arrays;

sqrt(→ pdarray)

Takes the square root of array. If where is given, the operation will only take place in

square(→ arkouda.pdarrayclass.pdarray)

Return the element-wise square of the array.

standard_normal(→ arkouda.pdarrayclass.pdarray)

Draw real numbers from the standard normal distribution.

std(→ numpy.float64)

Return the standard deviation of values in the array. The standard

string_operators(cls)

sum(→ numpy.float64)

Return the sum of all elements in the array.

tan(→ arkouda.pdarrayclass.pdarray)

Return the element-wise tangent of the array.

tanh(→ arkouda.pdarrayclass.pdarray)

Return the element-wise hyperbolic tangent of the array.

timedelta_range([start, end, periods, freq, name, closed])

Return a fixed frequency TimedeltaIndex, with day as the default

to_csv(columns, prefix_path[, names, col_delim, overwrite])

Write Arkouda object(s) to CSV file(s). All CSV Files written by Arkouda

to_hdf(→ None)

Save multiple named pdarrays to HDF5 files.

to_parquet(→ None)

Save multiple named pdarrays to Parquet files.

translate_np_dtype(→ Tuple[str, int])

Split numpy dtype dt into its kind and byte size, raising

trunc(→ arkouda.pdarrayclass.pdarray)

Return the element-wise truncation of the array.

uniform(size[, low, high, seed])

Generate a pdarray with uniformly distributed random float values

union1d(→ Union[arkouda.pdarrayclass.pdarray, ...)

Find the union of two arrays/List of Arrays.

unique(→ Union[groupable, Tuple[groupable, ...)

Find the unique elements of an array.

unregister(→ str)

unregister_all(names)

Unregister all names provided

unregister_pdarray_by_name(→ None)

Unregister a named pdarray in the arkouda server which was previously

unsqueeze(p)

update_hdf(columns, prefix_path[, names, repack])

Overwrite the datasets with name appearing in names or keys in columns if columns

value_counts(→ Union[Categorical, ...)

Count the occurrences of the unique values of an array.

var(→ numpy.float64)

Return the variance of values in the array.

where(→ Union[arkouda.pdarrayclass.pdarray, ...)

Returns an array with elements chosen from A and B based upon a

write_log(log_msg[, tag, log_lvl])

Allows the user to write custom logs.

xlogy(x, y)

Computes x * log(y).

zero_up(vals)

Map an array of sparse values to 0-up indices.

zeros(→ arkouda.pdarrayclass.pdarray)

Create a pdarray filled with zeros.

zeros_like(→ arkouda.pdarrayclass.pdarray)

Create a zero-filled pdarray of the same size and dtype as an existing

Attributes

arkouda.ARKOUDA_SUPPORTED_DTYPES
arkouda.AllSymbols = '__AllSymbols__'
class arkouda.ArrayView(base: arkouda.pdarrayclass.pdarray, shape, order='row_major')[source]

A multi-dimensional view of a pdarray. Arkouda ArrayView behaves similarly to numpy's ndarray. The base pdarray is stored in one dimension but can be indexed and treated logically as if it were multi-dimensional.

base

The base pdarray that is being viewed as a multi-dimensional object

Type:

pdarray

dtype

The element type of the base pdarray (equivalent to base.dtype)

Type:

dtype

size

The number of elements in the base pdarray (equivalent to base.size)

Type:

int_scalars

shape

A pdarray specifying the sizes of each dimension of the array

Type:

pdarray[int]

ndim

Number of dimensions (equivalent to shape.size)

Type:

int_scalars

itemsize

The size in bytes of each element (equivalent to base.itemsize)

Type:

int_scalars

order

Index order to read and write the elements. By default, or if 'C'/'row_major', data is read and written in row-major order. If 'F'/'column_major', data is read and written in column-major order.

Type:

str {‘C’/’row_major’ | ‘F’/’column_major’}
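
Examples

A minimal usage sketch (an assumption for illustration: a running arkouda server reachable via ak.connect(), with reshape returning an ArrayView as in the examples below):

>>> import arkouda as ak
>>> ak.connect()
>>> av = ak.arange(6).reshape(2, 3)  # 1-D base pdarray viewed as 2x3
>>> av.shape.to_list()
[2, 3]
>>> av.ndim
2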

objType = 'ArrayView'
to_hdf(prefix_path: str, dataset: str = 'ArrayView', mode: str = 'truncate', file_type: str = 'distribute')[source]

Save the current ArrayView object to hdf5 file

Parameters:
  • prefix_path (str) – Path to the file to write the dataset to

  • dataset (str) – Name of the dataset to write

  • mode (str (truncate | append)) – Default: truncate Mode to write the dataset in. Truncate will overwrite any existing files. Append will add the dataset to an existing file.

  • file_type (str (single|distribute)) – Default: distribute Indicates the format to save the file. Single will store in a single file. Distribute will store the data in a file per locale.

to_list() list[source]

Convert the ArrayView to a list, transferring array data from the Arkouda server to client-side Python. Note: if the ArrayView size exceeds client.maxTransferBytes, a RuntimeError is raised.

Returns:

A list with the same data as the ArrayView

Return type:

list

Raises:

RuntimeError – Raised if there is a server-side error thrown, if the ArrayView size exceeds the built-in client.maxTransferBytes size limit, or if the bytes received do not match the expected number of bytes

Notes

The number of bytes in the array cannot exceed client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting client.maxTransferBytes to a larger value, but proceed with caution.

See also

to_ndarray

Examples

>>> a = ak.arange(6).reshape(2,3)
>>> a.to_list()
[[0, 1, 2], [3, 4, 5]]
>>> type(a.to_list())
<class 'list'>
to_ndarray() numpy.ndarray[source]

Convert the ArrayView to a np.ndarray, transferring array data from the Arkouda server to client-side Python. Note: if the ArrayView size exceeds client.maxTransferBytes, a RuntimeError is raised.

Returns:

A numpy ndarray with the same attributes and data as the ArrayView

Return type:

np.ndarray

Raises:

RuntimeError – Raised if there is a server-side error thrown, if the ArrayView size exceeds the built-in client.maxTransferBytes size limit, or if the bytes received do not match the expected number of bytes

Notes

The number of bytes in the array cannot exceed client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting client.maxTransferBytes to a larger value, but proceed with caution.

See also

array, to_list

Examples

>>> a = ak.arange(6).reshape(2,3)
>>> a.to_ndarray()
array([[0, 1, 2],
       [3, 4, 5]])
>>> type(a.to_ndarray())
<class 'numpy.ndarray'>
update_hdf(prefix_path: str, dataset: str = 'ArrayView', repack: bool = True)[source]

Overwrite the dataset with the name provided with this array view object. If the dataset does not exist it is added.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files

  • repack (bool) – Default: True HDF5 does not release memory on delete. When True, the inaccessible data (that was overwritten) is removed. When False, the data remains, but is inaccessible. Setting to false will yield better performance, but will cause file sizes to expand.

Return type:

str - success message if successful

Raises:

RuntimeError – Raised if a server-side error is thrown saving the array view

Notes

  • If the file does not contain a File_Format attribute to indicate how it was saved, the file name is checked for _LOCALE#### to determine if it is distributed.

  • If the dataset provided does not exist, it will be added

  • Because HDF5 deletes do not release memory, this will create a copy of the file with the new data

class arkouda.BitVector(values, width=64, reverse=False)[source]

Bases: arkouda.pdarrayclass.pdarray

Represent integers as bit vectors, e.g. a set of flags.

Parameters:
  • values (pdarray, int64) – The integers to represent as bit vectors

  • width (int) – The number of bit fields in the vector

  • reverse (bool) – If True, display bits from least significant (left) to most significant (right). By default, the most significant bit is the left-most bit.

Returns:

bitvectors – The array of binary vectors

Return type:

BitVector

Notes

This class is a thin wrapper around pdarray that mostly affects how values are displayed to the user. Operators and methods will typically treat this class like a uint64 pdarray.

conserves
special_objType = 'BitVector'
format(x)[source]

Format a single binary vector as a string.

classmethod from_return_msg(rep_msg)[source]
opeq(other, op)[source]
register(user_defined_name)[source]

Register this BitVector object and underlying components with the Arkouda server

Parameters:

user_defined_name (str) – user defined name the BitVector is to be registered under, this will be the root name for underlying components

Returns:

The same BitVector which is now registered with the arkouda server and has an updated name. This is an in-place modification, the original is returned to support a fluid programming style. Please note you cannot register two different BitVectors with the same name.

Return type:

BitVector

Raises:
  • TypeError – Raised if user_defined_name is not a str

  • RegistrationError – If the server was unable to register the BitVector with the user_defined_name

Notes

Objects registered with the server are immune to deletion until they are unregistered.

to_list()[source]

Export data to a list of string-formatted bit vectors.

to_ndarray()[source]

Export data to a numpy array of string-formatted bit vectors.
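
Examples

A hedged usage sketch (assumes a running arkouda server; the exact display strings come from BitVector.format and may vary by version):

>>> import arkouda as ak
>>> values = ak.array([0, 1, 5, 15])    # int64 flag words
>>> bv = ak.BitVector(values, width=4)  # view each integer as 4 bit fields
>>> strs = bv.to_list()                 # string-formatted bit vectors
>>> len(strs)
4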

arkouda.BitVectorizer(width=64, reverse=False)[source]

Make a callback (i.e. function) that can be called on an array to create a BitVector.

Parameters:
  • width (int) – The number of bit fields in the vector

  • reverse (bool) – If True, display bits from least significant (left) to most significant (right). By default, the most significant bit is the left-most bit.

Returns:

bitvectorizer – A function that takes an array and returns a BitVector instance

Return type:

callable
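
Examples

A short sketch of using the returned callback (assumes a running arkouda server):

>>> import arkouda as ak
>>> make_bv = ak.BitVectorizer(width=8)   # callable wrapping BitVector creation
>>> bv = make_bv(ak.array([3, 7]))        # same effect as ak.BitVector(..., width=8)
>>> isinstance(bv, ak.BitVector)
True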

class arkouda.CachedAccessor(name: str, accessor)[source]

Custom property-like object. A descriptor for caching accessors.

Parameters:
  • name (str) – Namespace that will be accessed under, e.g. df.foo.

  • accessor (cls) – Class with the extension methods.

Notes

For accessor, the class's __init__ method assumes that one of Series, DataFrame, or Index is passed as the single argument data.

class arkouda.Categorical(values, **kwargs)[source]

Represents an array of values belonging to named categories. Converting a Strings object to Categorical often saves memory and speeds up operations, especially if there are many repeated values, at the cost of some one-time work in initialization.

Parameters:
  • values (Strings) – String values to convert to categories

  • NAvalue (str scalar) – The value to use to represent missing/null data

categories

The set of category labels (determined automatically)

Type:

Strings

codes

The category indices of the values or -1 for N/A

Type:

pdarray, int64

permutation

The permutation that groups the values in the same order as categories

Type:

pdarray, int64

segments

When values are grouped, the starting offset of each group

Type:

pdarray, int64

size

The number of items in the array

Type:

Union[int,np.int64]

nlevels

The number of distinct categories

Type:

Union[int,np.int64]

ndim

The rank of the array (currently only rank 1 arrays supported)

Type:

Union[int,np.int64]

shape

The sizes of each dimension of the array

Type:

tuple
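
Examples

A construction sketch (assumes a running arkouda server; the labels are hypothetical):

>>> import arkouda as ak
>>> s = ak.array(["low", "high", "low", "medium"])
>>> cat = ak.Categorical(s)   # one code per value, one label per category
>>> cat.size
4
>>> cat.codes.size
4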

property nbytes

The size of the Categorical in bytes.

Returns:

The size of the Categorical in bytes.

Return type:

int

BinOps
RegisterablePieces
RequiredPieces
dtype
objType = 'Categorical'
permutation
segments
argsort()[source]
static attach(user_defined_name: str) Categorical[source]

DEPRECATED Function to return a Categorical object attached to the registered name in the arkouda server which was registered using register()

Parameters:

user_defined_name (str) – user defined name which Categorical object was registered under

Returns:

The Categorical object created by re-attaching to the corresponding server components

Return type:

Categorical

Raises:

TypeError – if user_defined_name is not a string

concatenate(others: Sequence[Categorical], ordered: bool = True) Categorical[source]

Merge this Categorical with other Categorical objects in the array, concatenating the arrays and synchronizing the categories.

Parameters:
  • others (Sequence[Categorical]) – The Categorical arrays to concatenate and merge with this one

  • ordered (bool) – If True (default), the arrays will be appended in the order given. If False, array data may be interleaved in blocks, which can greatly improve performance but results in non-deterministic ordering of elements.

Returns:

The merged Categorical object

Return type:

Categorical

Raises:

TypeError – Raised if any others array objects are not Categorical objects

Notes

This operation can be expensive – slower than concatenating Strings.

contains(substr: bytes | arkouda.dtypes.str_scalars, regex: bool = False) arkouda.pdarrayclass.pdarray[source]

Check whether each element contains the given substring.

Parameters:
  • substr (Union[bytes, str_scalars]) – The substring to search for

  • regex (bool) – Indicates whether substr is a regular expression Note: only handles regular expressions supported by re2 (does not support lookaheads/lookbehinds)

Returns:

True for elements that contain substr, False otherwise

Return type:

pdarray, bool

Raises:
  • TypeError – Raised if the substr parameter is not bytes or str_scalars

  • ValueError – Raised if substr is not a valid regex

  • RuntimeError – Raised if there is a server-side error thrown

Notes

This method can be significantly faster than the corresponding method on Strings objects, because it searches the unique category labels instead of the full array.
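
Examples

A short sketch (labels are hypothetical):

>>> fruits = ak.Categorical(ak.array(["apple", "banana", "apple"]))
>>> fruits.contains("an").to_list()   # only "banana" contains "an"
[False, True, False]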

endswith(substr: bytes | arkouda.dtypes.str_scalars, regex: bool = False) arkouda.pdarrayclass.pdarray[source]

Check whether each element ends with the given substring.

Parameters:
  • substr (Union[bytes, str_scalars]) – The substring to search for

  • regex (bool) – Indicates whether substr is a regular expression Note: only handles regular expressions supported by re2 (does not support lookaheads/lookbehinds)

Returns:

True for elements that end with substr, False otherwise

Return type:

pdarray, bool

Raises:
  • TypeError – Raised if the substr parameter is not bytes or str_scalars

  • ValueError – Raised if substr is not a valid regex

  • RuntimeError – Raised if there is a server-side error thrown

Notes

This method can be significantly faster than the corresponding method on Strings objects, because it searches the unique category labels instead of the full array.

classmethod from_codes(codes: arkouda.pdarrayclass.pdarray, categories: arkouda.strings.Strings, permutation=None, segments=None, **kwargs) Categorical[source]

Make a Categorical from codes and categories arrays. If codes and categories have already been pre-computed, this constructor saves time. If not, please use the normal constructor.

Parameters:
  • codes (pdarray, int64) – Category indices of each value

  • categories (Strings) – Unique category labels

  • permutation (pdarray, int64) – The permutation that groups the values in the same order as categories

  • segments (pdarray, int64) – When values are grouped, the starting offset of each group

Returns:

The Categorical object created from the input parameters

Return type:

Categorical

Raises:

TypeError – Raised if codes is not a pdarray of int64 objects or if categories is not a Strings object
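
Examples

A sketch of building a Categorical from pre-computed parts (the codes and labels here are hypothetical):

>>> codes = ak.array([0, 1, 1, 0])             # indices into categories
>>> categories = ak.array(["small", "large"])  # unique labels
>>> cat = ak.Categorical.from_codes(codes, categories)
>>> cat.size
4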

classmethod from_return_msg(rep_msg) Categorical[source]

Create categorical from return message from server

Notes

This is currently only used when reading a Categorical from HDF5 files.

group() arkouda.pdarrayclass.pdarray[source]

Return the permutation that groups the array, placing equivalent categories together. All instances of the same category are guaranteed to lie in one contiguous block of the permuted array, but the blocks are not necessarily ordered.

Returns:

The permutation that groups the array by value

Return type:

pdarray

See also

GroupBy, unique

Notes

This method is faster than the corresponding Strings method. If the Categorical was created from a Strings object, then this function simply returns the cached permutation. Even if the Categorical was created using from_codes(), this function will be faster than Strings.group() because it sorts dense integer values, rather than 128-bit hash values.
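
Examples

A short sketch of applying the grouping permutation (labels are hypothetical):

>>> cat = ak.Categorical(ak.array(["b", "a", "b", "a"]))
>>> perm = cat.group()   # permutation placing equal categories together
>>> grouped = cat[perm]  # equal values now lie in contiguous blocks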

hash() Tuple[arkouda.pdarrayclass.pdarray, arkouda.pdarrayclass.pdarray][source]

Compute a 128-bit hash of each element of the Categorical.

Returns:

A tuple of two int64 pdarrays. The ith hash value is the concatenation of the ith values from each array.

Return type:

Tuple[pdarray,pdarray]

Notes

The implementation uses SipHash128, a fast and balanced hash function (used by Python for dictionaries and sets). For realistic numbers of strings (up to about 10**15), the probability of a collision between two 128-bit hash values is negligible.

in1d(test: arkouda.strings.Strings | Categorical) arkouda.pdarrayclass.pdarray[source]

Test whether each element of the Categorical object is also present in the test Strings or Categorical object.

Returns a boolean array the same length as self that is True where an element of self is in test and False otherwise.

Parameters:

test (Union[Strings,Categorical]) – The values against which to test each value of self.

Returns:

A boolean array indicating, for each value of self, whether it is present in the test Strings or Categorical object.

Return type:

pdarray, bool

Raises:

TypeError – Raised if test is not a Strings or Categorical object

Notes

in1d can be considered as an element-wise function version of the python keyword in, for 1-D sequences. in1d(a, b) is logically equivalent to ak.array([item in b for item in a]), but is much faster and scales to arbitrarily large a.

Examples

>>> strings = ak.array([f'String {i}' for i in range(0,5)])
>>> cat = ak.Categorical(strings)
>>> ak.in1d(cat,strings)
array([True, True, True, True, True])
>>> strings = ak.array([f'String {i}' for i in range(5,9)])
>>> catTwo = ak.Categorical(strings)
>>> ak.in1d(cat,catTwo)
array([False, False, False, False, False])
info() str[source]

Returns a JSON formatted string containing information about all components of self

Parameters:

None

Returns:

JSON string containing information about all components of self

Return type:

str

is_registered() numpy.bool_[source]

Return True iff the object is contained in the registry or is a component of a registered object.

Returns:

Indicates if the object is contained in the registry

Return type:

numpy.bool_

Raises:

RegistrationError – Raised if there’s a server-side error or a mis-match of registered components

Notes

Objects registered with the server are immune to deletion until they are unregistered.

isna()[source]

Find where values are missing or null (as defined by self.NAvalue)

static parse_hdf_categoricals(d: Mapping[str, arkouda.pdarrayclass.pdarray | arkouda.strings.Strings]) Tuple[List[str], Dict[str, Categorical]][source]

This function should be used in conjunction with the load_all function which reads hdf5 files and reconstitutes Categorical objects. Categorical objects use a naming convention and HDF5 structure so they can be identified and constructed for the user.

In general you should not call this method directly

Parameters:

d (Dictionary of String to either Pdarray or Strings object)

Returns:

A 2-tuple of a list of strings containing key names which should be removed and a dictionary of base name to Categorical object

pretty_print_info() None[source]

Prints information about all components of self in a human readable format

Parameters:

None

Return type:

None

register(user_defined_name: str) Categorical[source]

Register this Categorical object and underlying components with the Arkouda server

Parameters:

user_defined_name (str) – user defined name the Categorical is to be registered under, this will be the root name for underlying components

Returns:

The same Categorical which is now registered with the arkouda server and has an updated name. This is an in-place modification, the original is returned to support a fluid programming style. Please note you cannot register two different Categoricals with the same name.

Return type:

Categorical

Raises:
  • TypeError – Raised if user_defined_name is not a str

  • RegistrationError – If the server was unable to register the Categorical with the user_defined_name

Notes

Objects registered with the server are immune to deletion until they are unregistered.

reset_categories() Categorical[source]

Recompute the category labels, discarding any unused labels. This method is often useful after slicing or indexing a Categorical array, when the resulting array only contains a subset of the original categories. In this case, eliminating unused categories can speed up other operations.

Returns:

A Categorical object generated from the current instance

Return type:

Categorical
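
Examples

A sketch of trimming unused labels after slicing (labels are hypothetical):

>>> cat = ak.Categorical(ak.array(["a", "b", "c", "a"]))
>>> sliced = cat[0:2]                    # only "a" and "b" remain in the data
>>> trimmed = sliced.reset_categories()  # the unused "c" label is discarded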

save(prefix_path: str, dataset: str = 'categorical_array', file_format: str = 'HDF5', mode: str = 'truncate', file_type: str = 'distribute', compression: str | None = None) str[source]

DEPRECATED Save the Categorical object to HDF5 or Parquet. The result is a collection of HDF5/Parquet files, one file per locale of the arkouda server, where each filename starts with prefix_path and dataset. Each locale saves its chunk of the array to its corresponding file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in HDF5 files (must not already exist)

  • file_format (str {'HDF5' | 'Parquet'}) – The format to save the file to.

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If 'append', create a new Categorical dataset within existing files.

  • file_type (str ("single" | "distribute")) – Default: "distribute" When set to single, dataset is written to a single file. When distribute, dataset is written to a file per locale. This is only supported by HDF5 files and will have no impact on Parquet files.

  • compression (str (Optional)) – {None | 'snappy' | 'gzip' | 'brotli' | 'zstd' | 'lz4'} The compression type to use when writing. This is only supported for Parquet files and will not be used with HDF5.

Return type:

String message indicating result of save operation

Raises:
  • ValueError – Raised if the lengths of columns and values differ, or the mode is neither ‘truncate’ nor ‘append’

  • TypeError – Raised if prefix_path, dataset, or mode is not a str

Notes

Important implementation notes: (1) Strings state is saved as two datasets within an hdf5 group: one for the string characters and one for the segments corresponding to the start of each string, (2) the hdf5 group is named via the dataset parameter.


set_categories(new_categories, NAvalue=None)[source]

Set categories to user-defined values.

Parameters:
  • new_categories (Strings) – The array of new categories to use. Must be unique.

  • NAvalue (str scalar) – The value to use to represent missing/null data

Returns:

A new Categorical with the user-defined categories. Old values present in new categories will appear unchanged. Old values not present will be assigned the NA value.

Return type:

Categorical
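
Examples

A sketch of remapping to user-defined categories (labels are hypothetical):

>>> cat = ak.Categorical(ak.array(["a", "b", "c"]))
>>> newcats = ak.array(["a", "b"])   # "c" is absent from the new categories
>>> remapped = cat.set_categories(newcats, NAvalue="N/A")  # "c" maps to NA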

sort()[source]
classmethod standardize_categories(arrays, NAvalue='N/A')[source]

Standardize an array of Categoricals so that they share the same categories.

Parameters:
  • arrays (sequence of Categoricals) – The Categoricals to standardize

  • NAvalue (str scalar) – The value to use to represent missing/null data

Returns:

A list of the original Categoricals remapped to the shared categories.

Return type:

List of Categoricals
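
Examples

A sketch (assumes two Categoricals with overlapping labels):

>>> c1 = ak.Categorical(ak.array(["a", "b"]))
>>> c2 = ak.Categorical(ak.array(["b", "c"]))
>>> std1, std2 = ak.Categorical.standardize_categories([c1, c2])
>>> std1.categories.size == std2.categories.size   # shared category set
True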

startswith(substr: bytes | arkouda.dtypes.str_scalars, regex: bool = False) arkouda.pdarrayclass.pdarray[source]

Check whether each element starts with the given substring.

Parameters:
  • substr (Union[bytes, str_scalars]) – The substring to search for

  • regex (bool) – Indicates whether substr is a regular expression Note: only handles regular expressions supported by re2 (does not support lookaheads/lookbehinds)

Returns:

True for elements that start with substr, False otherwise

Return type:

pdarray, bool

Raises:
  • TypeError – Raised if the substr parameter is not bytes or str_scalars

  • ValueError – Raised if substr is not a valid regex

  • RuntimeError – Raised if there is a server-side error thrown

Notes

This method can be significantly faster than the corresponding method on Strings objects, because it searches the unique category labels instead of the full array.

to_hdf(prefix_path, dataset='categorical_array', mode='truncate', file_type='distribute')[source]

Save the Categorical to HDF5. The result is a collection of HDF5 files, one file per locale of the arkouda server, where each filename starts with prefix_path.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files will share

  • dataset (str) – Name prefix for saved data within the HDF5 file

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, add data as a new column to existing files.

  • file_type (str ("single" | "distribute")) – Default: “distribute” When set to single, dataset is written to a single file. When distribute, dataset is written on a file per locale.

Return type:

None

See also

load

to_list() List[source]

Convert the Categorical to a list, transferring data from the arkouda server to Python. This conversion discards category information and produces a list of strings. If the array exceeds a built-in size limit, a RuntimeError is raised.

Returns:

A list of strings corresponding to the values in this Categorical

Return type:

list

Notes

The number of bytes in the Categorical cannot exceed ak.client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting ak.client.maxTransferBytes to a larger value, but proceed with caution.
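
Examples

A minimal sketch (labels are hypothetical):

>>> cat = ak.Categorical(ak.array(["x", "y", "x"]))
>>> cat.to_list()
['x', 'y', 'x']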

to_ndarray() numpy.ndarray[source]

Convert the array to a np.ndarray, transferring array data from the arkouda server to Python. This conversion discards category information and produces an ndarray of strings. If the array exceeds a built-in size limit, a RuntimeError is raised.

Returns:

A numpy ndarray of strings corresponding to the values in this array

Return type:

np.ndarray

Notes

The number of bytes in the array cannot exceed ak.client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting ak.client.maxTransferBytes to a larger value, but proceed with caution.

to_parquet(prefix_path: str, dataset: str = 'categorical_array', mode: str = 'truncate', compression: str | None = None) str[source]

This functionality is currently not supported and will raise a RuntimeError; support is in development. Save the Categorical to Parquet. The result is a collection of files, one file per locale of the arkouda server, where each filename starts with prefix_path. Each locale saves its chunk of the array to its corresponding file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in HDF5 files (must not already exist)

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, create a new Categorical dataset within existing files.

  • compression (str (Optional)) – Default None Provide the compression type to use when writing the file. Supported values: snappy, gzip, brotli, zstd, lz4

Return type:

String message indicating result of save operation

Raises:

RuntimeError – Raised due to compatibility issues between Categorical and Parquet.

Notes

  • The prefix_path must be visible to the arkouda server and the user must have write permission.

  • Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales for file_type='distribute'.

  • 'append' write mode is supported, but is not efficient.

  • If any of the output files already exist and the mode is 'truncate', they will be overwritten. If the mode is 'append' and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result.

  • Any file extension can be used. The file I/O does not rely on the extension to determine the file format.

See also

to_hdf

to_strings() arkouda.strings.Strings[source]

Convert the Categorical to Strings.

Returns:

A Strings object corresponding to the values in this Categorical.

Return type:

arkouda.strings.Strings

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> a = ak.array(["a", "b", "c"])
>>> c = ak.Categorical(a)
>>> c.to_strings()
array(['a', 'b', 'c'])
>>> isinstance(c.to_strings(), ak.Strings)
True
transfer(hostname: str, port: arkouda.dtypes.int_scalars)[source]

Sends a Categorical object to a different Arkouda server

Parameters:
  • hostname (str) – The hostname where the Arkouda server intended to receive the Categorical is running.

  • port (int_scalars) – The port to send the array over. This needs to be an open port (i.e., not one that the Arkouda server is running on). This will open up numLocales ports in succession, so ports in the range {port..(port+numLocales)} will be used (e.g., running an Arkouda server of 4 nodes, if port 1234 is passed, Arkouda will use ports 1234, 1235, 1236, and 1237 to send the array data). This port must match the port passed to the call to ak.receive_array().

Return type:

A message indicating a complete transfer

Raises:
  • ValueError – Raised if the op is not within the pdarray.BinOps set

  • TypeError – Raised if other is not a pdarray or the pdarray.dtype is not a supported dtype

unique() Categorical[source]
unregister() None[source]

Unregister this Categorical object in the arkouda server which was previously registered using register() and/or attached to using attach()

Raises:

RegistrationError – If the object is already unregistered or if there is a server error when attempting to unregister

Notes

Objects registered with the server are immune to deletion until they are unregistered.

static unregister_categorical_by_name(user_defined_name: str) None[source]

Function to unregister Categorical object by name which was registered with the arkouda server via register()

Parameters:

user_defined_name (str) – Name under which the Categorical object was registered

Raises:
  • TypeError – if user_defined_name is not a string

  • RegistrationError – if there is an issue attempting to unregister any underlying components

update_hdf(prefix_path, dataset='categorical_array', repack=True)[source]

Overwrite the dataset with the name provided with this Categorical object. If the dataset does not exist it is added.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files

  • repack (bool) – Default: True HDF5 does not release memory on delete. When True, the inaccessible data (that was overwritten) is removed. When False, the data remains, but is inaccessible. Setting to false will yield better performance, but will cause file sizes to expand.

Return type:

None

Raises:

RuntimeError – Raised if a server-side error is thrown saving the Categorical

Notes

  • If the file does not contain a File_Format attribute to indicate how it was saved, the file name is checked for _LOCALE#### to determine if it is distributed.

  • If the dataset provided does not exist, it will be added

  • Because HDF5 deletes do not release memory, the repack option allows for automatic creation of a file without the inaccessible data.

arkouda.DTypeObjects
arkouda.DTypes
class arkouda.DataFrame(initialdata=None, index=None, columns=None)[source]

Bases: collections.UserDict

A DataFrame structure based on arkouda arrays.

Parameters:
  • initialdata (List or dictionary of lists, tuples, or pdarrays) – Each list/dictionary entry corresponds to one column of the data and should be a homogeneous type. Different columns may have different types. If using a dictionary, keys should be strings.

  • index (Index, pdarray, or Strings) – Index for the resulting frame. Defaults to an integer range.

  • columns (List, tuple, pdarray, or Strings) – Column labels to use if the data does not include them. Elements must be strings. Defaults to a stringified integer range.

Examples

Create an empty DataFrame and add a column of data:

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame()
>>> df['a'] = ak.array([1,2,3])
>>> display(df)

   a
0  1
1  2
2  3

Create a new DataFrame using a dictionary of data:

>>> userName = ak.array(['Alice', 'Bob', 'Alice', 'Carol', 'Bob', 'Alice'])
>>> userID = ak.array([111, 222, 111, 333, 222, 111])
>>> item = ak.array([0, 0, 1, 1, 2, 0])
>>> day = ak.array([5, 5, 6, 5, 6, 6])
>>> amount = ak.array([0.5, 0.6, 1.1, 1.2, 4.3, 0.6])
>>> df = ak.DataFrame({'userName': userName, 'userID': userID,
...                    'item': item, 'day': day, 'amount': amount})
>>> display(df)

  userName  userID  item  day  amount
0    Alice     111     0    5     0.5
1      Bob     222     0    5     0.6
2    Alice     111     1    6     1.1
3    Carol     333     1    5     1.2
4      Bob     222     2    6     4.3
5    Alice     111     0    6     0.6

Indexing works slightly differently than with pandas:

>>> df[0]

keys      values
userName  Alice
userID    111
item      0
day       5
amount    0.5

>>> df['userID']
array([111, 222, 111, 333, 222, 111])
>>> df['userName']
array(['Alice', 'Bob', 'Alice', 'Carol', 'Bob', 'Alice'])
>>> df[ak.array([1,3,5])]

  userName  userID  item  day  amount
0      Bob     222     0    5     0.6
1    Carol     333     1    5     1.2
2    Alice     111     0    6     0.6

Slice with a stride:

>>> df[1:5:1]

  userName  userID  item  day  amount
0      Bob     222     0    5     0.6
1    Alice     111     1    6     1.1
2    Carol     333     1    5     1.2
3      Bob     222     2    6     4.3

>>> df[ak.array([1,2,3])]

  userName  userID  item  day  amount
0      Bob     222     0    5     0.6
1    Alice     111     1    6     1.1
2    Carol     333     1    5     1.2

>>> df[['userID', 'day']]

   userID  day
0     111    5
1     222    5
2     111    6
3     333    5
4     222    6
5     111    6

property columns

An Index where the values are the column names of the dataframe.

Returns:

The values of the index are the column names of the dataframe.

Return type:

arkouda.index.Index

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> df

   col1  col2
0     1     3
1     2     4

>>> df.columns
Index(array(['col1', 'col2']), dtype='<U0')
property dtypes

The dtypes of the dataframe.

Returns:

dtypes – The dtypes of the dataframe.

Return type:

arkouda.row.Row

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': ["a", "b"]})
>>> df

   col1  col2
0     1     a
1     2     b

>>> df.dtypes

keys  values
col1  int64
col2  str

property empty

Whether the dataframe is empty.

Returns:

True if the dataframe is empty, otherwise False.

Return type:

bool

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({})
>>> df
 0 rows x 0 columns
>>> df.empty
True
property index

The index of the dataframe.

Returns:

The index of the dataframe.

Return type:

arkouda.index.Index or arkouda.index.MultiIndex

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> df

   col1  col2
0     1     3
1     2     4

>>> df.index
Index(array([0 1]), dtype='int64')
property info

Returns a summary string of this dataframe.

Returns:

A summary string of this dataframe.

Return type:

str

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': ["a", "b"]})
>>> df

   col1  col2
0     1     a
1     2     b

>>> df.info
"DataFrame(['col1', 'col2'], 2 rows, 20 B)"
property shape

The shape of the dataframe.

Returns:

Tuple of array dimensions.

Return type:

tuple of int

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df

   col1  col2
0     1     4
1     2     5
2     3     6

>>> df.shape
(3, 2)
property size

Returns the number of bytes on the arkouda server.

Returns:

The number of bytes on the arkouda server.

Return type:

int

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df

   col1  col2
0     1     4
1     2     5
2     3     6

>>> df.size
6
objType = 'DataFrame'
GroupBy(keys, use_series=False, as_index=True, dropna=True)[source]

Group the dataframe by a column or a list of columns.

Parameters:
  • keys (str or list of str) – An (ordered) list of column names or a single string to group by.

  • use_series (bool, default=False) – If True, returns an arkouda.dataframe.GroupBy object. Otherwise an arkouda.groupbyclass.GroupBy object.

  • as_index (bool, default=True) – If True, the groupby columns will be set as the index; otherwise, the groupby columns will be treated as DataFrame columns.

  • dropna (bool, default=True) – If True, and the groupby keys contain NaN values, the NaN values together with the corresponding row will be dropped. Otherwise, the rows corresponding to NaN values will be kept.

Returns:

If use_series = True, returns an arkouda.dataframe.GroupBy object. Otherwise returns an arkouda.groupbyclass.GroupBy object.

Return type:

arkouda.dataframe.GroupBy or arkouda.groupbyclass.GroupBy

See also

arkouda.GroupBy

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame({'col1': [1.0, 1.0, 2.0, np.nan], 'col2': [4, 5, 6, 7]})
>>> df

   col1  col2
0     1     4
1     1     5
2     2     6
3   nan     7

>>> df.GroupBy("col1")
<arkouda.groupbyclass.GroupBy at 0x7f2cf23e10c0>
>>> df.GroupBy("col1").size()
(array([1.00000000000000000 2.00000000000000000]), array([2 1]))
>>> df.GroupBy("col1",use_series=True)
col1
1.0    2
2.0    1
dtype: int64
>>> df.GroupBy("col1",use_series=True, as_index = False).size()

   col1  size
0     1     2
1     2     1

all(axis=0) arkouda.series.Series | bool[source]

Return whether all elements are True, potentially over an axis.

Returns True unless there is at least one element along a DataFrame axis that is False.

Currently, will ignore any columns that are not type bool. This is equivalent to the pandas option bool_only=True.

Parameters:

axis ({0 or ‘index’, 1 or ‘columns’, None}, default = 0) –

Indicate which axis or axes should be reduced.

0 / ‘index’ : reduce the index, return a Series whose index is the original column labels.

1 / ‘columns’ : reduce the columns, return a Series whose index is the original index.

None : reduce all axes, return a scalar.

Return type:

arkouda.series.Series or bool

Raises:

ValueError – Raised if axis does not have a value in {0 or ‘index’, 1 or ‘columns’, None}.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({"A":[True,True,True,False],"B":[True,True,True,False],
...          "C":[True,False,True,False],"D":[True,True,True,True]})

>>> display(df)

       A      B      C     D
0   True   True   True  True
1   True   True  False  True
2   True   True   True  True
3  False  False  False  True

>>> df.all(axis=0)
A    False
B    False
C    False
D     True
dtype: bool
>>> df.all(axis=1)
0     True
1    False
2     True
3    False
dtype: bool
>>> df.all(axis=None)
False
any(axis=0) arkouda.series.Series | bool[source]

Return whether any element is True, potentially over an axis.

Returns False unless there is at least one element along a DataFrame axis that is True.

Currently, will ignore any columns that are not type bool. This is equivalent to the pandas option bool_only=True.

Parameters:

axis ({0 or ‘index’, 1 or ‘columns’, None}, default = 0) –

Indicate which axis or axes should be reduced.

0 / ‘index’ : reduce the index, return a Series whose index is the original column labels.

1 / ‘columns’ : reduce the columns, return a Series whose index is the original index.

None : reduce all axes, return a scalar.

Return type:

arkouda.series.Series or bool

Raises:

ValueError – Raised if axis does not have a value in {0 or ‘index’, 1 or ‘columns’, None}.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({"A":[True,True,True,False],"B":[True,True,True,False],
...          "C":[True,False,True,False],"D":[False,False,False,False]})

>>> display(df)

       A      B      C      D
0   True   True   True  False
1   True   True  False  False
2   True   True   True  False
3  False  False  False  False

>>> df.any(axis=0)
A     True
B     True
C     True
D    False
dtype: bool
>>> df.any(axis=1)
0     True
1     True
2     True
3    False
dtype: bool
>>> df.any(axis=None)
True
append(other, ordered=True)[source]

Concatenate data from ‘other’ onto the end of this DataFrame, in place.

Explicitly, use the arkouda concatenate function to append the data from each column in other to the end of self. This operation is done in place, in the sense that the underlying pdarrays are updated from the result of the arkouda concatenate function, rather than returning a new DataFrame object containing the result.

Parameters:
  • other (DataFrame) – The DataFrame object whose data will be appended to this DataFrame.

  • ordered (bool, default=True) – If False, allow rows to be interleaved for better performance (but data within a row remains together). By default, append all rows to the end, in input order.

Returns:

Appending occurs in-place, but result is returned for compatibility.

Return type:

self

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df1 = ak.DataFrame({'col1': [1, 2], 'col2': [3, 4]})

>>> display(df1)

   col1  col2
0     1     3
1     2     4

>>> df2 = ak.DataFrame({'col1': [3], 'col2': [5]})

>>> display(df2)

   col1  col2
0     3     5

>>> df1.append(df2)
>>> df1

   col1  col2
0     1     3
1     2     4
2     3     5

apply_permutation(perm)[source]

Apply a permutation to an entire DataFrame. The operation is done in place and the original DataFrame will be modified.

This may be useful if you want to unsort a DataFrame, or to apply an arbitrary permutation such as the inverse of a sorting permutation.

Parameters:

perm (pdarray) – A permutation array. Should be the same size as the data arrays, and should consist of the integers [0,size-1] in some order. Very minimal testing is done to ensure this is a permutation.

Return type:

None

See also

sort

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})

>>> display(df)

   col1  col2
0     1     4
1     2     5
2     3     6

>>> perm_arry = ak.array([0, 2, 1])
>>> df.apply_permutation(perm_arry)
>>> display(df)

   col1  col2
0     1     4
1     3     6
2     2     5

argsort(key, ascending=True)[source]

Return the permutation that sorts the dataframe by key.

Parameters:
  • key (str) – The key to sort on.

  • ascending (bool, default = True) – If true, sort the key in ascending order. Otherwise, sort the key in descending order.

Returns:

The permutation array that sorts the data on key.

Return type:

arkouda.pdarrayclass.pdarray

See also

coargsort

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1.1, 3.1, 2.1], 'col2': [6, 5, 4]})
>>> display(df)

   col1  col2
0   1.1     6
1   3.1     5
2   2.1     4

>>> df.argsort('col1')
array([0 2 1])
>>> sorted_df1 = df[df.argsort('col1')]
>>> display(sorted_df1)

   col1  col2
0   1.1     6
1   2.1     4
2   3.1     5

>>> df.argsort('col2')
array([2 1 0])
>>> sorted_df2 = df[df.argsort('col2')]
>>> display(sorted_df2)

   col1  col2
0   2.1     4
1   3.1     5
2   1.1     6

static attach(user_defined_name: str) DataFrame[source]

Return a DataFrame object attached to a name previously registered with the arkouda server using register().

Parameters:

user_defined_name (str) – user defined name which DataFrame object was registered under.

Returns:

The DataFrame object created by re-attaching to the corresponding server components.

Return type:

arkouda.dataframe.DataFrame

Raises:

RegistrationError – if user_defined_name is not registered

Example

>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df.register("my_table_name")
>>> df.attach("my_table_name")
>>> df.is_registered()
True
>>> df.unregister()
>>> df.is_registered()
False
coargsort(keys, ascending=True)[source]

Return the permutation that sorts the dataframe by keys.

Note: Sorting using Strings may not yield correct sort order.

Parameters:

keys (list of str) – The keys to sort on.

Returns:

The permutation array that sorts the data on keys.

Return type:

arkouda.pdarrayclass.pdarray

Example

>>> df = ak.DataFrame({'col1': [2, 2, 1], 'col2': [3, 4, 3], 'col3':[5, 6, 7]})
>>> display(df)

   col1  col2  col3
0     2     3     5
1     2     4     6
2     1     3     7

>>> df.coargsort(['col1', 'col2'])
array([2 0 1])
classmethod concat(items, ordered=True)[source]

Essentially an append, but with different formatting: the given DataFrames are combined into a single new DataFrame rather than one being modified in place (see the sketch below).
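
Example

A minimal sketch of the expected usage (assuming concat builds and returns a new DataFrame from the given list):

>>> df1 = ak.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> df2 = ak.DataFrame({'col1': [3], 'col2': [5]})
>>> df3 = ak.DataFrame.concat([df1, df2])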

copy(deep=True)[source]

Make a copy of this object’s data.

When deep = True (default), a new object will be created with a copy of the calling object’s data. Modifications to the data of the copy will not be reflected in the original object.

When deep = False a new object will be created without copying the calling object’s data. Any changes to the data of the original object will be reflected in the shallow copy, and vice versa.

Parameters:

deep (bool, default=True) – When True, return a deep copy. Otherwise, return a shallow copy.

Returns:

A deep or shallow copy according to caller specification.

Return type:

arkouda.dataframe.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> display(df)

   col1  col2
0     1     3
1     2     4

>>> df_deep = df.copy(deep=True)
>>> df_deep['col1'] +=1
>>> display(df)

   col1  col2
0     1     3
1     2     4

>>> df_shallow = df.copy(deep=False)
>>> df_shallow['col1'] +=1
>>> display(df)

   col1  col2
0     2     3
1     3     4

corr() DataFrame[source]

Return new DataFrame with pairwise correlation of columns.

Returns:

Arkouda DataFrame containing correlation matrix of all columns.

Return type:

arkouda.dataframe.DataFrame

Raises:

RuntimeError – Raised if there’s a server-side error thrown.

See also

pdarray.corr

Notes

Generates the correlation matrix using Pearson R for all columns.

Attempts to convert to numeric values where possible for inclusion in the matrix.

Example

>>> df = ak.DataFrame({'col1': [1, 2], 'col2': [-1, -2]})
>>> display(df)

   col1  col2
0     1    -1
1     2    -2

>>> corr = df.corr()
>>> display(corr)

      col1  col2
col1     1    -1
col2    -1     1

count(axis: int | str = 0, numeric_only=False) arkouda.series.Series[source]

Count non-NA cells for each column or row.

The values np.NaN are considered NA.

Parameters:
  • axis ({0 or 'index', 1 or 'columns'}, default 0) – If 0 or ‘index’ counts are generated for each column. If 1 or ‘columns’ counts are generated for each row.

  • numeric_only (bool = False) – Include only float, int or boolean data.

Returns:

For each column/row the number of non-NA/null entries.

Return type:

arkouda.series.Series

Raises:

ValueError – Raised if axis is not 0, 1, ‘index’, or ‘columns’.

See also

GroupBy.count

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame({'col_A': ak.array([7, np.nan]), 'col_B':ak.array([1, 9])})
>>> display(df)

   col_A  col_B
0      7      1
1    nan      9

>>> df.count()
col_A    1
col_B    2
dtype: int64
>>> df = ak.DataFrame({'col_A': ak.array(["a","b","c"]), 'col_B':ak.array([1, np.nan, np.nan])})
>>> display(df)

  col_A  col_B
0     a      1
1     b    nan
2     c    nan

>>> df.count()
col_A    3
col_B    1
dtype: int64
>>> df.count(numeric_only=True)
col_B    1
dtype: int64
>>> df.count(axis=1)
0    2
1    1
2    1
dtype: int64
drop(keys: str | int | List[str | int], axis: str | int = 0, inplace: bool = False) None | DataFrame[source]

Drop column/s or row/s from the dataframe.

Parameters:
  • keys (str, int or list) – The labels to be dropped on the given axis.

  • axis (int or str) – The axis on which to drop from. 0/’index’ - drop rows, 1/’columns’ - drop columns.

  • inplace (bool, default=False) – When True, perform the operation on the calling object. When False, return a new object.

Returns:

DataFrame when inplace=False; None when inplace=True.

Return type:

arkouda.dataframe.DataFrame or None

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> display(df)

   col1  col2
0     1     3
1     2     4

Drop column

>>> df.drop('col1', axis = 1)

   col2
0     3
1     4

Drop row

>>> df.drop(0, axis = 0)

   col1  col2
0     2     4
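
When inplace=True, the calling frame is modified and nothing is returned (an illustrative addition using the same frame):

>>> df.drop('col1', axis=1, inplace=True)
>>> display(df)

   col2
0     3
1     4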

drop_duplicates(subset=None, keep='first')[source]

Drop duplicated rows and return the resulting DataFrame.

If a subset of the columns is provided, duplicates are identified using only those columns, and only one instance of each duplicated row is returned (keep determines which row).

Parameters:
  • subset (Iterable) – Iterable of column names to use to dedupe.

  • keep ({'first', 'last'}, default='first') – Determines which duplicates (if any) to keep.

Returns:

DataFrame with duplicates removed.

Return type:

arkouda.dataframe.DataFrame

Example

>>> df = ak.DataFrame({'col1': [1, 2, 2, 3], 'col2': [4, 5, 5, 6]})
>>> display(df)

   col1  col2
0     1     4
1     2     5
2     2     5
3     3     6

>>> df.drop_duplicates()

   col1  col2
0     1     4
1     2     5
2     3     6

dropna(axis: int | str = 0, how: str | None = None, thresh: int | None = None, ignore_index: bool = False) DataFrame[source]

Remove missing values.

Parameters:
  • axis ({0 or 'index', 1 or 'columns'}, default = 0) –

    Determine if rows or columns which contain missing values are removed.

    0, or ‘index’: Drop rows which contain missing values.

    1, or ‘columns’: Drop columns which contain missing value.

    Only a single axis is allowed.

  • how ({'any', 'all'}, default='any') –

    Determine if row or column is removed from DataFrame, when we have at least one NA or all NA.

    ’any’: If any NA values are present, drop that row or column.

    ’all’: If all values are NA, drop that row or column.

  • thresh (int, optional) – Require that many non-NA values. Cannot be combined with how.

  • ignore_index (bool, default False) – If True, the resulting axis will be labeled 0, 1, …, n - 1.

Returns:

DataFrame with NA entries dropped from it.

Return type:

arkouda.dataframe.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame(
    {
        "A": [True, True, True, True],
        "B": [1, np.nan, 2, np.nan],
        "C": [1, 2, 3, np.nan],
        "D": [False, False, False, False],
        "E": [1, 2, 3, 4],
        "F": ["a", "b", "c", "d"],
        "G": [1, 2, 3, 4],
    }
   )
>>> display(df)

      A    B    C      D  E  F  G
0  True    1    1  False  1  a  1
1  True  nan    2  False  2  b  2
2  True    2    3  False  3  c  3
3  True  nan  nan  False  4  d  4

>>> df.dropna()

      A  B  C      D  E  F  G
0  True  1  1  False  1  a  1
1  True  2  3  False  3  c  3

>>> df.dropna(axis=1)

      A      D  E  F  G
0  True  False  1  a  1
1  True  False  2  b  2
2  True  False  3  c  3
3  True  False  4  d  4

>>> df.dropna(axis=1, thresh=3)

      A    C      D  E  F  G
0  True    1  False  1  a  1
1  True    2  False  2  b  2
2  True    3  False  3  c  3
3  True  nan  False  4  d  4

>>> df.dropna(axis=1, how="all")

      A    B    C      D  E  F  G
0  True    1    1  False  1  a  1
1  True  nan    2  False  2  b  2
2  True    2    3  False  3  c  3
3  True  nan  nan  False  4  d  4

filter_by_range(keys, low=1, high=None)[source]

Find all rows where the value count of the items in a given set of columns (keys) is within the range [low, high].

To filter by a specific value, set low == high.

Parameters:
  • keys (str or list of str) – The names of the columns to group by.

  • low (int, default=1) – The lowest value count.

  • high (int, default=None) – The highest value count, default to unlimited.

Returns:

An array of boolean values for qualified rows in this DataFrame.

Return type:

arkouda.pdarrayclass.pdarray

Example

>>> df = ak.DataFrame({'col1': [1, 2, 2, 2, 3, 3], 'col2': [4, 5, 6, 7, 8, 9]})
>>> display(df)

   col1  col2
0     1     4
1     2     5
2     2     6
3     2     7
4     3     8
5     3     9

>>> df.filter_by_range("col1", low=1, high=2)
array([True False False False True True])
>>> filtered_df = df[df.filter_by_range("col1", low=1, high=2)]
>>> display(filtered_df)

   col1  col2
0     1     4
1     3     8
2     3     9
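
To filter by an exact value count, set low == high; here only the value occurring exactly three times qualifies (an illustrative addition using the same frame):

>>> df.filter_by_range("col1", low=3, high=3)
array([False True True True False False])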

classmethod from_pandas(pd_df)[source]

Copy the data from a pandas DataFrame into a new arkouda.dataframe.DataFrame.

Parameters:

pd_df (pandas.DataFrame) – A pandas DataFrame to convert.

Return type:

arkouda.dataframe.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import pandas as pd
>>> pd_df = pd.DataFrame({"A":[1,2],"B":[3,4]})
>>> type(pd_df)
pandas.core.frame.DataFrame
>>> display(pd_df)

   A  B
0  1  3
1  2  4

>>> ak_df = DataFrame.from_pandas(pd_df)
>>> type(ak_df)
arkouda.dataframe.DataFrame
>>> display(ak_df)

   A  B
0  1  3
1  2  4

classmethod from_return_msg(rep_msg)[source]

Creates a DataFrame object from an arkouda server response message.

Parameters:

rep_msg (string) – Server response message used to create a DataFrame.

Return type:

arkouda.dataframe.DataFrame

groupby(keys, use_series=True, as_index=True, dropna=True)[source]

Group the dataframe by a column or a list of columns. Alias for GroupBy.

Parameters:
  • keys (str or list of str) – An (ordered) list of column names or a single string to group by.

  • use_series (bool, default=True) – If True, returns an arkouda.dataframe.GroupBy object. Otherwise an arkouda.groupbyclass.GroupBy object.

  • as_index (bool, default=True) – If True, groupby columns will be set as index; otherwise, the groupby columns will be treated as DataFrame columns.

  • dropna (bool, default=True) – If True, and the groupby keys contain NaN values, the NaN values together with the corresponding row will be dropped. Otherwise, the rows corresponding to NaN values will be kept.

Returns:

If use_series = True, returns an arkouda.dataframe.GroupBy object. Otherwise returns an arkouda.groupbyclass.GroupBy object.

Return type:

arkouda.dataframe.GroupBy or arkouda.groupbyclass.GroupBy

See also

arkouda.GroupBy

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame({'col1': [1.0, 1.0, 2.0, np.nan], 'col2': [4, 5, 6, 7]})
>>> df

   col1  col2
0     1     4
1     1     5
2     2     6
3   nan     7

>>> df.GroupBy("col1")
<arkouda.groupbyclass.GroupBy at 0x7f2cf23e10c0>
>>> df.GroupBy("col1").size()
(array([1.00000000000000000 2.00000000000000000]), array([2 1]))
>>> df.GroupBy("col1",use_series=True)
col1
1.0    2
2.0    1
dtype: int64
>>> df.GroupBy("col1",use_series=True, as_index = False).size()

   col1  size
0     1     2
1     2     1

head(n=5)[source]

Return the first n rows.

This function returns the first n rows of the dataframe. It is useful for quickly verifying data, for example, after sorting or appending rows.

Parameters:

n (int, default = 5) – Number of rows to select.

Returns:

The first n rows of the DataFrame.

Return type:

arkouda.dataframe.DataFrame

See also

tail

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': ak.arange(10), 'col2': -1 * ak.arange(10)})
>>> display(df)

   col1  col2
0     0     0
1     1    -1
2     2    -2
3     3    -3
4     4    -4
5     5    -5
6     6    -6
7     7    -7
8     8    -8
9     9    -9

>>> df.head()

   col1  col2
0     0     0
1     1    -1
2     2    -2
3     3    -3
4     4    -4

>>> df.head(n=2)

   col1  col2
0     0     0
1     1    -1

is_registered() bool[source]

Return True if the object is contained in the registry.

Returns:

Indicates if the object is contained in the registry.

Return type:

bool

Raises:

RegistrationError – Raised if there’s a server-side error or a mismatch of registered components.

Notes

Objects registered with the server are immune to deletion until they are unregistered.

Example

>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df.register("my_table_name")
>>> df.attach("my_table_name")
>>> df.is_registered()
True
>>> df.unregister()
>>> df.is_registered()
False
isin(values: arkouda.pdarrayclass.pdarray | Dict | arkouda.series.Series | DataFrame) DataFrame[source]

Determine whether each element in the DataFrame is contained in values.

Parameters:

values (pdarray, dict, Series, or DataFrame) – The values to check for in DataFrame. Series can only have a single index.

Returns:

Arkouda DataFrame of booleans showing whether each element in the DataFrame is contained in values.

Return type:

arkouda.dataframe.DataFrame

See also

ak.Series.isin

Notes

  • Pandas supports values being an iterable type. In arkouda, we replace this with pdarray.

  • Pandas supports ~ operations. Currently, ak.DataFrame does not support this.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col_A': ak.array([7, 3]), 'col_B':ak.array([1, 9])})
>>> display(df)

   col_A  col_B
0      7      1
1      3      9

When values is a pdarray, check every value in the DataFrame to determine if it exists in values.

>>> df.isin(ak.array([0, 1]))

   col_A  col_B
0      0      1
1      0      0

When values is a dict, the values in the dict are passed to check the column indicated by the key.

>>> df.isin({'col_A': ak.array([0, 3])})

   col_A  col_B
0      0      0
1      1      0

When values is a Series, each column is checked for whether the Series values are present at the same positions. This means that for True to be returned, the indexes must be the same.

>>> i = ak.Index(ak.arange(2))
>>> s = ak.Series(data=[3, 9], index=i)
>>> df.isin(s)

   col_A  col_B
0      0      0
1      0      1

When values is a DataFrame, the index and column must match. Note that 9 is not found because the column name does not match.

>>> other_df = ak.DataFrame({'col_A':ak.array([7, 3]), 'col_C':ak.array([0, 9])})
>>> df.isin(other_df)

   col_A  col_B
0      1      0
1      1      0

isna() DataFrame[source]

Detect missing values.

Return a boolean same-sized object indicating if the values are NA. numpy.NaN values get mapped to True values. Everything else gets mapped to False values.

Returns:

Mask of bool values for each element in DataFrame that indicates whether an element is an NA value.

Return type:

arkouda.dataframe.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame({"A": [np.nan, 2, 2, 3], "B": [3, np.nan, 5, 6],
...          "C": [1, np.nan, 2, np.nan], "D":["a","b","c","d"]})
>>> display(df)

     A    B    C  D
0  nan    3    1  a
1    2  nan  nan  b
2    2    5    2  c
3    3    6  nan  d

>>> df.isna()
       A      B      C      D
0   True  False  False  False
1  False   True   True  False
2  False  False  False  False
3  False  False   True  False (4 rows x 4 columns)
classmethod load(prefix_path, file_format='INFER')[source]

Load dataframe from file. file_format needed for consistency with other load functions.

Parameters:
  • prefix_path (str) – The prefix path for the data.

  • file_format (string, default = "INFER")

Returns:

A dataframe loaded from the prefix_path.

Return type:

arkouda.dataframe.DataFrame

Examples

To store data in <my_dir>/my_data_LOCALE0000, use “<my_dir>/my_data” as the prefix.

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'hdf5_output','my_data')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A": ak.arange(5), "B": -1 * ak.arange(5)})
>>> df.save(my_path, file_type="distribute")
>>> df.load(my_path)

   A   B
0  0   0
1  1  -1
2  2  -2
3  3  -3
4  4  -4

memory_usage(index=True, unit='B') arkouda.series.Series[source]

Return the memory usage of each column in bytes.

The memory usage can optionally include the contribution of the index.

Parameters:
  • index (bool, default True) – Specifies whether to include the memory usage of the DataFrame’s index in returned Series. If index=True, the memory usage of the index is the first item in the output.

  • unit (str, default = "B") – Unit to return. One of {‘B’, ‘KB’, ‘MB’, ‘GB’}.

Returns:

A Series whose index is the original column names and whose values are the memory usage of each column in the requested unit.

Return type:

Series

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> dtypes = [ak.int64, ak.float64,  ak.bool]
>>> data = dict([(str(t), ak.ones(5000, dtype=ak.int64).astype(t)) for t in dtypes])
>>> df = ak.DataFrame(data)
>>> display(df.head())

   int64  float64  bool
0      1        1  True
1      1        1  True
2      1        1  True
3      1        1  True
4      1        1  True

>>> df.memory_usage()

             0
Index    40000
int64    40000
float64  40000
bool      5000

>>> df.memory_usage(index=False)

             0
int64    40000
float64  40000
bool      5000

>>> df.memory_usage(unit="KB")

               0
Index    39.0625
int64    39.0625
float64  39.0625
bool     4.88281

To get the approximate total memory usage:

>>> df.memory_usage(index=True).sum()
125000
memory_usage_info(unit='GB')[source]

A formatted string representation of the size of this DataFrame.

Parameters:

unit (str, default = "GB") – Unit to return. One of {‘KB’, ‘MB’, ‘GB’}.

Returns:

A string representation of the number of bytes used by this DataFrame in [unit]s.

Return type:

str

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': ak.arange(1000), 'col2': ak.arange(1000)})
>>> df.memory_usage_info()
'0.00 GB'
>>> df.memory_usage_info(unit="KB")
'15 KB'
merge(right: DataFrame, on: str | List[str] | None = None, how: str = 'inner', left_suffix: str = '_x', right_suffix: str = '_y', convert_ints: bool = True, sort: bool = True) DataFrame[source]

Merge Arkouda DataFrames with a database-style join. The resulting dataframe contains rows from both DataFrames as specified by the merge condition (based on the “how” and “on” parameters).

Based on pandas merge functionality. https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html

Parameters:
  • right (DataFrame) – The Right DataFrame to be joined.

  • on (Optional[Union[str, List[str]]] = None) – The name or list of names of the DataFrame column(s) to join on. If on is None, this defaults to the intersection of the columns in both DataFrames.

  • how ({"inner", "left", "right}, default = "inner") – The merge condition. Must be “inner”, “left”, or “right”.

  • left_suffix (str, default = "_x") – A string indicating the suffix to add to columns from the left dataframe for overlapping column names in both left and right. Defaults to “_x”. Only used when how is “inner”.

  • right_suffix (str, default = "_y") – A string indicating the suffix to add to columns from the right dataframe for overlapping column names in both left and right. Defaults to “_y”. Only used when how is “inner”.

  • convert_ints (bool = True) – If True, convert columns with missing int values (due to the join) to float64. This is to match pandas. If False, do not convert the column dtypes. This has no effect when how = “inner”.

  • sort (bool = True) – If True, DataFrame is returned sorted by “on”. Otherwise, the DataFrame is not sorted.

Returns:

Joined Arkouda DataFrame.

Return type:

arkouda.dataframe.DataFrame

Note

Multiple column joins are only supported for integer columns.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> left_df = ak.DataFrame({'col1': ak.arange(5), 'col2': -1 * ak.arange(5)})
>>> display(left_df)

   col1  col2
0     0     0
1     1    -1
2     2    -2
3     3    -3
4     4    -4

>>> right_df = ak.DataFrame({'col1': 2 * ak.arange(5), 'col2': 2 * ak.arange(5)})
>>> display(right_df)

   col1  col2
0     0     0
1     2     2
2     4     4
3     6     6
4     8     8

>>> left_df.merge(right_df, on = "col1")

   col1  col2_x  col2_y
0     0       0       0
1     2      -2       2
2     4      -4       4

>>> left_df.merge(right_df, on = "col1", how = "left")

   col1  col2_y  col2_x
0     0       0       0
1     1     nan      -1
2     2       2      -2
3     3     nan      -3
4     4       4      -4

>>> left_df.merge(right_df, on = "col1", how = "right")

   col1  col2_x  col2_y
0     0       0       0
1     2      -2       2
2     4      -4       4
3     6     nan       6
4     8     nan       8

>>> left_df.merge(right_df, on = "col1", how = "outer")

   col1  col2_y  col2_x
0     0       0       0
1     1     nan      -1
2     2       2      -2
3     3     nan      -3
4     4       4      -4
5     6       6     nan
6     8       8     nan

notna() DataFrame[source]

Detect existing (non-missing) values.

Return a boolean same-sized object indicating if the values are not NA. numpy.NaN values get mapped to False values.

Returns:

Mask of bool values for each element in DataFrame that indicates whether an element is not an NA value.

Return type:

arkouda.dataframe.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame({"A": [np.nan, 2, 2, 3], "B": [3, np.nan, 5, 6],
...          "C": [1, np.nan, 2, np.nan], "D":["a","b","c","d"]})
>>> display(df)

     A    B    C  D
0  nan    3    1  a
1    2  nan  nan  b
2    2    5    2  c
3    3    6  nan  d

>>> df.notna()
       A      B      C     D
0  False   True   True  True
1   True  False  False  True
2   True   True   True  True
3   True   True  False  True (4 rows x 4 columns)
classmethod read_csv(filename: str, col_delim: str = ',')[source]

Read the columns of a CSV file into an Arkouda DataFrame. If the file contains the appropriately formatted header, typed data will be returned. Otherwise, all data will be returned as Strings objects.

Parameters:
  • filename (str) – Filename to read data from.

  • col_delim (str, default=",") – The delimiter for columns within the data.

Returns:

Arkouda DataFrame containing the columns from the CSV file.

Return type:

arkouda.dataframe.DataFrame

Raises:
  • ValueError – Raised if all datasets are not present in all parquet files or if one or more of the specified files do not exist.

  • RuntimeError – Raised if one or more of the specified files cannot be opened. If allow_errors is true this may be raised if no values are returned from the server.

  • TypeError – Raised if we receive an unknown arkouda_type returned from the server.

See also

to_csv

Notes

  • CSV format is not currently supported by load/load_all operations.

  • The column delimiter is expected to be the same for column names and data.

  • Be sure that column delimiters are not found within your data.

  • All CSV files must delimit rows using newline (”\n”) at this time.

  • Unlike other file formats, CSV files store Strings as their UTF-8 format instead of storing bytes as uint(8).

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'csv_output','my_data')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A":[1,2],"B":[3,4]})
>>> df.to_csv(my_path)
>>> df2 = DataFrame.read_csv(my_path + "_LOCALE0000")
>>> display(df2)

   A  B
0  1  3
1  2  4

register(user_defined_name: str) DataFrame[source]

Register this DataFrame object and underlying components with the Arkouda server.

Parameters:

user_defined_name (str) – User defined name the DataFrame is to be registered under. This will be the root name for underlying components.

Returns:

The same DataFrame which is now registered with the arkouda server and has an updated name. This is an in-place modification; the original is returned to support a fluid programming style. Please note you cannot register two different DataFrames with the same name.

Return type:

arkouda.dataframe.DataFrame

Raises:
  • TypeError – Raised if user_defined_name is not a str.

  • RegistrationError – If the server was unable to register the DataFrame with the user_defined_name.

Notes

Objects registered with the server are immune to deletion until they are unregistered.

Any changes made to a DataFrame object after registering with the server may not be reflected in attached copies.

Example

>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df.register("my_table_name")
>>> df.attach("my_table_name")
>>> df.is_registered()
True
>>> df.unregister()
>>> df.is_registered()
False
rename(mapper: Callable | Dict | None = None, index: Callable | Dict | None = None, column: Callable | Dict | None = None, axis: str | int = 0, inplace: bool = False) DataFrame | None[source]

Rename indexes or columns according to a mapping.

Parameters:
  • mapper (callable or dict-like, Optional) – Function or dictionary mapping existing values to new values. Nonexistent names will not raise an error. Uses the value of axis to determine if renaming column or index

  • column (callable or dict-like, Optional) – Function or dictionary mapping existing column names to new column names. Nonexistent names will not raise an error. When this is set, axis is ignored.

  • index (callable or dict-like, Optional) – Function or dictionary mapping existing index names to new index names. Nonexistent names will not raise an error. When this is set, axis is ignored.

  • axis (int or str, default=0) – Indicates which axis to perform the rename. 0/”index” - Indexes 1/”column” - Columns

  • inplace (bool, default=False) – When True, perform the operation on the calling object. When False, return a new object.

Returns:

DataFrame when inplace=False; None when inplace=True.

Return type:

arkouda.dataframe.DataFrame or None

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({"A": ak.array([1, 2, 3]), "B": ak.array([4, 5, 6])})
>>> display(df)

   A  B
0  1  4
1  2  5
2  3  6

Rename columns using a mapping:

>>> df.rename(column={'A':'a', 'B':'c'})

   a  c
0  1  4
1  2  5
2  3  6

Rename indexes using a mapping:

>>> df.rename(index={0:99, 2:11})

    A  B
99  1  4
1   2  5
11  3  6

Rename using an axis style parameter:

>>> df.rename(str.lower, axis='column')

   a  b
0  1  4
1  2  5
2  3  6

reset_index(size: int | None = None, inplace: bool = False) None | DataFrame[source]

Set the index to an integer range.

Useful if this dataframe is the result of a slice operation from another dataframe, or if you have permuted the rows and no longer need to keep that ordering on the rows.

Parameters:
  • size (int, optional) – If size is passed, do not attempt to determine size based on existing column sizes. Assume caller handles consistency correctly.

  • inplace (bool, default=False) – When True, perform the operation on the calling object. When False, return a new object.

Returns:

DataFrame when inplace=False; None when inplace=True.

Return type:

arkouda.dataframe.DataFrame or None

Note

Pandas adds a column ‘index’ to indicate the original index. Arkouda does not currently support this behavior.

Example

>>> df = ak.DataFrame({"A": ak.array([1, 2, 3]), "B": ak.array([4, 5, 6])})
>>> display(df)

   A  B
0  1  4
1  2  5
2  3  6

>>> perm_df = df[ak.array([0,2,1])]
>>> display(perm_df)

   A  B
0  1  4
2  3  6
1  2  5

>>> perm_df.reset_index()

   A  B
0  1  4
1  3  6
2  2  5

sample(n=5)[source]

Return a random sample of n rows.

Parameters:

n (int, default=5) – Number of rows to return.

Returns:

The sampled n rows of the DataFrame.

Return type:

arkouda.dataframe.DataFrame

Example

>>> df = ak.DataFrame({"A": ak.arange(5), "B": -1 * ak.arange(5)})
>>> display(df)

   A   B
0  0   0
1  1  -1
2  2  -2
3  3  -3
4  4  -4

Random output of size 3:

>>> df.sample(n=3)

   A   B
0  0   0
1  1  -1
2  4  -4

save(path, index=False, columns=None, file_format='HDF5', file_type='distribute', compression: str | None = None)[source]

DEPRECATED: Save DataFrame to disk, preserving column names (use to_parquet or to_hdf instead).

Parameters:
  • path (str) – File path to save data.

  • index (bool, default=False) – If True, save the index column. By default, do not save the index.

  • columns (list, default=None) – List of columns to include in the file. If None, writes out all columns.

  • file_format (str, default='HDF5') – ‘HDF5’ or ‘Parquet’. Defaults to ‘HDF5’

  • file_type (str, default="distribute") – "single" or "distribute". If "single", will write a single file to locale 0.

  • compression (str (Optional)) – (None | “snappy” | “gzip” | “brotli” | “zstd” | “lz4”) Compression type. Only used for Parquet

Notes

This method saves one file per locale of the arkouda server. All files are prefixed by the path argument and suffixed by their locale number.

See also

to_parquet, to_hdf

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'hdf5_output')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A": ak.arange(5), "B": -1 * ak.arange(5)})
>>> df.save(my_path + '/my_data', file_type="single")
>>> df.load(my_path + '/my_data')

   A   B
0  0   0
1  1  -1
2  2  -2
3  3  -3
4  4  -4

sort_index(ascending=True)[source]

Sort the DataFrame by indexed columns.

Note: Fails on sort order of arkouda.strings.Strings columns when multiple columns being sorted.

Parameters:

ascending (bool, default = True) – Sort values in ascending (default) or descending order.

Example

>>> df = ak.DataFrame({'col1': [1.1, 3.1, 2.1], 'col2': [6, 5, 4]},
...          index = Index(ak.array([2,0,1]), name="idx"))
>>> display(df)

     col1  col2
idx
2     1.1     6
0     3.1     5
1     2.1     4

>>> df.sort_index()

     col1  col2
idx
0     3.1     5
1     2.1     4
2     1.1     6
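
Descending order is also supported (an illustrative addition using the same frame):

>>> df.sort_index(ascending=False)

     col1  col2
idx
2     1.1     6
1     2.1     4
0     3.1     5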

sort_values(by=None, ascending=True)[source]

Sort the DataFrame by one or more columns.

If no column is specified, all columns are used.

Note: Fails on order of arkouda.strings.Strings columns when multiple columns being sorted.

Parameters:
  • by (str or list/tuple of str, default = None) – The name(s) of the column(s) to sort by.

  • ascending (bool, default = True) – Sort values in ascending (default) or descending order.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [2, 2, 1], 'col2': [3, 4, 3], 'col3':[5, 6, 7]})
>>> display(df)

   col1  col2  col3
0     2     3     5
1     2     4     6
2     1     3     7

>>> df.sort_values()

   col1  col2  col3
0     1     3     7
1     2     3     5
2     2     4     6

>>> df.sort_values("col3")

   col1  col2  col3
0     2     3     5
1     2     4     6
2     1     3     7
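
Descending order is also supported (an illustrative addition):

>>> df.sort_values("col3", ascending=False)

   col1  col2  col3
0     1     3     7
1     2     4     6
2     2     3     5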

tail(n=5)[source]

Return the last n rows.

This function returns the last n rows of the dataframe. It is useful for quickly testing if your object has the right type of data in it.

Parameters:

n (int, default=5) – Number of rows to select.

Returns:

The last n rows of the DataFrame.

Return type:

arkouda.dataframe.DataFrame

See also

arkouda.dataframe.head

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': ak.arange(10), 'col2': -1 * ak.arange(10)})
>>> display(df)

   col1  col2
0     0     0
1     1    -1
2     2    -2
3     3    -3
4     4    -4
5     5    -5
6     6    -6
7     7    -7
8     8    -8
9     9    -9

>>> df.tail()

   col1  col2
0     5    -5
1     6    -6
2     7    -7
3     8    -8
4     9    -9

>>> df.tail(n=2)

   col1  col2
0     8    -8
1     9    -9

to_csv(path: str, index: bool = False, columns: List[str] | None = None, col_delim: str = ',', overwrite: bool = False)[source]

Writes DataFrame to CSV file(s). File will contain a column for each column in the DataFrame. All CSV Files written by Arkouda include a header denoting data types of the columns. Unlike other file formats, CSV files store Strings as their UTF-8 format instead of storing bytes as uint(8).

Parameters:
  • path (str) – The filename prefix to be used for saving files. Files will have _LOCALE#### appended when they are written to disk.

  • index (bool, default=False) – If True, the index of the DataFrame will be written to the file as a column.

  • columns (list of str (Optional)) – Column names to assign when writing data.

  • col_delim (str, default=",") – Value to be used to separate columns within the file. Please be sure that the value used DOES NOT appear in your dataset.

  • overwrite (bool, default=False) – If True, any existing files matching your provided prefix_path will be overwritten. If False, an error will be returned if existing files are found.

Return type:

None

Raises:
  • ValueError – Raised if all datasets are not present in all parquet files or if one or more of the specified files do not exist.

  • RuntimeError – Raised if one or more of the specified files cannot be opened. If allow_errors is true this may be raised if no values are returned from the server.

  • TypeError – Raised if we receive an unknown arkouda_type returned from the server.

Notes

  • CSV format is not currently supported by load/load_all operations.

  • The column delimiter is expected to be the same for column names and data.

  • Be sure that column delimiters are not found within your data.

  • All CSV files must delimit rows using newline (”\n”) at this time.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'csv_output')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A":[1,2],"B":[3,4]})
>>> df.to_csv(my_path + "/my_data")
>>> df2 = DataFrame.read_csv(my_path + "/my_data" + "_LOCALE0000")
>>> display(df2)

   A  B
0  1  3
1  2  4

to_hdf(path, index=False, columns=None, file_type='distribute')[source]

Save DataFrame to disk as hdf5, preserving column names.

Parameters:
  • path (str) – File path to save data.

  • index (bool, default=False) – If True, save the index column. By default, do not save the index.

  • columns (List, default = None) – List of columns to include in the file. If None, writes out all columns.

  • file_type (str (single | distribute), default=distribute) – Whether to save to a single file or distribute across Locales.

Return type:

None

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray.

Notes

This method saves one file per locale of the arkouda server. All files are prefixed by the path argument and suffixed by their locale number.

See also

to_parquet, load

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'hdf_output')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A":[1,2],"B":[3,4]})
>>> df.to_hdf(my_path + "/my_data")
>>> df.load(my_path + "/my_data")

   A  B
0  1  3
1  2  4

to_markdown(mode='wt', index=True, tablefmt='grid', storage_options=None, **kwargs)[source]

Print DataFrame in Markdown-friendly format.

Parameters:
  • mode (str, optional) – Mode in which file is opened, “wt” by default.

  • index (bool, optional, default True) – Add index (row) labels.

  • tablefmt (str = "grid") – Table format to call from tabulate: https://pypi.org/project/tabulate/

  • storage_options (dict, optional) – Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc., if using a URL that will be parsed by fsspec, e.g., starting “s3://”, “gcs://”. An error will be raised if providing this argument with a non-fsspec URL. See the fsspec and backend storage implementation docs for the set of allowed keys and values.

  • **kwargs – These parameters will be passed to tabulate.

Note

This function should only be called on small DataFrames as it calls pandas.DataFrame.to_markdown: https://pandas.pydata.org/pandas-docs/version/1.2.4/reference/api/pandas.DataFrame.to_markdown.html

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({"animal_1": ["elk", "pig"], "animal_2": ["dog", "quetzal"]})
>>> print(df.to_markdown())
+----+------------+------------+
|    | animal_1   | animal_2   |
+====+============+============+
|  0 | elk        | dog        |
+----+------------+------------+
|  1 | pig        | quetzal    |
+----+------------+------------+

Suppress the index:

>>> print(df.to_markdown(index = False))
+------------+------------+
| animal_1   | animal_2   |
+============+============+
| elk        | dog        |
+------------+------------+
| pig        | quetzal    |
+------------+------------+
to_pandas(datalimit=maxTransferBytes, retain_index=False)[source]

Send this DataFrame to a pandas DataFrame.

Parameters:
  • datalimit (int, default=arkouda.client.maxTransferBytes) – The maximum size, in megabytes, to transfer. The requested DataFrame will be converted to a pandas DataFrame only if the estimated size of the DataFrame does not exceed this value.

  • retain_index (bool, default=False) – Normally, to_pandas() creates a new range index object. If you want to keep the index column, set this to True.

Returns:

The result of converting this DataFrame to a pandas DataFrame.

Return type:

pandas.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> ak_df = ak.DataFrame({"A": ak.arange(2), "B": -1 * ak.arange(2)})
>>> type(ak_df)
arkouda.dataframe.DataFrame
>>> display(ak_df)

   A   B
0  0   0
1  1  -1

>>> import pandas as pd
>>> pd_df = ak_df.to_pandas()
>>> type(pd_df)
pandas.core.frame.DataFrame
>>> display(pd_df)

   A   B
0  0   0
1  1  -1

to_parquet(path, index=False, columns=None, compression: str | None = None, convert_categoricals: bool = False)[source]

Save DataFrame to disk as parquet, preserving column names.

Parameters:
  • path (str) – File path to save data.

  • index (bool, default=False) – If True, save the index column. By default, do not save the index.

  • columns (list) – List of columns to include in the file. If None, writes out all columns.

  • compression (str (Optional), default=None) – Provide the compression type to use when writing the file. Supported values: snappy, gzip, brotli, zstd, lz4

  • convert_categoricals (bool, default=False) – Parquet requires all columns to be the same size and Categoricals don’t satisfy that requirement. If set, write the equivalent Strings in place of any Categorical columns.

Return type:

None

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray

Notes

This method saves one file per locale of the arkouda server. All files are prefixed by the path argument and suffixed by their locale number.

See also

to_hdf, load

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'parquet_output')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A":[1,2],"B":[3,4]})
>>> df.to_parquet(my_path + "/my_data")
>>> df.load(my_path + "/my_data")

   B  A
0  3  1
1  4  2

transfer(hostname, port)[source]

Sends a DataFrame to a different Arkouda server.

Parameters:
  • hostname (str) – The hostname where the Arkouda server intended to receive the DataFrame is running.

  • port (int_scalars) – The port to send the array over. This needs to be an open port (i.e., not one that the Arkouda server is running on). This will open up numLocales ports, each of which in succession, so will use ports of the range {port..(port+numLocales)} (e.g., running an Arkouda server of 4 nodes, port 1234 is passed as port, Arkouda will use ports 1234, 1235, 1236, and 1237 to send the array data). This port much match the port passed to the call to ak.receive_array().

Returns:

A message indicating a complete transfer.

Return type:

str

Raises:
  • ValueError – Raised if the op is not within the pdarray.BinOps set

  • TypeError – Raised if other is not a pdarray or the pdarray.dtype is not a supported dtype
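
Example

A hedged sketch of the sending side only; 'other-host' and port 1234 are placeholders, and the matching receive call (referenced above as ak.receive_array()) must already be waiting on the destination server:

>>> df = ak.DataFrame({'col1': ak.arange(4), 'col2': -1 * ak.arange(4)})
>>> df.transfer('other-host', 1234)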

unregister()[source]

Unregister this DataFrame object from the arkouda server, where it was previously registered using register() and/or attach().

Raises:

RegistrationError – If the object is already unregistered or if there is a server error when attempting to unregister.

Notes

Objects registered with the server are immune to deletion until they are unregistered.

Example

>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df.register("my_table_name")
>>> df.attach("my_table_name")
>>> df.is_registered()
True
>>> df.unregister()
>>> df.is_registered()
False
static unregister_dataframe_by_name(user_defined_name: str) str[source]

Unregister a DataFrame object by name, where the object was previously registered with the arkouda server via register().

Parameters:

user_defined_name (str) – Name under which the DataFrame object was registered.

Raises:
  • TypeError – If user_defined_name is not a string.

  • RegistrationError – If there is an issue attempting to unregister any underlying components.

Example

>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df.register("my_table_name")
>>> df.attach("my_table_name")
>>> df.is_registered()
True
>>> df.unregister_dataframe_by_name("my_table_name")
>>> df.is_registered()
False
update_hdf(prefix_path: str, index=False, columns=None, repack: bool = True)[source]

Overwrite the dataset with the provided name using this dataframe. If the dataset does not exist, it is added.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share.

  • index (bool, default=False) – If True, save the index column. By default, do not save the index.

  • columns (List, default=None) – List of columns to include in the file. If None, writes out all columns.

  • repack (bool, default=True) – HDF5 does not release memory on delete. When True, the inaccessible data (that was overwritten) is removed. When False, the data remains, but is inaccessible. Setting to false will yield better performance, but will cause file sizes to expand.

Returns:

Success message if successful.

Return type:

str

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray.

Notes

If the file does not contain a File_Format attribute indicating how it was saved, the file name is checked for _LOCALE#### to determine whether it is distributed.

If the dataset provided does not exist, it will be added.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'hdf_output')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A":[1,2],"B":[3,4]})
>>> df.to_hdf(my_path + "/my_data")
>>> df.load(my_path + "/my_data")

   A  B
0  1  3
1  2  4

>>> df2 = ak.DataFrame({"A":[5,6],"B":[7,8]})
>>> df2.update_hdf(my_path + "/my_data")
>>> df.load(my_path + "/my_data")

   A  B
0  5  7
1  6  8

update_nrows()[source]

Computes the number of rows on the arkouda server and updates the size parameter.
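
Example

A small illustrative sketch; that len(df) reflects the refreshed row count is an assumption here:

>>> df = ak.DataFrame({'col1': ak.arange(4)})
>>> df.update_nrows()
>>> len(df)
4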

class arkouda.DataFrame(initialdata=None, index=None, columns=None)[source]

Bases: collections.UserDict

A DataFrame structure based on arkouda arrays.

Parameters:
  • initialdata (List or dictionary of lists, tuples, or pdarrays) – Each list/dictionary entry corresponds to one column of the data and should be a homogenous type. Different columns may have different types. If using a dictionary, keys should be strings.

  • index (Index, pdarray, or Strings) – Index for the resulting frame. Defaults to an integer range.

  • columns (List, tuple, pdarray, or Strings) – Column labels to use if the data does not include them. Elements must be strings. Defaults to a stringified integer range.

Examples

Create an empty DataFrame and add a column of data:

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame()
>>> df['a'] = ak.array([1,2,3])
>>> display(df)

   a
0  1
1  2
2  3

Create a new DataFrame using a dictionary of data:

>>> userName = ak.array(['Alice', 'Bob', 'Alice', 'Carol', 'Bob', 'Alice'])
>>> userID = ak.array([111, 222, 111, 333, 222, 111])
>>> item = ak.array([0, 0, 1, 1, 2, 0])
>>> day = ak.array([5, 5, 6, 5, 6, 6])
>>> amount = ak.array([0.5, 0.6, 1.1, 1.2, 4.3, 0.6])
>>> df = ak.DataFrame({'userName': userName, 'userID': userID,
...            'item': item, 'day': day, 'amount': amount})
>>> display(df)

  userName  userID  item  day  amount
0    Alice     111     0    5     0.5
1      Bob     222     0    5     0.6
2    Alice     111     1    6     1.1
3    Carol     333     1    5     1.2
4      Bob     222     2    6     4.3
5    Alice     111     0    6     0.6

Indexing works slightly differently than with pandas:

>>> df[0]

keys      values
userName  Alice
userID    111
item      0
day       5
amount    0.5

>>> df['userID']
array([111, 222, 111, 333, 222, 111])
>>> df['userName']
array(['Alice', 'Bob', 'Alice', 'Carol', 'Bob', 'Alice'])
>>> df[ak.array([1,3,5])]

  userName  userID  item  day  amount
0      Bob     222     0    5     0.6
1    Carol     333     1    5     1.2
2    Alice     111     0    6     0.6

Slice with a stride:

>>> df[1:5:1]

  userName  userID  item  day  amount
0      Bob     222     0    5     0.6
1    Alice     111     1    6     1.1
2    Carol     333     1    5     1.2
3      Bob     222     2    6     4.3

>>> df[ak.array([1,2,3])]

  userName  userID  item  day  amount
0      Bob     222     0    5     0.6
1    Alice     111     1    6     1.1
2    Carol     333     1    5     1.2

>>> df[['userID', 'day']]

   userID  day
0     111    5
1     222    5
2     111    6
3     333    5
4     222    6
5     111    6

property columns

An Index where the values are the column names of the dataframe.

Returns:

The values of the index are the column names of the dataframe.

Return type:

arkouda.index.Index

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> df

   col1  col2
0     1     3
1     2     4

>>> df.columns
Index(array(['col1', 'col2']), dtype='<U0')
property dtypes

The dtypes of the dataframe.

Returns:

dtypes – The dtypes of the dataframe.

Return type:

arkouda.row.Row

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': ["a", "b"]})
>>> df

   col1  col2
0     1     a
1     2     b

>>> df.dtypes

keys  values
col1  int64
col2  str

property empty

Whether the dataframe is empty.

Returns:

True if the dataframe is empty, otherwise False.

Return type:

bool

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({})
>>> df
 0 rows x 0 columns
>>> df.empty
True
property index

The index of the dataframe.

Returns:

The index of the dataframe.

Return type:

arkouda.index.Index or arkouda.index.MultiIndex

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> df

   col1  col2
0     1     3
1     2     4

>>> df.index
Index(array([0 1]), dtype='int64')
property info

Returns a summary string of this dataframe.

Returns:

A summary string of this dataframe.

Return type:

str

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': ["a", "b"]})
>>> df

   col1  col2
0     1     a
1     2     b

>>> df.info
"DataFrame(['col1', 'col2'], 2 rows, 20 B)"
property shape

The shape of the dataframe.

Returns:

Tuple of array dimensions.

Return type:

tuple of int

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df

   col1  col2
0     1     4
1     2     5
2     3     6

>>> df.shape
(3, 2)
property size

Returns the number of elements in the dataframe (rows times columns).

Returns:

The number of elements in the dataframe.

Return type:

int

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df

   col1  col2
0     1     4
1     2     5
2     3     6

>>> df.size
6
objType = 'DataFrame'
GroupBy(keys, use_series=False, as_index=True, dropna=True)[source]

Group the dataframe by a column or a list of columns.

Parameters:
  • keys (str or list of str) – An (ordered) list of column names or a single string to group by.

  • use_series (bool, default=False) – If True, returns an arkouda.dataframe.GroupBy object. Otherwise an arkouda.groupbyclass.GroupBy object.

  • as_index (bool, default=True) – If True, groupby columns will be set as index; otherwise, the groupby columns will be treated as DataFrame columns.

  • dropna (bool, default=True) – If True, and the groupby keys contain NaN values, the NaN values together with the corresponding row will be dropped. Otherwise, the rows corresponding to NaN values will be kept.

Returns:

If use_series = True, returns an arkouda.dataframe.GroupBy object. Otherwise returns an arkouda.groupbyclass.GroupBy object.

Return type:

arkouda.dataframe.GroupBy or arkouda.groupbyclass.GroupBy

See also

arkouda.GroupBy

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame({'col1': [1.0, 1.0, 2.0, np.nan], 'col2': [4, 5, 6, 7]})
>>> df

   col1  col2
0     1     4
1     1     5
2     2     6
3   nan     7

>>> df.GroupBy("col1")
<arkouda.groupbyclass.GroupBy at 0x7f2cf23e10c0>
>>> df.GroupBy("col1").size()
(array([1.00000000000000000 2.00000000000000000]), array([2 1]))
>>> df.GroupBy("col1",use_series=True)
col1
1.0    2
2.0    1
dtype: int64
>>> df.GroupBy("col1",use_series=True, as_index = False).size()

   col1  size
0     1     2
1     2     1

all(axis=0) arkouda.series.Series | bool[source]

Return whether all elements are True, potentially over an axis.

Returns True unless there is at least one element along a Dataframe axis that is False.

Currently, will ignore any columns that are not type bool. This is equivalent to the pandas option bool_only=True.

Parameters:

axis ({0 or ‘index’, 1 or ‘columns’, None}, default = 0) –

Indicate which axis or axes should be reduced.

0 / ‘index’ : reduce the index, return a Series whose index is the original column labels.

1 / ‘columns’ : reduce the columns, return a Series whose index is the original index.

None : reduce all axes, return a scalar.

Return type:

arkouda.series.Series or bool

Raises:

ValueError – Raised if axis does not have a value in {0 or ‘index’, 1 or ‘columns’, None}.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({"A":[True,True,True,False],"B":[True,True,True,False],
...          "C":[True,False,True,False],"D":[True,True,True,True]})

>>> display(df)

       A      B      C     D
0   True   True   True  True
1   True   True  False  True
2   True   True   True  True
3  False  False  False  True

>>> df.all(axis=0)
A    False
B    False
C    False
D     True
dtype: bool
>>> df.all(axis=1)
0     True
1    False
2     True
3    False
dtype: bool
>>> df.all(axis=None)
False
any(axis=0) arkouda.series.Series | bool[source]

Return whether any element is True, potentially over an axis.

Returns False unless there is at least one element along a Dataframe axis that is True.

Currently, will ignore any columns that are not type bool. This is equivalent to the pandas option bool_only=True.

Parameters:

axis ({0 or ‘index’, 1 or ‘columns’, None}, default = 0) –

Indicate which axis or axes should be reduced.

0 / ‘index’ : reduce the index, return a Series whose index is the original column labels.

1 / ‘columns’ : reduce the columns, return a Series whose index is the original index.

None : reduce all axes, return a scalar.

Return type:

arkouda.series.Series or bool

Raises:

ValueError – Raised if axis does not have a value in {0 or ‘index’, 1 or ‘columns’, None}.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({"A":[True,True,True,False],"B":[True,True,True,False],
...          "C":[True,False,True,False],"D":[False,False,False,False]})

>>> display(df)

       A      B      C      D
0   True   True   True  False
1   True   True  False  False
2   True   True   True  False
3  False  False  False  False

>>> df.any(axis=0)
A     True
B     True
C     True
D    False
dtype: bool
>>> df.any(axis=1)
0     True
1     True
2     True
3    False
dtype: bool
>>> df.any(axis=None)
True
append(other, ordered=True)[source]

Concatenate data from ‘other’ onto the end of this DataFrame, in place.

Explicitly, use the arkouda concatenate function to append the data from each column in other to the end of self. This operation is done in place, in the sense that the underlying pdarrays are updated from the result of the arkouda concatenate function, rather than returning a new DataFrame object containing the result.

Parameters:
  • other (DataFrame) – The DataFrame object whose data will be appended to this DataFrame.

  • ordered (bool, default=True) – If False, allow rows to be interleaved for better performance (but data within a row remains together). By default, append all rows to the end, in input order.

Returns:

Appending occurs in-place, but result is returned for compatibility.

Return type:

self

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df1 = ak.DataFrame({'col1': [1, 2], 'col2': [3, 4]})

>>> display(df1)

   col1  col2
0     1     3
1     2     4

>>> df2 = ak.DataFrame({'col1': [3], 'col2': [5]})

>>> display(df2)

   col1  col2
0     3     5

>>> df1.append(df2)
>>> df1

   col1  col2
0     1     3
1     2     4
2     3     5

apply_permutation(perm)[source]

Apply a permutation to an entire DataFrame. The operation is done in place and the original DataFrame will be modified.

This may be useful if you want to unsort a DataFrame, or even to apply an arbitrary permutation such as the inverse of a sorting permutation.

Parameters:

perm (pdarray) – A permutation array. Should be the same size as the data arrays, and should consist of the integers [0,size-1] in some order. Very minimal testing is done to ensure this is a permutation.

Return type:

None

See also

sort

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})

>>> display(df)

   col1  col2
0     1     4
1     2     5
2     3     6

>>> perm_arry = ak.array([0, 2, 1])
>>> df.apply_permutation(perm_arry)
>>> display(df)

   col1  col2
0     1     4
1     3     6
2     2     5
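
To undo the permutation, build and apply its inverse by scattering positions through the permutation (a short sketch; here the permutation happens to be its own inverse):

>>> inv = ak.zeros(perm_arry.size, dtype=ak.int64)
>>> inv[perm_arry] = ak.arange(perm_arry.size)
>>> df.apply_permutation(inv)
>>> display(df)

   col1  col2
0     1     4
1     2     5
2     3     6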

argsort(key, ascending=True)[source]

Return the permutation that sorts the dataframe by key.

Parameters:
  • key (str) – The key to sort on.

  • ascending (bool, default = True) – If true, sort the key in ascending order. Otherwise, sort the key in descending order.

Returns:

The permutation array that sorts the data on key.

Return type:

arkouda.pdarrayclass.pdarray

See also

coargsort

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1.1, 3.1, 2.1], 'col2': [6, 5, 4]})
>>> display(df)

col1

col2

0

1.1

6

1

3.1

5

2

2.1

4

>>> df.argsort('col1')
array([0 2 1])
>>> sorted_df1 = df[df.argsort('col1')]
>>> display(sorted_df1)

col1

col2

0

1.1

6

1

2.1

4

2

3.1

5

>>> df.argsort('col2')
array([2 1 0])
>>> sorted_df2 = df[df.argsort('col2')]
>>> display(sorted_df2)

col1

col2

0

2.1

4

1

3.1

5

2

1.1

6

static attach(user_defined_name: str) DataFrame[source]

Function to return a DataFrame object attached to the registered name in the arkouda server which was registered using register().

Parameters:

user_defined_name (str) – user defined name which DataFrame object was registered under.

Returns:

The DataFrame object created by re-attaching to the corresponding server components.

Return type:

arkouda.dataframe.DataFrame

Raises:

RegistrationError – if user_defined_name is not registered

Example

>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df.register("my_table_name")
>>> df.attach("my_table_name")
>>> df.is_registered()
True
>>> df.unregister()
>>> df.is_registered()
False
coargsort(keys, ascending=True)[source]

Return the permutation that sorts the dataframe by keys.

Note: Sorting using Strings may not yield correct sort order.

Parameters:

keys (list of str) – The keys to sort on.

Returns:

The permutation array that sorts the data on keys.

Return type:

arkouda.pdarrayclass.pdarray

Example

>>> df = ak.DataFrame({'col1': [2, 2, 1], 'col2': [3, 4, 3], 'col3':[5, 6, 7]})
>>> display(df)

col1

col2

col3

0

2

3

5

1

2

4

6

2

1

3

7

>>> df.coargsort(['col1', 'col2'])
array([2 0 1])
>>>
classmethod concat(items, ordered=True)[source]

Essentially an append, but different formatting.

copy(deep=True)[source]

Make a copy of this object’s data.

When deep = True (default), a new object will be created with a copy of the calling object’s data. Modifications to the data of the copy will not be reflected in the original object.

When deep = False a new object will be created without copying the calling object’s data. Any changes to the data of the original object will be reflected in the shallow copy, and vice versa.

Parameters:

deep (bool, default=True) – When True, return a deep copy. Otherwise, return a shallow copy.

Returns:

A deep or shallow copy according to caller specification.

Return type:

arkouda.dataframe.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> display(df)

col1

col2

0

1

3

1

2

4

>>> df_deep = df.copy(deep=True)
>>> df_deep['col1'] +=1
>>> display(df)

col1

col2

0

1

3

1

2

4

>>> df_shallow = df.copy(deep=False)
>>> df_shallow['col1'] +=1
>>> display(df)

col1

col2

0

2

3

1

3

4

corr() DataFrame[source]

Return new DataFrame with pairwise correlation of columns.

Returns:

Arkouda DataFrame containing correlation matrix of all columns.

Return type:

arkouda.dataframe.DataFrame

Raises:

RuntimeError – Raised if there’s a server-side error thrown.

See also

pdarray.corr

Notes

Generates the correlation matrix using Pearson R for all columns.

Attempts to convert to numeric values where possible for inclusion in the matrix.

Example

>>> df = ak.DataFrame({'col1': [1, 2], 'col2': [-1, -2]})
>>> display(df)

col1

col2

0

1

-1

1

2

-2

>>> corr = df.corr()

col1

col2

col1

1

-1

col2

-1

1

count(axis: int | str = 0, numeric_only=False) arkouda.series.Series[source]

Count non-NA cells for each column or row.

The values np.NaN are considered NA.

Parameters:
  • axis ({0 or 'index', 1 or 'columns'}, default 0) – If 0 or ‘index’ counts are generated for each column. If 1 or ‘columns’ counts are generated for each row.

  • numeric_only (bool = False) – Include only float, int or boolean data.

Returns:

For each column/row the number of non-NA/null entries.

Return type:

arkouda.series.Series

Raises:

ValueError – Raised if axis is not 0, 1, ‘index’, or ‘columns’.

See also

GroupBy.count

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame({'col_A': ak.array([7, np.nan]), 'col_B':ak.array([1, 9])})
>>> display(df)

col_A

col_B

0

7

1

1

nan

9

>>> df.count()
col_A    1
col_B    2
dtype: int64
>>> df = ak.DataFrame({'col_A': ak.array(["a","b","c"]), 'col_B':ak.array([1, np.nan, np.nan])})
>>> display(df)

col_A

col_B

0

a

1

1

b

nan

2

c

nan

>>> df.count()
col_A    3
col_B    1
dtype: int64
>>> df.count(numeric_only=True)
col_B    1
dtype: int64
>>> df.count(axis=1)
0    2
1    1
2    1
dtype: int64
drop(keys: str | int | List[str | int], axis: str | int = 0, inplace: bool = False) None | DataFrame[source]

Drop column/s or row/s from the dataframe.

Parameters:
  • keys (str, int or list) – The labels to be dropped on the given axis.

  • axis (int or str) – The axis on which to drop from. 0/’index’ - drop rows, 1/’columns’ - drop columns.

  • inplace (bool, default=False) – When True, perform the operation on the calling object. When False, return a new object.

Returns:

DateFrame when inplace=False; None when inplace=True

Return type:

arkouda.dataframe.DataFrame or None

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> display(df)

col1

col2

0

1

3

1

2

4

Drop column

>>> df.drop('col1', axis = 1)

col2

0

3

1

4

Drop row

>>> df.drop(0, axis = 0)

col1

col2

0

2

4

drop_duplicates(subset=None, keep='first')[source]

Drops duplcated rows and returns resulting DataFrame.

If a subset of the columns are provided then only one instance of each duplicated row will be returned (keep determines which row).

Parameters:
  • subset (Iterable) – Iterable of column names to use to dedupe.

  • keep ({'first', 'last'}, default='first') – Determines which duplicates (if any) to keep.

Returns:

DataFrame with duplicates removed.

Return type:

arkouda.dataframe.DataFrame

Example

>>> df = ak.DataFrame({'col1': [1, 2, 2, 3], 'col2': [4, 5, 5, 6]})
>>> display(df)

col1

col2

0

1

4

1

2

5

2

2

5

3

3

6

>>> df.drop_duplicates()

   col1  col2
0     1     4
1     2     5
2     3     6

dropna(axis: int | str = 0, how: str | None = None, thresh: int | None = None, ignore_index: bool = False) DataFrame[source]

Remove missing values.

Parameters:
  • axis ({0 or 'index', 1 or 'columns'}, default = 0) –

    Determine if rows or columns which contain missing values are removed.

    0, or ‘index’: Drop rows which contain missing values.

    1, or ‘columns’: Drop columns which contain missing value.

    Only a single axis is allowed.

  • how ({'any', 'all'}, default='any') –

    Determine if row or column is removed from DataFrame, when we have at least one NA or all NA.

    ’any’: If any NA values are present, drop that row or column.

    ’all’: If all values are NA, drop that row or column.

  • thresh (int, optional) – Require that many non-NA values. Cannot be combined with how.

  • ignore_index (bool, default False) – If True, the resulting axis will be labeled 0, 1, …, n - 1.

Returns:

DataFrame with NA entries dropped from it.

Return type:

arkouda.dataframe.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame(
    {
        "A": [True, True, True, True],
        "B": [1, np.nan, 2, np.nan],
        "C": [1, 2, 3, np.nan],
        "D": [False, False, False, False],
        "E": [1, 2, 3, 4],
        "F": ["a", "b", "c", "d"],
        "G": [1, 2, 3, 4],
    }
   )
>>> display(df)

      A    B    C      D  E  F  G
0  True    1    1  False  1  a  1
1  True  nan    2  False  2  b  2
2  True    2    3  False  3  c  3
3  True  nan  nan  False  4  d  4

>>> df.dropna()

      A  B  C      D  E  F  G
0  True  1  1  False  1  a  1
1  True  2  3  False  3  c  3

>>> df.dropna(axis=1)

      A      D  E  F  G
0  True  False  1  a  1
1  True  False  2  b  2
2  True  False  3  c  3
3  True  False  4  d  4

>>> df.dropna(axis=1, thresh=3)

      A    C      D  E  F  G
0  True    1  False  1  a  1
1  True    2  False  2  b  2
2  True    3  False  3  c  3
3  True  nan  False  4  d  4

>>> df.dropna(axis=1, how="all")

      A    B    C      D  E  F  G
0  True    1    1  False  1  a  1
1  True  nan    2  False  2  b  2
2  True    2    3  False  3  c  3
3  True  nan  nan  False  4  d  4

filter_by_range(keys, low=1, high=None)[source]

Find all rows where the value count of the items in a given set of columns (keys) is within the range [low, high].

To filter by a specific value, set low == high.

Parameters:
  • keys (str or list of str) – The names of the columns to group by.

  • low (int, default=1) – The lowest value count.

  • high (int, default=None) – The highest value count, default to unlimited.

Returns:

An array of boolean values for qualified rows in this DataFrame.

Return type:

arkouda.pdarrayclass.pdarray

Example

>>> df = ak.DataFrame({'col1': [1, 2, 2, 2, 3, 3], 'col2': [4, 5, 6, 7, 8, 9]})
>>> display(df)

   col1  col2
0     1     4
1     2     5
2     2     6
3     2     7
4     3     8
5     3     9

>>> df.filter_by_range("col1", low=1, high=2)
array([True False False False True True])
>>> filtered_df = df[df.filter_by_range("col1", low=1, high=2)]
>>> display(filtered_df)

   col1  col2
0     1     4
1     3     8
2     3     9

classmethod from_pandas(pd_df)[source]

Copy the data from a pandas DataFrame into a new arkouda.dataframe.DataFrame.

Parameters:

pd_df (pandas.DataFrame) – A pandas DataFrame to convert.

Return type:

arkouda.dataframe.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import pandas as pd
>>> pd_df = pd.DataFrame({"A":[1,2],"B":[3,4]})
>>> type(pd_df)
pandas.core.frame.DataFrame
>>> display(pd_df)

   A  B
0  1  3
1  2  4

>>> ak_df = DataFrame.from_pandas(pd_df)
>>> type(ak_df)
arkouda.dataframe.DataFrame
>>> display(ak_df)

   A  B
0  1  3
1  2  4

classmethod from_return_msg(rep_msg)[source]

Creates a DataFrame object from an arkouda server response message.

Parameters:

rep_msg (string) – Server response message used to create a DataFrame.

Return type:

arkouda.dataframe.DataFrame

groupby(keys, use_series=True, as_index=True, dropna=True)[source]

Group the dataframe by a column or a list of columns. Alias for GroupBy.

Parameters:
  • keys (str or list of str) – An (ordered) list of column names or a single string to group by.

  • use_series (bool, default=True) – If True, returns an arkouda.dataframe.GroupBy object. Otherwise an arkouda.groupbyclass.GroupBy object.

  • as_index (bool, default=True) – If True, groupby columns will be set as index otherwise, the groupby columns will be treated as DataFrame columns.

  • dropna (bool, default=True) – If True, and the groupby keys contain NaN values, the NaN values together with the corresponding row will be dropped. Otherwise, the rows corresponding to NaN values will be kept.

Returns:

If use_series = True, returns an arkouda.dataframe.GroupBy object. Otherwise returns an arkouda.groupbyclass.GroupBy object.

Return type:

arkouda.dataframe.GroupBy or arkouda.groupbyclass.GroupBy

See also

arkouda.GroupBy

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame({'col1': [1.0, 1.0, 2.0, np.nan], 'col2': [4, 5, 6, 7]})
>>> df

   col1  col2
0     1     4
1     1     5
2     2     6
3   nan     7

>>> df.GroupBy("col1")
<arkouda.groupbyclass.GroupBy at 0x7f2cf23e10c0>
>>> df.GroupBy("col1").size()
(array([1.00000000000000000 2.00000000000000000]), array([2 1]))
>>> df.GroupBy("col1",use_series=True)
col1
1.0    2
2.0    1
dtype: int64
>>> df.GroupBy("col1",use_series=True, as_index = False).size()

   col1  size
0     1     2
1     2     1

head(n=5)[source]

Return the first n rows.

This function returns the first n rows of the dataframe. It is useful for quickly verifying data, for example, after sorting or appending rows.

Parameters:

n (int, default = 5) – Number of rows to select.

Returns:

The first n rows of the DataFrame.

Return type:

arkouda.dataframe.DataFrame

See also

tail

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': ak.arange(10), 'col2': -1 * ak.arange(10)})
>>> display(df)

   col1  col2
0     0     0
1     1    -1
2     2    -2
3     3    -3
4     4    -4
5     5    -5
6     6    -6
7     7    -7
8     8    -8
9     9    -9

>>> df.head()

   col1  col2
0     0     0
1     1    -1
2     2    -2
3     3    -3
4     4    -4

>>> df.head(n=2)

   col1  col2
0     0     0
1     1    -1

is_registered() bool[source]

Return True if the object is contained in the registry.

Returns:

Indicates if the object is contained in the registry.

Return type:

bool

Raises:

RegistrationError – Raised if there’s a server-side error or a mismatch of registered components.

Notes

Objects registered with the server are immune to deletion until they are unregistered.

Example

>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df.register("my_table_name")
>>> df.attach("my_table_name")
>>> df.is_registered()
True
>>> df.unregister()
>>> df.is_registered()
False
isin(values: arkouda.pdarrayclass.pdarray | Dict | arkouda.series.Series | DataFrame) DataFrame[source]

Determine whether each element in the DataFrame is contained in values.

Parameters:

values (pdarray, dict, Series, or DataFrame) – The values to check for in DataFrame. Series can only have a single index.

Returns:

Arkouda DataFrame of booleans showing whether each element in the DataFrame is contained in values.

Return type:

arkouda.dataframe.DataFrame

See also

ak.Series.isin

Notes

  • Pandas supports values being an iterable type. In arkouda, we replace this with pdarray.

  • Pandas supports ~ operations. Currently, ak.DataFrame does not support this.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col_A': ak.array([7, 3]), 'col_B':ak.array([1, 9])})
>>> display(df)

   col_A  col_B
0      7      1
1      3      9

When values is a pdarray, check every value in the DataFrame to determine if it exists in values.

>>> df.isin(ak.array([0, 1]))

   col_A  col_B
0      0      1
1      0      0

When values is a dict, the values in the dict are passed to check the column indicated by the key.

>>> df.isin({'col_A': ak.array([0, 3])})

   col_A  col_B
0      0      0
1      1      0

When values is a Series, each column is checked positionally against the Series; for True to be returned, the indexes must be the same.

>>> i = ak.Index(ak.arange(2))
>>> s = ak.Series(data=[3, 9], index=i)
>>> df.isin(s)

   col_A  col_B
0      0      0
1      0      1

When values is a DataFrame, the index and column must match. Note that 9 is not found because the column name does not match.

>>> other_df = ak.DataFrame({'col_A':ak.array([7, 3]), 'col_C':ak.array([0, 9])})
>>> df.isin(other_df)

   col_A  col_B
0      1      0
1      1      0

isna() DataFrame[source]

Detect missing values.

Return a boolean same-sized object indicating if the values are NA. numpy.NaN values get mapped to True values. Everything else gets mapped to False values.

Returns:

Mask of bool values for each element in DataFrame that indicates whether an element is an NA value.

Return type:

arkouda.dataframe.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame({"A": [np.nan, 2, 2, 3], "B": [3, np.nan, 5, 6],
...          "C": [1, np.nan, 2, np.nan], "D":["a","b","c","d"]})
>>> display(df)

     A    B    C  D
0  nan    3    1  a
1    2  nan  nan  b
2    2    5    2  c
3    3    6  nan  d

>>> df.isna()
       A      B      C      D
0   True  False  False  False
1  False   True   True  False
2  False  False  False  False
3  False  False   True  False (4 rows x 4 columns)
classmethod load(prefix_path, file_format='INFER')[source]

Load a dataframe from file. file_format is needed for consistency with other load functions.

Parameters:
  • prefix_path (str) – The prefix path for the data.

  • file_format (string, default = "INFER")

Returns:

A dataframe loaded from the prefix_path.

Return type:

arkouda.dataframe.DataFrame

Examples

To store data in <my_dir>/my_data_LOCALE0000, use “<my_dir>/my_data” as the prefix.

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'hdf5_output','my_data')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A": ak.arange(5), "B": -1 * ak.arange(5)})
>>> df.save(my_path, file_type="distribute")
>>> df.load(my_path)

   A   B
0  0   0
1  1  -1
2  2  -2
3  3  -3
4  4  -4

memory_usage(index=True, unit='B') arkouda.series.Series[source]

Return the memory usage of each column in bytes.

The memory usage can optionally include the contribution of the index.

Parameters:
  • index (bool, default True) – Specifies whether to include the memory usage of the DataFrame’s index in returned Series. If index=True, the memory usage of the index is the first item in the output.

  • unit (str, default = "B") – Unit to return. One of {‘B’, ‘KB’, ‘MB’, ‘GB’}.

Returns:

A Series whose index is the original column names and whose values are the memory usage of each column in bytes.

Return type:

Series

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> dtypes = [ak.int64, ak.float64,  ak.bool]
>>> data = dict([(str(t), ak.ones(5000, dtype=ak.int64).astype(t)) for t in dtypes])
>>> df = ak.DataFrame(data)
>>> display(df.head())

   int64  float64  bool
0      1        1  True
1      1        1  True
2      1        1  True
3      1        1  True
4      1        1  True

>>> df.memory_usage()

Index      40000
int64      40000
float64    40000
bool        5000

>>> df.memory_usage(index=False)

int64      40000
float64    40000
bool        5000

>>> df.memory_usage(unit="KB")

Index      39.0625
int64      39.0625
float64    39.0625
bool       4.88281

To get the approximate total memory usage:

>>> df.memory_usage(index=True).sum()

memory_usage_info(unit='GB')[source]

A formatted string representation of the size of this DataFrame.

Parameters:

unit (str, default = "GB") – Unit to return. One of {‘KB’, ‘MB’, ‘GB’}.

Returns:

A string representation of the number of bytes used by this DataFrame in [unit]s.

Return type:

str

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': ak.arange(1000), 'col2': ak.arange(1000)})
>>> df.memory_usage_info()
'0.00 GB'
>>> df.memory_usage_info(unit="KB")
'15 KB'
merge(right: DataFrame, on: str | List[str] | None = None, how: str = 'inner', left_suffix: str = '_x', right_suffix: str = '_y', convert_ints: bool = True, sort: bool = True) DataFrame[source]

Merge Arkouda DataFrames with a database-style join. The resulting dataframe contains rows from both DataFrames as specified by the merge condition (based on the “how” and “on” parameters).

Based on pandas merge functionality. https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html

Parameters:
  • right (DataFrame) – The Right DataFrame to be joined.

  • on (Optional[Union[str, List[str]]] = None) – The name or list of names of the DataFrame column(s) to join on. If on is None, this defaults to the intersection of the columns in both DataFrames.

  • how ({"inner", "left", "right"}, default = "inner") – The merge condition. Must be "inner", "left", or "right".

  • left_suffix (str, default = "_x") – A string indicating the suffix to add to columns from the left dataframe for overlapping column names in both left and right. Defaults to “_x”. Only used when how is “inner”.

  • right_suffix (str, default = "_y") – A string indicating the suffix to add to columns from the right dataframe for overlapping column names in both left and right. Defaults to “_y”. Only used when how is “inner”.

  • convert_ints (bool = True) – If True, convert columns with missing int values (due to the join) to float64. This is to match pandas. If False, do not convert the column dtypes. This has no effect when how = “inner”.

  • sort (bool = True) – If True, DataFrame is returned sorted by “on”. Otherwise, the DataFrame is not sorted.

Returns:

Joined Arkouda DataFrame.

Return type:

arkouda.dataframe.DataFrame

Note

Multiple column joins are only supported for integer columns.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> left_df = ak.DataFrame({'col1': ak.arange(5), 'col2': -1 * ak.arange(5)})
>>> display(left_df)

   col1  col2
0     0     0
1     1    -1
2     2    -2
3     3    -3
4     4    -4

>>> right_df = ak.DataFrame({'col1': 2 * ak.arange(5), 'col2': 2 * ak.arange(5)})
>>> display(right_df)

   col1  col2
0     0     0
1     2     2
2     4     4
3     6     6
4     8     8

>>> left_df.merge(right_df, on = "col1")

   col1  col2_x  col2_y
0     0       0       0
1     2      -2       2
2     4      -4       4

>>> left_df.merge(right_df, on = "col1", how = "left")

   col1  col2_y  col2_x
0     0       0       0
1     1     nan      -1
2     2       2      -2
3     3     nan      -3
4     4       4      -4

>>> left_df.merge(right_df, on = "col1", how = "right")

   col1  col2_x  col2_y
0     0       0       0
1     2      -2       2
2     4      -4       4
3     6     nan       6
4     8     nan       8

>>> left_df.merge(right_df, on = "col1", how = "outer")

   col1  col2_y  col2_x
0     0       0       0
1     1     nan      -1
2     2       2      -2
3     3     nan      -3
4     4       4      -4
5     6       6     nan
6     8       8     nan

notna() DataFrame[source]

Detect existing (non-missing) values.

Return a boolean same-sized object indicating if the values are not NA. numpy.NaN values get mapped to False values.

Returns:

Mask of bool values for each element in DataFrame that indicates whether an element is not an NA value.

Return type:

arkouda.dataframe.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import numpy as np
>>> df = ak.DataFrame({"A": [np.nan, 2, 2, 3], "B": [3, np.nan, 5, 6],
...          "C": [1, np.nan, 2, np.nan], "D":["a","b","c","d"]})
>>> display(df)

     A    B    C  D
0  nan    3    1  a
1    2  nan  nan  b
2    2    5    2  c
3    3    6  nan  d

>>> df.notna()
       A      B      C     D
0  False   True   True  True
1   True  False  False  True
2   True   True   True  True
3   True   True  False  True (4 rows x 4 columns)
classmethod read_csv(filename: str, col_delim: str = ',')[source]

Read the columns of a CSV file into an Arkouda DataFrame. If the file contains an appropriately formatted header, typed data will be returned. Otherwise, all data will be returned as Strings objects.

Parameters:
  • filename (str) – Filename to read data from.

  • col_delim (str, default=",") – The delimiter for columns within the data.

Returns:

Arkouda DataFrame containing the columns from the CSV file.

Return type:

arkouda.dataframe.DataFrame

Raises:
  • ValueError – Raised if all datasets are not present in all parquet files or if one or more of the specified files do not exist.

  • RuntimeError – Raised if one or more of the specified files cannot be opened. If allow_errors is true this may be raised if no values are returned from the server.

  • TypeError – Raised if we receive an unknown arkouda_type returned from the server.

See also

to_csv

Notes

  • CSV format is not currently supported by load/load_all operations.

  • The column delimiter is expected to be the same for column names and data.

  • Be sure that column delimiters are not found within your data.

  • All CSV files must delimit rows using newline ("\n") at this time.

  • Unlike other file formats, CSV files store Strings in their UTF-8 format instead of storing bytes as uint(8).

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'csv_output','my_data')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A":[1,2],"B":[3,4]})
>>> df.to_csv(my_path)
>>> df2 = DataFrame.read_csv(my_path + "_LOCALE0000")
>>> display(df2)

   A  B
0  1  3
1  2  4

register(user_defined_name: str) DataFrame[source]

Register this DataFrame object and underlying components with the Arkouda server.

Parameters:

user_defined_name (str) – User defined name the DataFrame is to be registered under. This will be the root name for underlying components.

Returns:

The same DataFrame which is now registered with the arkouda server and has an updated name. This is an in-place modification, the original is returned to support a fluid programming style. Please note you cannot register two different DataFrames with the same name.

Return type:

arkouda.dataframe.DataFrame

Raises:
  • TypeError – Raised if user_defined_name is not a str.

  • RegistrationError – If the server was unable to register the DataFrame with the user_defined_name.

Notes

Objects registered with the server are immune to deletion until they are unregistered.

Any changes made to a DataFrame object after registering with the server may not be reflected in attached copies.

Example

>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df.register("my_table_name")
>>> df.attach("my_table_name")
>>> df.is_registered()
True
>>> df.unregister()
>>> df.is_registered()
False
rename(mapper: Callable | Dict | None = None, index: Callable | Dict | None = None, column: Callable | Dict | None = None, axis: str | int = 0, inplace: bool = False) DataFrame | None[source]

Rename indexes or columns according to a mapping.

Parameters:
  • mapper (callable or dict-like, Optional) – Function or dictionary mapping existing values to new values. Nonexistent names will not raise an error. Uses the value of axis to determine if renaming column or index

  • column (callable or dict-like, Optional) – Function or dictionary mapping existing column names to new column names. Nonexistent names will not raise an error. When this is set, axis is ignored.

  • index (callable or dict-like, Optional) – Function or dictionary mapping existing index names to new index names. Nonexistent names will not raise an error. When this is set, axis is ignored.

  • axis (int or str, default=0) – Indicates which axis to perform the rename. 0/”index” - Indexes 1/”column” - Columns

  • inplace (bool, default=False) – When True, perform the operation on the calling object. When False, return a new object.

Returns:

DataFrame when inplace=False; None when inplace=True.

Return type:

arkouda.dataframe.DataFrame or None

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({"A": ak.array([1, 2, 3]), "B": ak.array([4, 5, 6])})
>>> display(df)

   A  B
0  1  4
1  2  5
2  3  6

Rename columns using a mapping:

>>> df.rename(column={'A':'a', 'B':'c'})

   a  c
0  1  4
1  2  5
2  3  6

Rename indexes using a mapping:

>>> df.rename(index={0:99, 2:11})

    A  B
99  1  4
1   2  5
11  3  6

Rename using an axis style parameter:

>>> df.rename(str.lower, axis='column')

   a  b
0  1  4
1  2  5
2  3  6

reset_index(size: int | None = None, inplace: bool = False) None | DataFrame[source]

Set the index to an integer range.

Useful if this dataframe is the result of a slice operation from another dataframe, or if you have permuted the rows and no longer need to keep that ordering on the rows.

Parameters:
  • size (int, optional) – If size is passed, do not attempt to determine size based on existing column sizes. Assume caller handles consistency correctly.

  • inplace (bool, default=False) – When True, perform the operation on the calling object. When False, return a new object.

Returns:

DataFrame when inplace=False; None when inplace=True.

Return type:

arkouda.dataframe.DataFrame or None

Note

Pandas adds a column ‘index’ to indicate the original index. Arkouda does not currently support this behavior.

Example

>>> df = ak.DataFrame({"A": ak.array([1, 2, 3]), "B": ak.array([4, 5, 6])})
>>> display(df)

   A  B
0  1  4
1  2  5
2  3  6

>>> perm_df = df[ak.array([0,2,1])]
>>> display(perm_df)

   A  B
0  1  4
2  3  6
1  2  5

>>> perm_df.reset_index()

   A  B
0  1  4
1  3  6
2  2  5

sample(n=5)[source]

Return a random sample of n rows.

Parameters:

n (int, default=5) – Number of rows to return.

Returns:

The sampled n rows of the DataFrame.

Return type:

arkouda.dataframe.DataFrame

Example

>>> df = ak.DataFrame({"A": ak.arange(5), "B": -1 * ak.arange(5)})
>>> display(df)

   A   B
0  0   0
1  1  -1
2  2  -2
3  3  -3
4  4  -4

Random output of size 3:

>>> df.sample(n=3)

   A   B
0  0   0
1  1  -1
2  4  -4

save(path, index=False, columns=None, file_format='HDF5', file_type='distribute', compression: str | None = None)[source]

DEPRECATED: Save DataFrame to disk, preserving column names.

Parameters:
  • path (str) – File path to save data.

  • index (bool, default=False) – If True, save the index column. By default, do not save the index.

  • columns (list, default=None) – List of columns to include in the file. If None, writes out all columns.

  • file_format (str, default='HDF5') – ‘HDF5’ or ‘Parquet’. Defaults to ‘HDF5’

  • file_type (str, default=distribute) – "single" or "distribute". If single, will write a single file to locale 0.

  • compression (str (Optional)) – (None | “snappy” | “gzip” | “brotli” | “zstd” | “lz4”) Compression type. Only used for Parquet

Notes

This method saves one file per locale of the arkouda server. All files are prefixed by the path argument and suffixed by their locale number.

See also

to_parquet, to_hdf

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'hdf5_output')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A": ak.arange(5), "B": -1 * ak.arange(5)})
>>> df.save(my_path + '/my_data', file_type="single")
>>> df.load(my_path + '/my_data')

   A   B
0  0   0
1  1  -1
2  2  -2
3  3  -3
4  4  -4

sort_index(ascending=True)[source]

Sort the DataFrame by indexed columns.

Note: Fails on sort order of arkouda.strings.Strings columns when multiple columns are being sorted.

Parameters:

ascending (bool, default = True) – Sort values in ascending (default) or descending order.

Example

>>> df = ak.DataFrame({'col1': [1.1, 3.1, 2.1], 'col2': [6, 5, 4]},
...          index = Index(ak.array([2,0,1]), name="idx"))
>>> display(df)

idx  col1  col2
  2   1.1     6
  0   3.1     5
  1   2.1     4

>>> df.sort_index()

idx  col1  col2
  0   3.1     5
  1   2.1     4
  2   1.1     6

sort_values(by=None, ascending=True)[source]

Sort the DataFrame by one or more columns.

If no column is specified, all columns are used.

Note: Fails on order of arkouda.strings.Strings columns when multiple columns are being sorted.

Parameters:
  • by (str or list/tuple of str, default = None) – The name(s) of the column(s) to sort by.

  • ascending (bool, default = True) – Sort values in ascending (default) or descending order.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': [2, 2, 1], 'col2': [3, 4, 3], 'col3':[5, 6, 7]})
>>> display(df)

   col1  col2  col3
0     2     3     5
1     2     4     6
2     1     3     7

>>> df.sort_values()

   col1  col2  col3
0     1     3     7
1     2     3     5
2     2     4     6

>>> df.sort_values("col3")

   col1  col2  col3
0     2     3     5
1     2     4     6
2     1     3     7

tail(n=5)[source]

Return the last n rows.

This function returns the last n rows for the dataframe. It is useful for quickly testing if your object has the right type of data in it.

Parameters:

n (int, default=5) – Number of rows to select.

Returns:

The last n rows of the DataFrame.

Return type:

arkouda.dataframe.DataFrame

See also

arkouda.dataframe.head

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({'col1': ak.arange(10), 'col2': -1 * ak.arange(10)})
>>> display(df)

col1

col2

0

0

0

1

1

-1

2

2

-2

3

3

-3

4

4

-4

5

5

-5

6

6

-6

7

7

-7

8

8

-8

9

9

-9

>>> df.tail()

   col1  col2
0     5    -5
1     6    -6
2     7    -7
3     8    -8
4     9    -9

>>> df.tail(n=2)

   col1  col2
0     8    -8
1     9    -9

to_csv(path: str, index: bool = False, columns: List[str] | None = None, col_delim: str = ',', overwrite: bool = False)[source]

Writes DataFrame to CSV file(s). File will contain a column for each column in the DataFrame. All CSV Files written by Arkouda include a header denoting data types of the columns. Unlike other file formats, CSV files store Strings as their UTF-8 format instead of storing bytes as uint(8).

Parameters:
  • path (str) – The filename prefix to be used for saving files. Files will have _LOCALE#### appended when they are written to disk.

  • index (bool, default=False) – If True, the index of the DataFrame will be written to the file as a column.

  • columns (list of str (Optional)) – Column names to assign when writing data.

  • col_delim (str, default=",") – Value to be used to separate columns within the file. Please be sure that the value used DOES NOT appear in your dataset.

  • overwrite (bool, default=False) – If True, any existing files matching your provided prefix_path will be overwritten. If False, an error will be returned if existing files are found.

Return type:

None

Raises:
  • ValueError – Raised if all datasets are not present in all parquet files or if one or more of the specified files do not exist.

  • RuntimeError – Raised if one or more of the specified files cannot be opened. If allow_errors is true this may be raised if no values are returned from the server.

  • TypeError – Raised if we receive an unknown arkouda_type returned from the server.

Notes

  • CSV format is not currently supported by load/load_all operations.

  • The column delimiter is expected to be the same for column names and data.

  • Be sure that column delimiters are not found within your data.

  • All CSV files must delimit rows using newline ("\n") at this time.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'csv_output')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A":[1,2],"B":[3,4]})
>>> df.to_csv(my_path + "/my_data")
>>> df2 = DataFrame.read_csv(my_path + "/my_data" + "_LOCALE0000")
>>> display(df2)

   A  B
0  1  3
1  2  4

to_hdf(path, index=False, columns=None, file_type='distribute')[source]

Save DataFrame to disk as hdf5, preserving column names.

Parameters:
  • path (str) – File path to save data.

  • index (bool, default=False) – If True, save the index column. By default, do not save the index.

  • columns (List, default = None) – List of columns to include in the file. If None, writes out all columns.

  • file_type (str (single | distribute), default=distribute) – Whether to save to a single file or distribute across Locales.

Return type:

None

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray.

Notes

This method saves one file per locale of the arkouda server. All files are prefixed by the path argument and suffixed by their locale number.

See also

to_parquet, load

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'hdf_output')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A":[1,2],"B":[3,4]})
>>> df.to_hdf(my_path + "/my_data")
>>> df.load(my_path + "/my_data")

   A  B
0  1  3
1  2  4

to_markdown(mode='wt', index=True, tablefmt='grid', storage_options=None, **kwargs)[source]

Print DataFrame in Markdown-friendly format.

Parameters:
  • mode (str, optional) – Mode in which file is opened, “wt” by default.

  • index (bool, optional, default True) – Add index (row) labels.

  • tablefmt (str = "grid") – Table format to call from tabulate: https://pypi.org/project/tabulate/

  • storage_options (dict, optional) – Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc., if using a URL that will be parsed by fsspec, e.g., starting “s3://”, “gcs://”. An error will be raised if providing this argument with a non-fsspec URL. See the fsspec and backend storage implementation docs for the set of allowed keys and values.

  • **kwargs – These parameters will be passed to tabulate.

Note

This function should only be called on small DataFrames as it calls pandas.DataFrame.to_markdown: https://pandas.pydata.org/pandas-docs/version/1.2.4/reference/api/pandas.DataFrame.to_markdown.html

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> df = ak.DataFrame({"animal_1": ["elk", "pig"], "animal_2": ["dog", "quetzal"]})
>>> print(df.to_markdown())
+----+------------+------------+
|    | animal_1   | animal_2   |
+====+============+============+
|  0 | elk        | dog        |
+----+------------+------------+
|  1 | pig        | quetzal    |
+----+------------+------------+

Suppress the index:

>>> print(df.to_markdown(index = False))
+------------+------------+
| animal_1   | animal_2   |
+============+============+
| elk        | dog        |
+------------+------------+
| pig        | quetzal    |
+------------+------------+
to_pandas(datalimit=maxTransferBytes, retain_index=False)[source]

Convert this DataFrame to a pandas DataFrame.

Parameters:
  • datalimit (int, default=arkouda.client.maxTransferBytes) – The maximum size, in megabytes, to transfer. The requested DataFrame will be converted to a pandas DataFrame only if the estimated size of the DataFrame does not exceed this value.

  • retain_index (bool, default=False) – Normally, to_pandas() creates a new range index object. If you want to keep the index column, set this to True.

Returns:

The result of converting this DataFrame to a pandas DataFrame.

Return type:

pandas.DataFrame

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> ak_df = ak.DataFrame({"A": ak.arange(2), "B": -1 * ak.arange(2)})
>>> type(ak_df)
arkouda.dataframe.DataFrame
>>> display(ak_df)

   A   B
0  0   0
1  1  -1

>>> import pandas as pd
>>> pd_df = ak_df.to_pandas()
>>> type(pd_df)
pandas.core.frame.DataFrame
>>> display(pd_df)

   A   B
0  0   0
1  1  -1

to_parquet(path, index=False, columns=None, compression: str | None = None, convert_categoricals: bool = False)[source]

Save DataFrame to disk as parquet, preserving column names.

Parameters:
  • path (str) – File path to save data.

  • index (bool, default=False) – If True, save the index column. By default, do not save the index.

  • columns (list) – List of columns to include in the file. If None, writes out all columns.

  • compression (str (Optional), default=None) – Provide the compression type to use when writing the file. Supported values: snappy, gzip, brotli, zstd, lz4

  • convert_categoricals (bool, default=False) – Parquet requires all columns to be the same size and Categoricals don’t satisfy that requirement. If set, write the equivalent Strings in place of any Categorical columns.

Return type:

None

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray

Notes

This method saves one file per locale of the arkouda server. All files are prefixed by the path argument and suffixed by their locale number.

See also

to_hdf, load

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'parquet_output')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A":[1,2],"B":[3,4]})
>>> df.to_parquet(my_path + "/my_data")
>>> df.load(my_path + "/my_data")

   B  A
0  3  1
1  4  2

transfer(hostname, port)[source]

Sends a DataFrame to a different Arkouda server.

Parameters:
  • hostname (str) – The hostname where the Arkouda server intended to receive the DataFrame is running.

  • port (int_scalars) – The port to send the array over. This needs to be an open port (i.e., not one that the Arkouda server is running on). Transfer opens numLocales ports in succession, so ports in the range {port..(port+numLocales)} are used (e.g., for an Arkouda server of 4 nodes with port 1234 passed, Arkouda uses ports 1234, 1235, 1236, and 1237 to send the array data). This port must match the port passed to the call to ak.receive_array().

Returns:

A message indicating a complete transfer.

Return type:

str

Raises:
  • ValueError – Raised if the op is not within the pdarray.BinOps set

  • TypeError – Raised if other is not a pdarray or the pdarray.dtype is not a supported dtype

unregister()[source]

Unregister this DataFrame object in the arkouda server which was previously registered using register() and/or attached to using attach().

Raises:

RegistrationError – If the object is already unregistered or if there is a server error when attempting to unregister.

Notes

Objects registered with the server are immune to deletion until they are unregistered.

Example

>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df.register("my_table_name")
>>> df.attach("my_table_name")
>>> df.is_registered()
True
>>> df.unregister()
>>> df.is_registered()
False
static unregister_dataframe_by_name(user_defined_name: str) str[source]

Function to unregister DataFrame object by name which was registered with the arkouda server via register().

Parameters:

user_defined_name (str) – Name under which the DataFrame object was registered.

Raises:
  • TypeError – If user_defined_name is not a string.

  • RegistrationError – If there is an issue attempting to unregister any underlying components.

Example

>>> df = ak.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
>>> df.register("my_table_name")
>>> df.attach("my_table_name")
>>> df.is_registered()
True
>>> df.unregister_dataframe_by_name("my_table_name")
>>> df.is_registered()
False
update_hdf(prefix_path: str, index=False, columns=None, repack: bool = True)[source]

Overwrite the dataset at the provided prefix path with this dataframe. If the dataset does not exist, it is added.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share.

  • index (bool, default=False) – If True, save the index column. By default, do not save the index.

  • columns (List, default=None) – List of columns to include in the file. If None, writes out all columns.

  • repack (bool, default=True) – HDF5 does not release memory on delete. When True, the inaccessible data (that was overwritten) is removed. When False, the data remains, but is inaccessible. Setting to false will yield better performance, but will cause file sizes to expand.

Returns:

Success message if successful.

Return type:

str

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray.

Notes

If the file does not contain a File_Format attribute indicating how it was saved, the file name is checked for _LOCALE#### to determine whether it is distributed.

If the dataset provided does not exist, it will be added.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> import os.path
>>> from pathlib import Path
>>> my_path = os.path.join(os.getcwd(), 'hdf_output')
>>> Path(my_path).mkdir(parents=True, exist_ok=True)
>>> df = ak.DataFrame({"A":[1,2],"B":[3,4]})
>>> df.to_hdf(my_path + "/my_data")
>>> df.load(my_path + "/my_data")

   A  B
0  1  3
1  2  4

>>> df2 = ak.DataFrame({"A":[5,6],"B":[7,8]})
>>> df2.update_hdf(my_path + "/my_data")
>>> df.load(my_path + "/my_data")

   A  B
0  5  7
1  6  8

update_nrows()[source]

Computes the number of rows on the arkouda server and updates the size parameter.
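
Example

A trivial usage sketch (not from the arkouda docstring; update_nrows simply refreshes the cached row count from the server):

>>> df = ak.DataFrame({'col1': ak.arange(3)})
>>> df.update_nrows()  # recompute the row count server-side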

class arkouda.Datetime(pda, unit: str = _BASE_UNIT)[source]

Bases: _AbstractBaseTime

Represents a date and/or time.

Datetime is the Arkouda analog to pandas DatetimeIndex and other timeseries data types.

Parameters:
  • pda (int64 pdarray, pd.DatetimeIndex, pd.Series, or np.datetime64 array)

  • unit (str, default 'ns') –

    For int64 pdarray, denotes the unit of the input. Ignored for pandas and numpy arrays, which carry their own unit. Not case-sensitive; prefixes of full names (like ‘sec’) are accepted.

    Possible values:

    • ’weeks’ or ‘w’

    • ’days’ or ‘d’

    • ’hours’ or ‘h’

    • ’minutes’, ‘m’, or ‘t’

    • ’seconds’ or ‘s’

    • ’milliseconds’, ‘ms’, or ‘l’

    • ’microseconds’, ‘us’, or ‘u’

    • ’nanoseconds’, ‘ns’, or ‘n’

Unlike in pandas, units cannot be combined or mixed with integers.

Notes

The .values attribute is always in nanoseconds with int64 dtype.
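
Example

A short construction sketch (not from the arkouda docstring; exact array reprs vary by version, so results are noted in comments):

>>> import arkouda as ak
>>> dt = ak.Datetime(ak.array([0, 1, 2]), unit='d')  # days since the Unix epoch
>>> days = dt.day    # day-of-month component: 1, 2, 3
>>> years = dt.year  # all 1970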

property date
property day
property day_of_week
property day_of_year
property dayofweek
property dayofyear
property hour
property is_leap_year
property microsecond
property millisecond
property minute
property month
property nanosecond
property second
property week
property weekday
property weekofyear
property year
special_objType = 'Datetime'
supported_opeq
supported_with_datetime
supported_with_pdarray
supported_with_r_datetime
supported_with_r_pdarray
supported_with_r_timedelta
supported_with_timedelta
is_registered() numpy.bool_[source]

Return True iff the object is contained in the registry or is a component of a registered object.

Returns:

Indicates if the object is contained in the registry

Return type:

numpy.bool_

Raises:

RegistrationError – Raised if there’s a server-side error or a mis-match of registered components

Notes

Objects registered with the server are immune to deletion until they are unregistered.

isocalendar()[source]
register(user_defined_name)[source]

Register this Datetime object and underlying components with the Arkouda server

Parameters:

user_defined_name (str) – user defined name the Datetime is to be registered under, this will be the root name for underlying components

Returns:

The same Datetime which is now registered with the arkouda server and has an updated name. This is an in-place modification, the original is returned to support a fluid programming style. Please note you cannot register two different Datetimes with the same name.

Return type:

Datetime

Raises:
  • TypeError – Raised if user_defined_name is not a str

  • RegistrationError – If the server was unable to register the Datetimes with the user_defined_name

Notes

Objects registered with the server are immune to deletion until they are unregistered.

sum()[source]

Return the sum of all elements in the array.

to_pandas()[source]

Convert array to a pandas DatetimeIndex. Note: if the array size exceeds client.maxTransferBytes, a RuntimeError is raised.

See also

to_ndarray

unregister()[source]

Unregister this Datetime object in the arkouda server which was previously registered using register() and/or attached to using attach()

Raises:

RegistrationError – If the object is already unregistered or if there is a server error when attempting to unregister

Notes

Objects registered with the server are immune to deletion until they are unregistered.

class arkouda.DatetimeAccessor(series)[source]

Bases: Properties

class arkouda.DiffAggregate(gb, series)[source]

A column in a GroupBy that has been differenced. Aggregation operations can be done on the result.

gb

GroupBy object, where the aggregation keys are values of column(s) of a dataframe.

Type:

arkouda.groupbyclass.GroupBy

values

A column to compute the difference on.

Type:

arkouda.series.Series.

class arkouda.ErrorMode[source]

Bases: enum.Enum

Generic enumeration.

Derive from this class to define new enumerations.

ignore = 'ignore'
return_validity = 'return_validity'
strict = 'strict'
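
Example

A minimal illustration of standard Python Enum behavior (not from the arkouda docstring; ErrorMode members select error-handling behavior, e.g. in casting functions):

>>> from arkouda import ErrorMode
>>> ErrorMode.strict.value
'strict'
>>> ErrorMode('ignore') is ErrorMode.ignore
True
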
class arkouda.Fields(values, names, MSB_left=True, pad='-', separator='', show_int=True)[source]

Bases: BitVector

An integer-backed representation of a set of named binary fields, e.g. flags.

Parameters:
  • values (pdarray or Strings) – The array of field values. If (u)int64, the values are used as-is for the binary representation of fields. If Strings, the values are converted to binary according to the mapping defined by the names and MSB_left arguments.

  • names (str or sequence of str) – The names of the fields, in order. A string will be treated as a list of single-character field names. Multi-character field names are allowed, but must be passed as a list or tuple and user must specify a separator.

  • MSB_left (bool) – Controls how field names are mapped to binary values. If True (default), the left-most field name corresponds to the most significant bit in the binary representation. If False, the left-most field name corresponds to the least significant bit.

  • pad (str) – Character to display when field is not present. Use empty string if no padding is desired.

  • separator (str) – Substring that separates fields. Used to parse input values (if ak.Strings) and to display output.

  • show_int (bool) – If True (default), display the integer value of the binary fields in output.

Returns:

fields – The array of field values

Return type:

Fields

Notes

This class is a thin wrapper around pdarray that mostly affects how values are displayed to the user. Operators and methods will typically treat this class like an int64 pdarray.
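
Example

A minimal sketch (values and names chosen for illustration; the printed form depends on pad, separator, and show_int, so it is described in comments rather than shown):

>>> import arkouda as ak
>>> values = ak.array([0, 1, 5])        # bit patterns 000, 001, 101
>>> f = ak.Fields(values, names='ABC')  # 'A' maps to the most significant bit
>>> s = f.format(5)                     # 'A' and 'C' set; 'B' shown as the pad character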

format(x)[source]

Format a single binary value as a string of named fields.

opeq(other, op)[source]
arkouda.GROUPBY_REDUCTION_TYPES
class arkouda.Generator(name_dict=None, seed=None, state=1)[source]

Generator exposes a number of methods for generating random numbers drawn from a variety of probability distributions. In addition to the distribution-specific arguments, each method takes a keyword argument size that defaults to None. If size is None, then a single value is generated and returned. If size is an integer, then a 1-D array filled with generated values is returned.

Parameters:
  • seed (int) – Seed to allow for reproducible random number generation.

  • name_dict (dict) – Dictionary mapping the server side names associated with the generators for each dtype.

  • state (int) – The current state we are in the random number generation stream. This information makes it so calls to any dtype generator function affect the stream of random numbers for the other generators. This mimics the behavior we see in numpy.

See also

default_rng

Recommended constructor for Generator.
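
For example, a seeded construction (seed value arbitrary) yields a reproducible stream across the dtype-specific generators:

>>> rng = ak.random.default_rng(seed=17)
>>> x = rng.integers(0, 10, size=5)  # same values on every run with this seed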

integers(low, high=None, size=None, dtype=akint64, endpoint=False)[source]

Return random integers from low (inclusive) to high (exclusive), or if endpoint=True, low (inclusive) to high (inclusive).

Return random integers from the “discrete uniform” distribution of the specified dtype. If high is None (the default), then results are from 0 to low.

Parameters:
  • low (numeric_scalars) – Lowest (signed) integers to be drawn from the distribution (unless high=None, in which case this parameter is 0 and this value is used for high).

  • high (numeric_scalars) – If provided, one above the largest (signed) integer to be drawn from the distribution (see above for behavior if high=None)

  • size (numeric_scalars) – Output shape. Default is None, in which case a single value is returned.

  • dtype (dtype, optional) – Desired dtype of the result. The default value is ak.int64.

  • endpoint (bool, optional) – If true, sample from the interval [low, high] instead of the default [low, high). Defaults to False

Returns:

Values drawn uniformly from the specified range having the desired dtype, or a single such random int if size not provided.

Return type:

pdarray, numeric_scalar

Examples

>>> rng = ak.random.default_rng()
>>> rng.integers(5, 20, 10)
array([15, 13, 10, 8, 5, 18, 16, 14, 7, 13])  # random
>>> rng.integers(5, size=10)
array([2, 4, 0, 0, 0, 3, 1, 5, 5, 3])  # random
permutation(x)[source]

Randomly permute a sequence, or return a permuted range.

Parameters:

x (int or pdarray) – If x is an integer, randomly permute ak.arange(x). If x is an array, make a copy and shuffle the elements randomly.

Returns:

pdarray of permuted elements

Return type:

pdarray
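
Examples

A small usage sketch (not from the arkouda docstring); outputs are random, so they are not shown:

>>> rng = ak.random.default_rng()
>>> perm = rng.permutation(5)                 # a random ordering of ak.arange(5)
>>> shuffled = rng.permutation(ak.arange(3))  # a shuffled copy; the input is unchanged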

random(size=None)[source]

Return random floats in the half-open interval [0.0, 1.0).

Results are from the uniform distribution over the stated interval.

Parameters:

size (numeric_scalars, optional) – Output shape. Default is None, in which case a single value is returned.

Returns:

Pdarray of random floats (unless size=None, in which case a single float is returned).

Return type:

pdarray

Notes

To sample over [a,b), use uniform or multiply the output of random by (b - a) and add a:

(b - a) * random() + a

See also

uniform

Examples

>>> rng = ak.random.default_rng()
>>> rng.random()
0.47108547995356098 # random
>>> rng.random(3)
array([0.055256829926011691, 0.62511314008006458, 0.16400145561571539]) # random
shuffle(x)[source]

Randomly shuffle a pdarray in place.

Parameters:

x (pdarray) – shuffle the elements of x randomly in place

Return type:

None
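
Examples

A brief sketch (not from the arkouda docstring); shuffle modifies its argument in place and returns None:

>>> rng = ak.random.default_rng()
>>> pda = ak.arange(5)
>>> rng.shuffle(pda)  # pda now holds 0..4 in a random order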

standard_normal(size=None)[source]

Draw samples from a standard Normal distribution (mean=0, stdev=1).

Parameters:

size (numeric_scalars, optional) – Output shape. Default is None, in which case a single value is returned.

Returns:

Pdarray of floats (unless size=None, in which case a single float is returned).

Return type:

pdarray

Notes

For random samples from \(N(\mu, \sigma^2)\), use:

(sigma * standard_normal(size)) + mu

Examples

>>> rng = ak.random.default_rng()
>>> rng.standard_normal()
2.1923875335537315 # random
>>> rng.standard_normal(3)
array([0.8797352989638163, -0.7085325853376141, 0.021728052940979934])  # random
uniform(low=0.0, high=1.0, size=None)[source]

Draw samples from a uniform distribution.

Samples are uniformly distributed over the half-open interval [low, high). In other words, any value within the given interval is equally likely to be drawn by uniform.

Parameters:
  • low (float, optional) – Lower boundary of the output interval. All values generated will be greater than or equal to low. The default value is 0.

  • high (float, optional) – Upper boundary of the output interval. All values generated will be less than high. high must be greater than or equal to low. The default value is 1.0.

  • size (numeric_scalars, optional) – Output shape. Default is None, in which case a single value is returned.

Returns:

Pdarray of floats (unless size=None, in which case a single float is returned).

Return type:

pdarray

See also

integers, random

Examples

>>> rng = ak.random.default_rng()
>>> rng.uniform(-1, 1, 3)
array([0.030785499755523249, 0.08505865366367038, -0.38552048588998722])  # random
class arkouda.GroupBy(keys: groupable | None = None, assume_sorted: bool = False, dropna: bool = True, **kwargs)[source]

Group an array or list of arrays by value, usually in preparation for aggregating the within-group values of another array.

Parameters:
  • keys ((list of) pdarray, Strings, or Categorical) – The array to group by value, or if list, the column arrays to group by row

  • assume_sorted (bool) – If True, assume keys is already sorted (Default: False)

nkeys

The number of key arrays (columns)

Type:

int

size[source]

The length of the input array(s), i.e. number of rows

Type:

int

permutation

The permutation that sorts the keys array(s) by value (row)

Type:

pdarray

unique_keys

The unique values of the keys array(s), in grouped order

Type:

(list of) pdarray, Strings, or Categorical

ngroups

The length of the unique_keys array(s), i.e. number of groups

Type:

int

segments

The start index of each group in the grouped array(s)

Type:

pdarray

logger

Used for all logging operations

Type:

ArkoudaLogger

dropna

If True, and the groupby keys contain NaN values, the NaN values together with the corresponding row will be dropped. Otherwise, the rows corresponding to NaN values will be kept.

Type:

bool (default=True)

Raises:

TypeError – Raised if keys is a pdarray with a dtype other than int64

Notes

Integral pdarrays, Strings, and Categoricals are natively supported, but float64 and bool arrays are not.

For a user-defined class to be groupable, it must inherit from pdarray and define or overload the grouping API:

  1. a ._get_grouping_keys() method that returns a list of pdarrays that can be (co)argsorted.

  2. (Optional) a .group() method that returns the permutation that groups the array

If the input is a single array with a .group() method defined, method 2 will be used; otherwise, method 1 will be used.
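
As a concrete illustration of method 1, the following is a minimal sketch (not from the arkouda source; the subclass is hypothetical and elides construction details):

>>> import arkouda as ak
>>> class TaggedArray(ak.pdarray):
...     """Hypothetical groupable subclass."""
...     def _get_grouping_keys(self):
...         # Return a list of pdarrays that can be (co)argsorted;
...         # here the underlying int64 values serve directly as the key.
...         return [self]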

Reductions
objType = 'GroupBy'
AND(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Bitwise AND of values in each segment.

Using the permutation stored in the GroupBy instance, group another array of values and perform a bitwise AND reduction on each group.

Parameters:

values (pdarray, int64) – The values to group and reduce with AND

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • result (pdarray, int64) – Bitwise AND of values in segments corresponding to keys

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not int64

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if all is not supported for the values dtype
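
Examples

A hedged example (the group-wise ANDs are worked out by hand; OR() and XOR() follow the same pattern with their respective reductions):

>>> keys = ak.array([0, 0, 1, 1])
>>> vals = ak.array([0b1100, 0b1010, 0b0110, 0b0011])
>>> g = ak.GroupBy(keys)
>>> g.AND(vals)
(array([0, 1]), array([8, 2]))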

OR(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Bitwise OR of values in each segment.

Using the permutation stored in the GroupBy instance, group another array of values and perform a bitwise OR reduction on each group.

Parameters:

values (pdarray, int64) – The values to group and reduce with OR

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • result (pdarray, int64) – Bitwise OR of values in segments corresponding to keys

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not int64

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if all is not supported for the values dtype

XOR(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Bitwise XOR of values in each segment.

Using the permutation stored in the GroupBy instance, group another array of values and perform a bitwise XOR reduction on each group.

Parameters:

values (pdarray, int64) – The values to group and reduce with XOR

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • result (pdarray, int64) – Bitwise XOR of values in segments corresponding to keys

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not int64

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if all is not supported for the values dtype

aggregate(values: groupable, operator: str, skipna: bool = True, ddof: arkouda.dtypes.int_scalars = 1) Tuple[groupable, groupable][source]

Using the permutation stored in the GroupBy instance, group another array of values and apply a reduction to each group’s values.

Parameters:
  • values (pdarray) – The values to group and reduce

  • operator (str) – The name of the reduction operator to use

  • skipna (bool) – If True (default), skip NaN values when computing the result

  • ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating std

Returns:

  • unique_keys (groupable) – The unique keys, in grouped order

  • aggregates (groupable) – One aggregate value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if the requested operator is not supported for the values dtype

Examples

>>> keys = ak.arange(0, 10)
>>> vals = ak.linspace(-1, 1, 10)
>>> g = ak.GroupBy(keys)
>>> g.aggregate(vals, 'sum')
(array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), array([-1, -0.77777777777777768,
-0.55555555555555536, -0.33333333333333348, -0.11111111111111116,
0.11111111111111116, 0.33333333333333348, 0.55555555555555536, 0.77777777777777768,
1]))
>>> g.aggregate(vals, 'min')
(array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), array([-1, -0.77777777777777779,
-0.55555555555555558, -0.33333333333333337, -0.11111111111111116, 0.11111111111111116,
0.33333333333333326, 0.55555555555555536, 0.77777777777777768, 1]))
all(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and perform an “and” reduction on each group.

Parameters:

values (pdarray, bool) – The values to group and reduce with “and”

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_all (pdarray, bool) – One bool per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not bool

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if all is not supported for the values dtype
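
Examples

A hedged example (expected output worked out by hand; the bool-array repr follows the style of the other examples here):

>>> keys = ak.array([0, 0, 1, 1])
>>> flags = ak.array([True, False, True, True])
>>> g = ak.GroupBy(keys)
>>> g.all(flags)
(array([0, 1]), array([False True]))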

any(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and perform an “or” reduction on each group.

Parameters:

values (pdarray, bool) – The values to group and reduce with “or”

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_any (pdarray, bool) – One bool per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not bool

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

argmax(values: arkouda.pdarrayclass.pdarray) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the location of the first maximum of each group’s values.

Parameters:

values (pdarray) – The values to group and find argmax

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_argmaxima (pdarray, int64) – One index per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object or if argmax is not supported for the values dtype

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The returned indices refer to the original values array as passed in, not the permutation applied by the GroupBy instance.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.argmax(b)
(array([2, 3, 4]), array([9, 3, 2]))
argmin(values: arkouda.pdarrayclass.pdarray) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the location of the first minimum of each group’s values.

Parameters:

values (pdarray) – The values to group and find argmin

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_argminima (pdarray, int64) – One index per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object or if argmin is not supported for the values dtype

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if argmin is not supported for the values dtype

Notes

The returned indices refer to the original values array as passed in, not the permutation applied by the GroupBy instance.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.argmin(b)
(array([2, 3, 4]), array([5, 4, 2]))
static attach(user_defined_name: str) GroupBy[source]

Return a GroupBy object attached to a name in the arkouda server that was previously registered using register().

Parameters:

user_defined_name (str) – user defined name which GroupBy object was registered under

Returns:

The GroupBy object created by re-attaching to the corresponding server components

Return type:

GroupBy

Raises:

RegistrationError – if user_defined_name is not registered

broadcast(values: arkouda.pdarrayclass.pdarray | arkouda.strings.Strings, permute: bool = True) arkouda.pdarrayclass.pdarray | arkouda.strings.Strings[source]

Fill each group’s segment with a constant value.

Parameters:
  • values (pdarray, Strings) – The values to put in each group’s segment

  • permute (bool) – If True (default), permute broadcast values back to the ordering of the original array on which GroupBy was called. If False, the broadcast values are grouped by value.

Returns:

The broadcasted values

Return type:

pdarray, Strings

Raises:
  • TypeError – Raised if value is not a pdarray object

  • ValueError – Raised if the values array does not have one value per segment

Notes

This function is a sparse analog of np.broadcast. If a GroupBy object represents a sparse matrix (tensor), then this function takes a (dense) column vector and replicates each value to the non-zero elements in the corresponding row.

Examples

>>> a = ak.array([0, 1, 0, 1, 0])
>>> values = ak.array([3, 5])
>>> g = ak.GroupBy(a)
# By default, result is in original order
>>> g.broadcast(values)
array([3, 5, 3, 5, 3])
# With permute=False, result is in grouped order
>>> g.broadcast(values, permute=False)
array([3, 3, 3, 5, 5])
>>> a = ak.randint(1,5,10)
>>> a
array([3, 1, 4, 4, 4, 1, 3, 3, 2, 2])
>>> g = ak.GroupBy(a)
>>> keys,counts = g.count()
>>> g.broadcast(counts > 2)
array([True False True True True False True True False False])
>>> g.broadcast(counts == 3)
array([True False True True True False True True False False])
>>> g.broadcast(counts < 4)
array([True True True True True True True True True True])
static build_from_components(user_defined_name: str | None = None, **kwargs) GroupBy[source]

Build a new GroupBy object from component keys and permutation.

Parameters:
  • user_defined_name (str, optional) – Passing a name will initialize the new GroupBy and assign it the given name

  • kwargs (dict) – Dictionary of components required for rebuilding the GroupBy. Expected keys are “orig_keys”, “permutation”, “unique_keys”, and “segments”

Returns:

The GroupBy object created by using the given components

Return type:

GroupBy
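
Examples

A hedged sketch using the component names listed above (assumes a running arkouda server):

>>> g = ak.GroupBy(ak.array([1, 0, 1, 0]))
>>> comps = {'orig_keys': g.keys, 'permutation': g.permutation,
...          'unique_keys': g.unique_keys, 'segments': g.segments}
>>> g2 = ak.GroupBy.build_from_components(**comps)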

count() Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Count the number of elements in each group, i.e. the number of times each key appears. This counts the total number of rows (including NaN values).

Parameters:

none

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • counts (pdarray, int64) – The number of times each unique key appears

Notes

This method is an alias of size().

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 2, 3, 1, 2, 4, 3, 4, 3, 4])
>>> g = ak.GroupBy(a)
>>> keys,counts = g.count()
>>> keys
array([1, 2, 3, 4])
>>> counts
array([1, 2, 4, 3])
first(values: groupable_element_type) Tuple[groupable, groupable_element_type][source]

First value in each group.

Parameters:

values (pdarray-like) – The values from which to take the first of each group

Returns:

  • unique_keys ((list of) pdarray-like) – The unique keys, in grouped order

  • result (pdarray-like) – The first value of each group
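
Examples

A hedged example (expected output worked out by hand for these inputs):

>>> keys = ak.array([0, 0, 1, 1, 1])
>>> vals = ak.array([9, 8, 7, 6, 5])
>>> g = ak.GroupBy(keys)
>>> g.first(vals)
(array([0, 1]), array([9, 7]))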

static from_return_msg(rep_msg)[source]
is_registered() bool[source]

Return True if the object is contained in the registry

Returns:

Indicates if the object is contained in the registry

Return type:

bool

Raises:

RegistrationError – Raised if there’s a server-side error or a mismatch of registered components

Notes

Objects registered with the server are immune to deletion until they are unregistered.

max(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the maximum of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find maxima

  • skipna (bool) – If True (default), skip NaN values when computing the result

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_maxima (pdarray) – One maximum per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object or if max is not supported for the values dtype

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if max is not supported for the values dtype

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.max(b)
(array([2, 3, 4]), array([4, 4, 3]))
mean(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the mean of each group’s values.

Parameters:
  • values (pdarray) – The values to group and average

  • skipna (bool) – If True (default), skip NaN values when computing the result

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_means (pdarray, float64) – One mean value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The return dtype is always float64.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.mean(b)
(array([2, 3, 4]), array([2.6666666666666665, 2.7999999999999998, 3]))
median(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the median of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find median

  • skipna (bool) – If True (default), skip NaN values when computing the result

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_medians (pdarray, float64) – One median value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The return dtype is always float64.

Examples

>>> a = ak.randint(1,5,9)
>>> a
array([4 1 4 3 2 2 2 3 3])
>>> g = ak.GroupBy(a)
>>> g.keys
array([4 1 4 3 2 2 2 3 3])
>>> b = ak.linspace(-5,5,9)
>>> b
array([-5 -3.75 -2.5 -1.25 0 1.25 2.5 3.75 5])
>>> g.median(b)
(array([1 2 3 4]), array([-3.75 1.25 3.75 -3.75]))
min(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the minimum of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find minima

  • skipna (bool) – If True (default), skip NaN values when computing the result

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_minima (pdarray) – One minimum per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object or if min is not supported for the values dtype

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if min is not supported for the values dtype

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.min(b)
(array([2, 3, 4]), array([1, 1, 3]))
mode(values: groupable) Tuple[groupable, groupable][source]

Most common value in each group. If a group is multi-modal, return the modal value that occurs first.

Parameters:

values ((list of) pdarray-like) – The values from which to take the mode of each group

Returns:

  • unique_keys ((list of) pdarray-like) – The unique keys, in grouped order

  • result ((list of) pdarray-like) – The most common value of each group
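
Examples

A hedged example (expected output worked out by hand; group 1 is multi-modal, so the first-occurring modal value, 6, is returned):

>>> keys = ak.array([0, 0, 0, 1, 1])
>>> vals = ak.array([4, 4, 5, 6, 7])
>>> g = ak.GroupBy(keys)
>>> g.mode(vals)
(array([0, 1]), array([4, 6]))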

most_common(values)[source]

(Deprecated) See GroupBy.mode().

nunique(values: groupable) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the number of unique values in each group.

Parameters:

values (pdarray, int64) – The values to group and find unique values

Returns:

  • unique_keys (groupable) – The unique keys, in grouped order

  • group_nunique (groupable) – Number of unique values per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the dtype(s) of values array(s) does/do not support the nunique method

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if nunique is not supported for the values dtype

Examples

>>> data = ak.array([3, 4, 3, 1, 1, 4, 3, 4, 1, 4])
>>> data
array([3, 4, 3, 1, 1, 4, 3, 4, 1, 4])
>>> labels = ak.array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4])
>>> labels
array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4])
>>> g = ak.GroupBy(labels)
>>> g.keys
array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4])
>>> g.nunique(data)
(array([1, 2, 3, 4]), array([2, 2, 3, 1]))
#    Group (1,1,1) has values [3,4,3] -> there are 2 unique values 3&4
#    Group (2,2,2) has values [1,1,4] -> 2 unique values 1&4
#    Group (3,3,3) has values [3,4,1] -> 3 unique values
#    Group (4) has values [4] -> 1 unique value
prod(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the product of each group’s values.

Parameters:
  • values (pdarray) – The values to group and multiply

  • skipna (bool) – If True (default), skip NaN values when computing the result

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_products (pdarray, float64) – One product per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if prod is not supported for the values dtype

Notes

The return dtype is always float64.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.prod(b)
(array([2, 3, 4]), array([12, 108.00000000000003, 8.9999999999999982]))
register(user_defined_name: str) GroupBy[source]

Register this GroupBy object and underlying components with the Arkouda server

Parameters:

user_defined_name (str) – User-defined name under which the GroupBy is to be registered; this will be the root name for the underlying components

Returns:

The same GroupBy object, now registered with the arkouda server under the updated name. This is an in-place modification; the object is returned to support a fluent programming style. Note that two different GroupBys cannot be registered under the same name.

Return type:

GroupBy

Raises:
  • TypeError – Raised if user_defined_name is not a str

  • RegistrationError – If the server was unable to register the GroupBy with the user_defined_name

Notes

Objects registered with the server are immune to deletion until they are unregistered.
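
Examples

A hedged sketch of the registration round trip (the name 'my_groupby' is arbitrary and assumes a running arkouda server):

>>> g = ak.GroupBy(ak.array([0, 1, 0]))
>>> g = g.register('my_groupby')
>>> g2 = ak.GroupBy.attach('my_groupby')
>>> g.unregister()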

size() Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Count the number of elements in each group, i.e. the number of times each key appears. This counts the total number of rows (including NaN values).

Parameters:

none

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • counts (pdarray, int64) – The number of times each unique key appears

See also

count

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 2, 3, 1, 2, 4, 3, 4, 3, 4])
>>> g = ak.GroupBy(a)
>>> keys,counts = g.size()
>>> keys
array([1, 2, 3, 4])
>>> counts
array([1, 2, 4, 3])
std(values: arkouda.pdarrayclass.pdarray, skipna: bool = True, ddof: arkouda.dtypes.int_scalars = 1) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the standard deviation of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find standard deviation

  • skipna (bool) – If True (default), skip NaN values when computing the result

  • ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating std

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_stds (pdarray, float64) – One std value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The return dtype is always float64.

The standard deviation is the square root of the average of the squared deviations from the mean, i.e., std = sqrt(mean((x - x.mean())**2)).

The average squared deviation is normally calculated as x.sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of the infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables. The standard deviation computed in this function is the square root of the estimated variance, so even with ddof=1, it will not be an unbiased estimate of the standard deviation per se.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.std(b)
(array([2 3 4]), array([1.5275252316519465 1.0954451150103321 0]))
sum(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and sum each group’s values.

Parameters:
  • values (pdarray) – The values to group and sum

  • skipna (bool) – If True (default), skip NaN values when computing the result

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_sums (pdarray) – One sum per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The grouped sum of a boolean pdarray returns integers.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.sum(b)
(array([2, 3, 4]), array([8, 14, 6]))
to_hdf(prefix_path, dataset='groupby', mode='truncate', file_type='distribute')[source]

Save the GroupBy to HDF5. The result is a collection of HDF5 files, one file per locale of the arkouda server, where each filename starts with prefix_path.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files will share

  • dataset (str) – Name prefix for saved data within the HDF5 file

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, add data as a new column to existing files.

  • file_type (str {"single" | "distribute"}) – Default: “distribute”. When set to “single”, the dataset is written to a single file; when “distribute”, the dataset is written to one file per locale. This option is only supported for HDF5 files and has no effect on Parquet files.

Returns:

None

Notes

GroupBy is not currently supported by Parquet.
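
Examples

A hedged usage sketch (the prefix path is hypothetical; with the default file_type, one HDF5 file is written per server locale):

>>> g = ak.GroupBy(ak.array([0, 1, 0, 1]))
>>> g.to_hdf('/tmp/groupby_example')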

unique(values: groupable)[source]

Return the set of unique values in each group, as a SegArray.

Parameters:

values ((list of) pdarray-like) – The values to unique

Returns:

  • unique_keys ((list of) pdarray-like) – The unique keys, in grouped order

  • result ((list of) SegArray) – The unique values of each group

Raises:

TypeError – Raised if values is or contains Strings or Categorical
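
Examples

A hedged example; the second element of the returned tuple is a SegArray whose sub-arrays here are [4, 5] and [6] (its exact repr varies by version, so only the keys are shown):

>>> keys = ak.array([0, 0, 0, 1, 1])
>>> vals = ak.array([4, 4, 5, 6, 6])
>>> g = ak.GroupBy(keys)
>>> uk, uvals = g.unique(vals)
>>> uk
array([0, 1])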

unregister()[source]

Unregister this GroupBy object from the arkouda server, where it was previously registered using register() and/or attached to using attach().

Raises:

RegistrationError – If the object is already unregistered or if there is a server error when attempting to unregister

Notes

Objects registered with the server are immune to deletion until they are unregistered.

static unregister_groupby_by_name(user_defined_name: str) None[source]

Unregister a GroupBy object by the name under which it was registered with the arkouda server via register().

Parameters:

user_defined_name (str) – Name under which the GroupBy object was registered

Raises:
  • TypeError – if user_defined_name is not a string

  • RegistrationError – if there is an issue attempting to unregister any underlying components

update_hdf(prefix_path: str, dataset: str = 'groupby', repack: bool = True)[source]
var(values: arkouda.pdarrayclass.pdarray, skipna: bool = True, ddof: arkouda.dtypes.int_scalars = 1) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the variance of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find variance

  • skipna (bool) – If True (default), skip NaN values when computing the result

  • ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating var

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_vars (pdarray, float64) – One var value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The return dtype is always float64.

The variance is the average of the squared deviations from the mean, i.e., var = mean((x - x.mean())**2).

The mean is normally calculated as x.sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of a hypothetical infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.var(b)
(array([2 3 4]), array([2.333333333333333 1.2 0]))
class arkouda.GroupBy(keys: groupable | None = None, assume_sorted: bool = False, dropna: bool = True, **kwargs)[source]

Group an array or list of arrays by value, usually in preparation for aggregating the within-group values of another array.

Parameters:
  • keys ((list of) pdarray, Strings, or Categorical) – The array to group by value, or if list, the column arrays to group by row

  • assume_sorted (bool) – If True, assume keys is already sorted (Default: False)

nkeys

The number of key arrays (columns)

Type:

int

size[source]

The length of the input array(s), i.e. number of rows

Type:

int

permutation

The permutation that sorts the keys array(s) by value (row)

Type:

pdarray

unique_keys

The unique values of the keys array(s), in grouped order

Type:

(list of) pdarray, Strings, or Categorical

ngroups

The length of the unique_keys array(s), i.e. number of groups

Type:

int

segments

The start index of each group in the grouped array(s)

Type:

pdarray

logger

Used for all logging operations

Type:

ArkoudaLogger

dropna

If True, and the groupby keys contain NaN values, the NaN values together with the corresponding row will be dropped. Otherwise, the rows corresponding to NaN values will be kept.

Type:

bool (default=True)

Raises:

TypeError – Raised if keys is a pdarray with a dtype other than int64

Notes

Integral pdarrays, Strings, and Categoricals are natively supported, but float64 and bool arrays are not.

For a user-defined class to be groupable, it must inherit from pdarray and define or overload the grouping API:

  1. a ._get_grouping_keys() method that returns a list of pdarrays that can be (co)argsorted.

  2. (Optional) a .group() method that returns the permutation that groups the array

If the input is a single array with a .group() method defined, method 2 will be used; otherwise, method 1 will be used.

Reductions
objType = 'GroupBy'
AND(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Bitwise AND of values in each segment.

Using the permutation stored in the GroupBy instance, group another array of values and perform a bitwise AND reduction on each group.

Parameters:

values (pdarray, int64) – The values to group and reduce with AND

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • result (pdarray, int64) – Bitwise AND of values in segments corresponding to keys

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not int64

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if all is not supported for the values dtype

OR(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Bitwise OR of values in each segment.

Using the permutation stored in the GroupBy instance, group another array of values and perform a bitwise OR reduction on each group.

Parameters:

values (pdarray, int64) – The values to group and reduce with OR

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • result (pdarray, int64) – Bitwise OR of values in segments corresponding to keys

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not int64

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if all is not supported for the values dtype

XOR(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Bitwise XOR of values in each segment.

Using the permutation stored in the GroupBy instance, group another array of values and perform a bitwise XOR reduction on each group.

Parameters:

values (pdarray, int64) – The values to group and reduce with XOR

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • result (pdarray, int64) – Bitwise XOR of values in segments corresponding to keys

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not int64

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if all is not supported for the values dtype

aggregate(values: groupable, operator: str, skipna: bool = True, ddof: arkouda.dtypes.int_scalars = 1) Tuple[groupable, groupable][source]

Using the permutation stored in the GroupBy instance, group another array of values and apply a reduction to each group’s values.

Parameters:
  • values (pdarray) – The values to group and reduce

  • operator (str) – The name of the reduction operator to use

  • skipna (bool) – boolean which determines if NANs should be skipped

  • ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating std

Returns:

  • unique_keys (groupable) – The unique keys, in grouped order

  • aggregates (groupable) – One aggregate value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if the requested operator is not supported for the values dtype

Examples

>>> keys = ak.arange(0, 10)
>>> vals = ak.linspace(-1, 1, 10)
>>> g = ak.GroupBy(keys)
>>> g.aggregate(vals, 'sum')
(array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), array([-1, -0.77777777777777768,
-0.55555555555555536, -0.33333333333333348, -0.11111111111111116,
0.11111111111111116, 0.33333333333333348, 0.55555555555555536, 0.77777777777777768,
1]))
>>> g.aggregate(vals, 'min')
(array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), array([-1, -0.77777777777777779,
-0.55555555555555558, -0.33333333333333337, -0.11111111111111116, 0.11111111111111116,
0.33333333333333326, 0.55555555555555536, 0.77777777777777768, 1]))
all(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and perform an “and” reduction on each group.

Parameters:

values (pdarray, bool) – The values to group and reduce with “and”

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_any (pdarray, bool) – One bool per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not bool

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if all is not supported for the values dtype

any(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and perform an “or” reduction on each group.

Parameters:

values (pdarray, bool) – The values to group and reduce with “or”

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_any (pdarray, bool) – One bool per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not bool

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

argmax(values: arkouda.pdarrayclass.pdarray) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the location of the first maximum of each group’s values.

Parameters:

values (pdarray) – The values to group and find argmax

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_argmaxima (pdarray, int64) – One index per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object or if argmax is not supported for the values dtype

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The returned indices refer to the original values array as passed in, not the permutation applied by the GroupBy instance.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.argmax(b)
(array([2, 3, 4]), array([9, 3, 2]))
argmin(values: arkouda.pdarrayclass.pdarray) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the location of the first minimum of each group’s values.

Parameters:

values (pdarray) – The values to group and find argmin

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_argminima (pdarray, int64) – One index per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object or if argmax is not supported for the values dtype

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if argmin is not supported for the values dtype

Notes

The returned indices refer to the original values array as passed in, not the permutation applied by the GroupBy instance.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.argmin(b)
(array([2, 3, 4]), array([5, 4, 2]))
static attach(user_defined_name: str) GroupBy[source]

Function to return a GroupBy object attached to the registered name in the arkouda server which was registered using register()

Parameters:

user_defined_name (str) – user defined name which GroupBy object was registered under

Returns:

The GroupBy object created by re-attaching to the corresponding server components

Return type:

GroupBy

Raises:

RegistrationError – if user_defined_name is not registered

broadcast(values: arkouda.pdarrayclass.pdarray | arkouda.strings.Strings, permute: bool = True) arkouda.pdarrayclass.pdarray | arkouda.strings.Strings[source]

Fill each group’s segment with a constant value.

Parameters:
  • values (pdarray, Strings) – The values to put in each group’s segment

  • permute (bool) – If True (default), permute broadcast values back to the ordering of the original array on which GroupBy was called. If False, the broadcast values are grouped by value.

Returns:

The broadcasted values

Return type:

pdarray, Strings

Raises:
  • TypeError – Raised if value is not a pdarray object

  • ValueError – Raised if the values array does not have one value per segment

Notes

This function is a sparse analog of np.broadcast. If a GroupBy object represents a sparse matrix (tensor), then this function takes a (dense) column vector and replicates each value to the non-zero elements in the corresponding row.

Examples

>>> a = ak.array([0, 1, 0, 1, 0])
>>> values = ak.array([3, 5])
>>> g = ak.GroupBy(a)
# By default, result is in original order
>>> g.broadcast(values)
array([3, 5, 3, 5, 3])
# With permute=False, result is in grouped order
>>> g.broadcast(values, permute=False)
array([3, 3, 3, 5, 5]
>>> a = ak.randint(1,5,10)
>>> a
array([3, 1, 4, 4, 4, 1, 3, 3, 2, 2])
>>> g = ak.GroupBy(a)
>>> keys,counts = g.count()
>>> g.broadcast(counts > 2)
array([True False True True True False True True False False])
>>> g.broadcast(counts == 3)
array([True False True True True False True True False False])
>>> g.broadcast(counts < 4)
array([True True True True True True True True True True])
static build_from_components(user_defined_name: str | None = None, **kwargs) GroupBy[source]

function to build a new GroupBy object from component keys and permutation.

Parameters:
  • user_defined_name (str (Optional) Passing a name will init the new GroupBy) – and assign it the given name

  • kwargs (dict Dictionary of components required for rebuilding the GroupBy.) – Expected keys are “orig_keys”, “permutation”, “unique_keys”, and “segments”

Returns:

The GroupBy object created by using the given components

Return type:

GroupBy

count() Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Count the number of elements in each group, i.e. the number of times each key appears. This counts the total number of rows (including NaN values).

Parameters:

none

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • counts (pdarray, int64) – The number of times each unique key appears

Notes

This alias is an alias of “size”.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 2, 3, 1, 2, 4, 3, 4, 3, 4])
>>> g = ak.GroupBy(a)
>>> keys,counts = g.count()
>>> keys
array([1, 2, 3, 4])
>>> counts
array([1, 2, 4, 3])
first(values: groupable_element_type) Tuple[groupable, groupable_element_type][source]

First value in each group.

Parameters:

values (pdarray-like) – The values from which to take the first of each group

Returns:

  • unique_keys ((list of) pdarray-like) – The unique keys, in grouped order

  • result (pdarray-like) – The first value of each group

static from_return_msg(rep_msg)[source]
is_registered() bool[source]

Return True if the object is contained in the registry

Returns:

Indicates if the object is contained in the registry

Return type:

bool

Raises:

RegistrationError – Raised if there’s a server-side error or a mismatch of registered components

Notes

Objects registered with the server are immune to deletion until they are unregistered.

max(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the maximum of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find maxima

  • skipna (bool) – boolean which determines if NANs should be skipped

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_maxima (pdarray) – One maximum per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object or if max is not supported for the values dtype

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if max is not supported for the values dtype

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.max(b)
(array([2, 3, 4]), array([4, 4, 3]))
mean(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the mean of each group’s values.

Parameters:
  • values (pdarray) – The values to group and average

  • skipna (bool) – boolean which determines if NANs should be skipped

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_means (pdarray, float64) – One mean value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The return dtype is always float64.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.mean(b)
(array([2, 3, 4]), array([2.6666666666666665, 2.7999999999999998, 3]))
median(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the median of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find median

  • skipna (bool) – boolean which determines if NANs should be skipped

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_medians (pdarray, float64) – One median value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The return dtype is always float64.

Examples

>>> a = ak.randint(1,5,9)
>>> a
array([4 1 4 3 2 2 2 3 3])
>>> g = ak.GroupBy(a)
>>> g.keys
array([4 1 4 3 2 2 2 3 3])
>>> b = ak.linspace(-5,5,9)
>>> b
array([-5 -3.75 -2.5 -1.25 0 1.25 2.5 3.75 5])
>>> g.median(b)
(array([1 2 3 4]), array([-3.75 1.25 3.75 -3.75]))
min(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the minimum of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find minima

  • skipna (bool) – boolean which determines if NANs should be skipped

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_minima (pdarray) – One minimum per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object or if min is not supported for the values dtype

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if min is not supported for the values dtype

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.min(b)
(array([2, 3, 4]), array([1, 1, 3]))
mode(values: groupable) Tuple[groupable, groupable][source]

Most common value in each group. If a group is multi-modal, return the modal value that occurs first.

Parameters:

values ((list of) pdarray-like) – The values from which to take the mode of each group

Returns:

  • unique_keys ((list of) pdarray-like) – The unique keys, in grouped order

  • result ((list of) pdarray-like) – The most common value of each group

most_common(values)[source]

(Deprecated) See GroupBy.mode().

nunique(values: groupable) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the number of unique values in each group.

Parameters:

values (pdarray, int64) – The values to group and find unique values

Returns:

  • unique_keys (groupable) – The unique keys, in grouped order

  • group_nunique (groupable) – Number of unique values per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the dtype(s) of values array(s) does/do not support the nunique method

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if nunique is not supported for the values dtype

Examples

>>> data = ak.array([3, 4, 3, 1, 1, 4, 3, 4, 1, 4])
>>> data
array([3, 4, 3, 1, 1, 4, 3, 4, 1, 4])
>>> labels = ak.array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4])
>>> labels
ak.array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4])
>>> g = ak.GroupBy(labels)
>>> g.keys
ak.array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4])
>>> g.nunique(data)
array([1,2,3,4]), array([2, 2, 3, 1])
#    Group (1,1,1) has values [3,4,3] -> there are 2 unique values 3&4
#    Group (2,2,2) has values [1,1,4] -> 2 unique values 1&4
#    Group (3,3,3) has values [3,4,1] -> 3 unique values
#    Group (4) has values [4] -> 1 unique value
prod(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the product of each group’s values.

Parameters:
  • values (pdarray) – The values to group and multiply

  • skipna (bool) – boolean which determines if NANs should be skipped

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_products (pdarray, float64) – One product per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if prod is not supported for the values dtype

Notes

The return dtype is always float64.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.prod(b)
(array([2, 3, 4]), array([12, 108.00000000000003, 8.9999999999999982]))
register(user_defined_name: str) GroupBy[source]

Register this GroupBy object and underlying components with the Arkouda server

Parameters:

user_defined_name (str) – user defined name the GroupBy is to be registered under, this will be the root name for underlying components

Returns:

The same GroupBy which is now registered with the arkouda server and has an updated name. This is an in-place modification, the original is returned to support a fluid programming style. Please note you cannot register two different GroupBys with the same name.

Return type:

GroupBy

Raises:
  • TypeError – Raised if user_defined_name is not a str

  • RegistrationError – If the server was unable to register the GroupBy with the user_defined_name

Notes

Objects registered with the server are immune to deletion until they are unregistered.

size() Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Count the number of elements in each group, i.e. the number of times each key appears. This counts the total number of rows (including NaN values).

Parameters:

none

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • counts (pdarray, int64) – The number of times each unique key appears

See also

count

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 2, 3, 1, 2, 4, 3, 4, 3, 4])
>>> g = ak.GroupBy(a)
>>> keys,counts = g.size()
>>> keys
array([1, 2, 3, 4])
>>> counts
array([1, 2, 4, 3])
std(values: arkouda.pdarrayclass.pdarray, skipna: bool = True, ddof: arkouda.dtypes.int_scalars = 1) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the standard deviation of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find standard deviation

  • skipna (bool) – boolean which determines if NANs should be skipped

  • ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating std

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_stds (pdarray, float64) – One std value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The return dtype is always float64.

The standard deviation is the square root of the average of the squared deviations from the mean, i.e., std = sqrt(mean((x - x.mean())**2)).

The average squared deviation is normally calculated as x.sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of the infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables. The standard deviation computed in this function is the square root of the estimated variance, so even with ddof=1, it will not be an unbiased estimate of the standard deviation per se.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.std(b)
(array([2 3 4]), array([1.5275252316519465 1.0954451150103321 0]))
sum(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and sum each group’s values.

Parameters:

values (pdarray) – The values to group and sum

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_sums (pdarray) – One sum per unique key in the GroupBy instance

  • skipna (bool) – boolean which determines if NANs should be skipped

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The grouped sum of a boolean pdarray returns integers.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.sum(b)
(array([2, 3, 4]), array([8, 14, 6]))
to_hdf(prefix_path, dataset='groupby', mode='truncate', file_type='distribute')[source]

Save the GroupBy to HDF5. The result is a collection of HDF5 files, one file per locale of the arkouda server, where each filename starts with prefix_path.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files will share

  • dataset (str) – Name prefix for saved data within the HDF5 file

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, add data as a new column to existing files.

  • file_type (str ("single" | "distribute")) – Default: “distribute” When set to single, dataset is written to a single file. When distribute, dataset is written on a file per locale. This is only supported by HDF5 files and will have no impact of Parquet Files.

Returns:

  • None

  • GroupBy is not currently supported by Parquet

unique(values: groupable)[source]

Return the set of unique values in each group, as a SegArray.

Parameters:

values ((list of) pdarray-like) – The values to unique

Returns:

  • unique_keys ((list of) pdarray-like) – The unique keys, in grouped order

  • result ((list of) SegArray) – The unique values of each group

Raises:

TypeError – Raised if values is or contains Strings or Categorical

unregister()[source]

Unregister this GroupBy object in the arkouda server which was previously registered using register() and/or attached to using attach()

Raises:

RegistrationError – If the object is already unregistered or if there is a server error when attempting to unregister

Notes

Objects registered with the server are immune to deletion until they are unregistered.

static unregister_groupby_by_name(user_defined_name: str) None[source]

Function to unregister GroupBy object by name which was registered with the arkouda server via register()

Parameters:

user_defined_name (str) – Name under which the GroupBy object was registered

Raises:
  • TypeError – if user_defined_name is not a string

  • RegistrationError – if there is an issue attempting to unregister any underlying components

update_hdf(prefix_path: str, dataset: str = 'groupby', repack: bool = True)[source]
var(values: arkouda.pdarrayclass.pdarray, skipna: bool = True, ddof: arkouda.dtypes.int_scalars = 1) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the variance of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find variance

  • skipna (bool) – Whether to skip NaN values (default: True)

  • ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating var

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_vars (pdarray, float64) – One var value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The return dtype is always float64.

The variance is the average of the squared deviations from the mean, i.e., var = mean((x - x.mean())**2).

The mean is normally calculated as x.sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of a hypothetical infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.var(b)
(array([2, 3, 4]), array([2.333333333333333, 1.2, 0]))
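Because ddof defaults to 1, passing ddof=0 rescales each group's variance by (n - 1)/n. Continuing the session above (values computed by hand; display formatting may differ):

>>> g.var(b, ddof=0)
(array([2, 3, 4]), array([1.5555555555555556, 0.96, 0]))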
class arkouda.GroupBy(keys: groupable | None = None, assume_sorted: bool = False, dropna: bool = True, **kwargs)[source]

Group an array or list of arrays by value, usually in preparation for aggregating the within-group values of another array.

Parameters:
  • keys ((list of) pdarray, Strings, or Categorical) – The array to group by value, or if list, the column arrays to group by row

  • assume_sorted (bool) – If True, assume keys is already sorted (Default: False)

nkeys

The number of key arrays (columns)

Type:

int

size[source]

The length of the input array(s), i.e. number of rows

Type:

int

permutation

The permutation that sorts the keys array(s) by value (row)

Type:

pdarray

unique_keys

The unique values of the keys array(s), in grouped order

Type:

(list of) pdarray, Strings, or Categorical

ngroups

The length of the unique_keys array(s), i.e. number of groups

Type:

int

segments

The start index of each group in the grouped array(s)

Type:

pdarray

logger

Used for all logging operations

Type:

ArkoudaLogger

dropna

If True, and the groupby keys contain NaN values, the NaN values together with the corresponding row will be dropped. Otherwise, the rows corresponding to NaN values will be kept.

Type:

bool (default=True)

Raises:

TypeError – Raised if keys is a pdarray with a dtype other than int64

Notes

Integral pdarrays, Strings, and Categoricals are natively supported, but float64 and bool arrays are not.

For a user-defined class to be groupable, it must inherit from pdarray and define or overload the grouping API:

  1. a ._get_grouping_keys() method that returns a list of pdarrays that can be (co)argsorted.

  2. (Optional) a .group() method that returns the permutation that groups the array

If the input is a single array with a .group() method defined, method 2 will be used; otherwise, method 1 will be used.

Reductions
objType = 'GroupBy'
AND(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Bitwise AND of values in each segment.

Using the permutation stored in the GroupBy instance, group another array of values and perform a bitwise AND reduction on each group.

Parameters:

values (pdarray, int64) – The values to group and reduce with AND

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • result (pdarray, int64) – Bitwise AND of values in segments corresponding to keys

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not int64

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if all is not supported for the values dtype

OR(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Bitwise OR of values in each segment.

Using the permutation stored in the GroupBy instance, group another array of values and perform a bitwise OR reduction on each group.

Parameters:

values (pdarray, int64) – The values to group and reduce with OR

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • result (pdarray, int64) – Bitwise OR of values in segments corresponding to keys

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not int64

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if all is not supported for the values dtype

XOR(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Bitwise XOR of values in each segment.

Using the permutation stored in the GroupBy instance, group another array of values and perform a bitwise XOR reduction on each group.

Parameters:

values (pdarray, int64) – The values to group and reduce with XOR

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • result (pdarray, int64) – Bitwise XOR of values in segments corresponding to keys

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not int64

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if all is not supported for the values dtype

aggregate(values: groupable, operator: str, skipna: bool = True, ddof: arkouda.dtypes.int_scalars = 1) Tuple[groupable, groupable][source]

Using the permutation stored in the GroupBy instance, group another array of values and apply a reduction to each group’s values.

Parameters:
  • values (pdarray) – The values to group and reduce

  • operator (str) – The name of the reduction operator to use

  • skipna (bool) – boolean which determines if NANs should be skipped

  • ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating std

Returns:

  • unique_keys (groupable) – The unique keys, in grouped order

  • aggregates (groupable) – One aggregate value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if the requested operator is not supported for the values dtype

Examples

>>> keys = ak.arange(0, 10)
>>> vals = ak.linspace(-1, 1, 10)
>>> g = ak.GroupBy(keys)
>>> g.aggregate(vals, 'sum')
(array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), array([-1, -0.77777777777777768,
-0.55555555555555536, -0.33333333333333348, -0.11111111111111116,
0.11111111111111116, 0.33333333333333348, 0.55555555555555536, 0.77777777777777768,
1]))
>>> g.aggregate(vals, 'min')
(array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), array([-1, -0.77777777777777779,
-0.55555555555555558, -0.33333333333333337, -0.11111111111111116, 0.11111111111111116,
0.33333333333333326, 0.55555555555555536, 0.77777777777777768, 1]))
all(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and perform an “and” reduction on each group.

Parameters:

values (pdarray, bool) – The values to group and reduce with “and”

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_any (pdarray, bool) – One bool per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not bool

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if all is not supported for the values dtype

any(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and perform an “or” reduction on each group.

Parameters:

values (pdarray, bool) – The values to group and reduce with “or”

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_any (pdarray, bool) – One bool per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not bool

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

argmax(values: arkouda.pdarrayclass.pdarray) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the location of the first maximum of each group’s values.

Parameters:

values (pdarray) – The values to group and find argmax

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_argmaxima (pdarray, int64) – One index per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object or if argmax is not supported for the values dtype

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The returned indices refer to the original values array as passed in, not the permutation applied by the GroupBy instance.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.argmax(b)
(array([2, 3, 4]), array([9, 3, 2]))
argmin(values: arkouda.pdarrayclass.pdarray) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the location of the first minimum of each group’s values.

Parameters:

values (pdarray) – The values to group and find argmin

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_argminima (pdarray, int64) – One index per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object or if argmax is not supported for the values dtype

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if argmin is not supported for the values dtype

Notes

The returned indices refer to the original values array as passed in, not the permutation applied by the GroupBy instance.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.argmin(b)
(array([2, 3, 4]), array([5, 4, 2]))
static attach(user_defined_name: str) GroupBy[source]

Function to return a GroupBy object attached to the registered name in the arkouda server which was registered using register()

Parameters:

user_defined_name (str) – user defined name which GroupBy object was registered under

Returns:

The GroupBy object created by re-attaching to the corresponding server components

Return type:

GroupBy

Raises:

RegistrationError – if user_defined_name is not registered

broadcast(values: arkouda.pdarrayclass.pdarray | arkouda.strings.Strings, permute: bool = True) arkouda.pdarrayclass.pdarray | arkouda.strings.Strings[source]

Fill each group’s segment with a constant value.

Parameters:
  • values (pdarray, Strings) – The values to put in each group’s segment

  • permute (bool) – If True (default), permute broadcast values back to the ordering of the original array on which GroupBy was called. If False, the broadcast values are grouped by value.

Returns:

The broadcasted values

Return type:

pdarray, Strings

Raises:
  • TypeError – Raised if value is not a pdarray object

  • ValueError – Raised if the values array does not have one value per segment

Notes

This function is a sparse analog of np.broadcast. If a GroupBy object represents a sparse matrix (tensor), then this function takes a (dense) column vector and replicates each value to the non-zero elements in the corresponding row.

Examples

>>> a = ak.array([0, 1, 0, 1, 0])
>>> values = ak.array([3, 5])
>>> g = ak.GroupBy(a)
# By default, result is in original order
>>> g.broadcast(values)
array([3, 5, 3, 5, 3])
# With permute=False, result is in grouped order
>>> g.broadcast(values, permute=False)
array([3, 3, 3, 5, 5]
>>> a = ak.randint(1,5,10)
>>> a
array([3, 1, 4, 4, 4, 1, 3, 3, 2, 2])
>>> g = ak.GroupBy(a)
>>> keys,counts = g.count()
>>> g.broadcast(counts > 2)
array([True False True True True False True True False False])
>>> g.broadcast(counts == 3)
array([True False True True True False True True False False])
>>> g.broadcast(counts < 4)
array([True True True True True True True True True True])
static build_from_components(user_defined_name: str | None = None, **kwargs) GroupBy[source]

function to build a new GroupBy object from component keys and permutation.

Parameters:
  • user_defined_name (str (Optional) Passing a name will init the new GroupBy) – and assign it the given name

  • kwargs (dict Dictionary of components required for rebuilding the GroupBy.) – Expected keys are “orig_keys”, “permutation”, “unique_keys”, and “segments”

Returns:

The GroupBy object created by using the given components

Return type:

GroupBy

count() Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Count the number of elements in each group, i.e. the number of times each key appears. This counts the total number of rows (including NaN values).

Parameters:

none

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • counts (pdarray, int64) – The number of times each unique key appears

Notes

This alias is an alias of “size”.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 2, 3, 1, 2, 4, 3, 4, 3, 4])
>>> g = ak.GroupBy(a)
>>> keys,counts = g.count()
>>> keys
array([1, 2, 3, 4])
>>> counts
array([1, 2, 4, 3])
first(values: groupable_element_type) Tuple[groupable, groupable_element_type][source]

First value in each group.

Parameters:

values (pdarray-like) – The values from which to take the first of each group

Returns:

  • unique_keys ((list of) pdarray-like) – The unique keys, in grouped order

  • result (pdarray-like) – The first value of each group

static from_return_msg(rep_msg)[source]
is_registered() bool[source]

Return True if the object is contained in the registry

Returns:

Indicates if the object is contained in the registry

Return type:

bool

Raises:

RegistrationError – Raised if there’s a server-side error or a mismatch of registered components

Notes

Objects registered with the server are immune to deletion until they are unregistered.

max(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the maximum of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find maxima

  • skipna (bool) – boolean which determines if NANs should be skipped

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_maxima (pdarray) – One maximum per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object or if max is not supported for the values dtype

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if max is not supported for the values dtype

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.max(b)
(array([2, 3, 4]), array([4, 4, 3]))
mean(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the mean of each group’s values.

Parameters:
  • values (pdarray) – The values to group and average

  • skipna (bool) – boolean which determines if NANs should be skipped

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_means (pdarray, float64) – One mean value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The return dtype is always float64.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.mean(b)
(array([2, 3, 4]), array([2.6666666666666665, 2.7999999999999998, 3]))
median(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the median of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find median

  • skipna (bool) – boolean which determines if NANs should be skipped

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_medians (pdarray, float64) – One median value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The return dtype is always float64.

Examples

>>> a = ak.randint(1,5,9)
>>> a
array([4 1 4 3 2 2 2 3 3])
>>> g = ak.GroupBy(a)
>>> g.keys
array([4 1 4 3 2 2 2 3 3])
>>> b = ak.linspace(-5,5,9)
>>> b
array([-5 -3.75 -2.5 -1.25 0 1.25 2.5 3.75 5])
>>> g.median(b)
(array([1 2 3 4]), array([-3.75 1.25 3.75 -3.75]))
min(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the minimum of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find minima

  • skipna (bool) – boolean which determines if NANs should be skipped

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_minima (pdarray) – One minimum per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object or if min is not supported for the values dtype

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if min is not supported for the values dtype

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.min(b)
(array([2, 3, 4]), array([1, 1, 3]))
mode(values: groupable) Tuple[groupable, groupable][source]

Most common value in each group. If a group is multi-modal, return the modal value that occurs first.

Parameters:

values ((list of) pdarray-like) – The values from which to take the mode of each group

Returns:

  • unique_keys ((list of) pdarray-like) – The unique keys, in grouped order

  • result ((list of) pdarray-like) – The most common value of each group

most_common(values)[source]

(Deprecated) See GroupBy.mode().

nunique(values: groupable) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the number of unique values in each group.

Parameters:

values (pdarray, int64) – The values to group and find unique values

Returns:

  • unique_keys (groupable) – The unique keys, in grouped order

  • group_nunique (groupable) – Number of unique values per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the dtype(s) of values array(s) does/do not support the nunique method

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if nunique is not supported for the values dtype

Examples

>>> data = ak.array([3, 4, 3, 1, 1, 4, 3, 4, 1, 4])
>>> data
array([3, 4, 3, 1, 1, 4, 3, 4, 1, 4])
>>> labels = ak.array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4])
>>> labels
ak.array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4])
>>> g = ak.GroupBy(labels)
>>> g.keys
ak.array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4])
>>> g.nunique(data)
array([1,2,3,4]), array([2, 2, 3, 1])
#    Group (1,1,1) has values [3,4,3] -> there are 2 unique values 3&4
#    Group (2,2,2) has values [1,1,4] -> 2 unique values 1&4
#    Group (3,3,3) has values [3,4,1] -> 3 unique values
#    Group (4) has values [4] -> 1 unique value
prod(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the product of each group’s values.

Parameters:
  • values (pdarray) – The values to group and multiply

  • skipna (bool) – boolean which determines if NANs should be skipped

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_products (pdarray, float64) – One product per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if prod is not supported for the values dtype

Notes

The return dtype is always float64.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.prod(b)
(array([2, 3, 4]), array([12, 108.00000000000003, 8.9999999999999982]))
register(user_defined_name: str) GroupBy[source]

Register this GroupBy object and underlying components with the Arkouda server

Parameters:

user_defined_name (str) – user defined name the GroupBy is to be registered under, this will be the root name for underlying components

Returns:

The same GroupBy which is now registered with the arkouda server and has an updated name. This is an in-place modification, the original is returned to support a fluid programming style. Please note you cannot register two different GroupBys with the same name.

Return type:

GroupBy

Raises:
  • TypeError – Raised if user_defined_name is not a str

  • RegistrationError – If the server was unable to register the GroupBy with the user_defined_name

Notes

Objects registered with the server are immune to deletion until they are unregistered.

size() Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Count the number of elements in each group, i.e. the number of times each key appears. This counts the total number of rows (including NaN values).

Parameters:

none

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • counts (pdarray, int64) – The number of times each unique key appears

See also

count

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 2, 3, 1, 2, 4, 3, 4, 3, 4])
>>> g = ak.GroupBy(a)
>>> keys,counts = g.size()
>>> keys
array([1, 2, 3, 4])
>>> counts
array([1, 2, 4, 3])
std(values: arkouda.pdarrayclass.pdarray, skipna: bool = True, ddof: arkouda.dtypes.int_scalars = 1) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the standard deviation of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find standard deviation

  • skipna (bool) – boolean which determines if NANs should be skipped

  • ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating std

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_stds (pdarray, float64) – One std value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The return dtype is always float64.

The standard deviation is the square root of the average of the squared deviations from the mean, i.e., std = sqrt(mean((x - x.mean())**2)).

The average squared deviation is normally calculated as ((x - x.mean())**2).sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of the infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables. The standard deviation computed in this function is the square root of the estimated variance, so even with ddof=1, it will not be an unbiased estimate of the standard deviation per se.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.std(b)
(array([2, 3, 4]), array([1.5275252316519465, 1.0954451150103321, 0]))
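
A small worked sketch of the ddof parameter (illustrative data; exact float formatting may vary):

>>> x = ak.array([1.0, 2.0, 3.0, 4.0])
>>> g = ak.GroupBy(ak.array([0, 0, 1, 1]))
>>> g.std(x, ddof=1)  # sample std: each group's divisor is N - 1 = 1
(array([0, 1]), array([0.7071067811865476, 0.7071067811865476]))
>>> g.std(x, ddof=0)  # population std: each group's divisor is N = 2
(array([0, 1]), array([0.5, 0.5]))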
sum(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and sum each group’s values.

Parameters:
  • values (pdarray) – The values to group and sum

  • skipna (bool) – If True (default), ignore NaN values when computing the sum

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_sums (pdarray) – One sum per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The grouped sum of a boolean pdarray returns integers.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.sum(b)
(array([2, 3, 4]), array([8, 14, 6]))
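
As noted above, summing a boolean pdarray yields integer counts of the True values in each group; a minimal sketch:

>>> flags = ak.array([True, False, True, True])
>>> g = ak.GroupBy(ak.array([0, 0, 1, 1]))
>>> g.sum(flags)  # one True in group 0, two in group 1
(array([0, 1]), array([1, 2]))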
to_hdf(prefix_path, dataset='groupby', mode='truncate', file_type='distribute')[source]

Save the GroupBy to HDF5. The result is a collection of HDF5 files, one file per locale of the arkouda server, where each filename starts with prefix_path.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files will share

  • dataset (str) – Name prefix for saved data within the HDF5 file

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, add data as a new column to existing files.

  • file_type (str ("single" | "distribute")) – Default: “distribute”. When set to “single”, the dataset is written to a single file. When set to “distribute”, the dataset is written to one file per locale. This option is only supported for HDF5 files and has no effect on Parquet files.

Returns:

None

Notes

GroupBy is not currently supported by Parquet.
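
Examples

A minimal usage sketch (the path is illustrative; with the default “distribute” file_type, one HDF5 file is written per locale, each filename starting with the prefix):

>>> g = ak.GroupBy(ak.arange(10) % 3)
>>> g.to_hdf('/tmp/gb_example')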

unique(values: groupable)[source]

Return the set of unique values in each group, as a SegArray.

Parameters:

values ((list of) pdarray-like) – The values from which to take the unique values of each group

Returns:

  • unique_keys ((list of) pdarray-like) – The unique keys, in grouped order

  • result ((list of) SegArray) – The unique values of each group

Raises:

TypeError – Raised if values is or contains Strings or Categorical
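
Examples

A minimal sketch (the comments describe the grouped contents rather than asserting an exact SegArray repr):

>>> keys = ak.array([0, 0, 0, 1, 1])
>>> vals = ak.array([3, 3, 4, 4, 4])
>>> g = ak.GroupBy(keys)
>>> unique_keys, result = g.unique(vals)
# group 0 holds values [3, 3, 4] -> unique values [3, 4]
# group 1 holds values [4, 4]    -> unique values [4]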

unregister()[source]

Unregister this GroupBy object from the arkouda server, where it was previously registered using register() and/or attached to using attach()

Raises:

RegistrationError – If the object is already unregistered or if there is a server error when attempting to unregister

Notes

Objects registered with the server are immune to deletion until they are unregistered.

static unregister_groupby_by_name(user_defined_name: str) None[source]

Unregister a GroupBy object by name, where the object was previously registered with the arkouda server via register()

Parameters:

user_defined_name (str) – Name under which the GroupBy object was registered

Raises:
  • TypeError – if user_defined_name is not a string

  • RegistrationError – if there is an issue attempting to unregister any underlying components

update_hdf(prefix_path: str, dataset: str = 'groupby', repack: bool = True)[source]
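
(No description was extracted for update_hdf. By analogy with to_hdf, it presumably overwrites the named dataset within the existing HDF5 files under prefix_path rather than truncating them, and the repack flag presumably compacts the files after the update, since HDF5 does not reclaim space on delete; consult the arkouda source for the authoritative behavior.)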
var(values: arkouda.pdarrayclass.pdarray, skipna: bool = True, ddof: arkouda.dtypes.int_scalars = 1) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the variance of each group’s values.

Parameters:
  • values (pdarray) – The values to group and compute the variance of

  • skipna (bool) – If True (default), ignore NaN values when computing the variance

  • ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating var

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_vars (pdarray, float64) – One var value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The return dtype is always float64.

The variance is the average of the squared deviations from the mean, i.e., var = mean((x - x.mean())**2).

The average of the squared deviations is normally calculated as ((x - x.mean())**2).sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of a hypothetical infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.var(b)
(array([2, 3, 4]), array([2.333333333333333, 1.2, 0]))
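
Since the grouped standard deviation is the square root of the grouped variance (for the same ddof), a quick consistency sketch on illustrative data:

>>> x = ak.array([1.0, 2.0, 3.0, 4.0])
>>> g = ak.GroupBy(ak.array([0, 0, 1, 1]))
>>> g.var(x, ddof=1)
(array([0, 1]), array([0.5, 0.5]))
>>> g.std(x, ddof=1)  # elementwise square root of the variances above
(array([0, 1]), array([0.7071067811865476, 0.7071067811865476]))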
class arkouda.GroupBy(keys: groupable | None = None, assume_sorted: bool = False, dropna: bool = True, **kwargs)[source]

Group an array or list of arrays by value, usually in preparation for aggregating the within-group values of another array.

Parameters:
  • keys ((list of) pdarray, Strings, or Categorical) – The array to group by value, or if list, the column arrays to group by row

  • assume_sorted (bool) – If True, assume keys is already sorted (Default: False)

nkeys

The number of key arrays (columns)

Type:

int

size[source]

The length of the input array(s), i.e. number of rows

Type:

int

permutation

The permutation that sorts the keys array(s) by value (row)

Type:

pdarray

unique_keys

The unique values of the keys array(s), in grouped order

Type:

(list of) pdarray, Strings, or Categorical

ngroups

The length of the unique_keys array(s), i.e. number of groups

Type:

int

segments

The start index of each group in the grouped array(s)

Type:

pdarray

logger

Used for all logging operations

Type:

ArkoudaLogger

dropna

If True, and the groupby keys contain NaN values, the NaN values together with the corresponding row will be dropped. Otherwise, the rows corresponding to NaN values will be kept.

Type:

bool (default=True)

Raises:

TypeError – Raised if keys is a pdarray with a dtype other than int64

Notes

Integral pdarrays, Strings, and Categoricals are natively supported, but float64 and bool arrays are not.

For a user-defined class to be groupable, it must inherit from pdarray and define or overload the grouping API:

  1. a ._get_grouping_keys() method that returns a list of pdarrays that can be (co)argsorted.

  2. (Optional) a .group() method that returns the permutation that groups the array

If the input is a single array with a .group() method defined, method 2 will be used; otherwise, method 1 will be used.

Reductions
objType = 'GroupBy'
AND(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Bitwise AND of values in each segment.

Using the permutation stored in the GroupBy instance, group another array of values and perform a bitwise AND reduction on each group.

Parameters:

values (pdarray, int64) – The values to group and reduce with AND

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • result (pdarray, int64) – Bitwise AND of values in segments corresponding to keys

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not int64

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if all is not supported for the values dtype

OR(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Bitwise OR of values in each segment.

Using the permutation stored in the GroupBy instance, group another array of values and perform a bitwise OR reduction on each group.

Parameters:

values (pdarray, int64) – The values to group and reduce with OR

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • result (pdarray, int64) – Bitwise OR of values in segments corresponding to keys

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not int64

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if all is not supported for the values dtype

XOR(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Bitwise XOR of values in each segment.

Using the permutation stored in the GroupBy instance, group another array of values and perform a bitwise XOR reduction on each group.

Parameters:

values (pdarray, int64) – The values to group and reduce with XOR

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • result (pdarray, int64) – Bitwise XOR of values in segments corresponding to keys

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not int64

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if all is not supported for the values dtype

aggregate(values: groupable, operator: str, skipna: bool = True, ddof: arkouda.dtypes.int_scalars = 1) Tuple[groupable, groupable][source]

Using the permutation stored in the GroupBy instance, group another array of values and apply a reduction to each group’s values.

Parameters:
  • values (pdarray) – The values to group and reduce

  • operator (str) – The name of the reduction operator to use

  • skipna (bool) – boolean which determines if NANs should be skipped

  • ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating std

Returns:

  • unique_keys (groupable) – The unique keys, in grouped order

  • aggregates (groupable) – One aggregate value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if the requested operator is not supported for the values dtype

Examples

>>> keys = ak.arange(0, 10)
>>> vals = ak.linspace(-1, 1, 10)
>>> g = ak.GroupBy(keys)
>>> g.aggregate(vals, 'sum')
(array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), array([-1, -0.77777777777777768,
-0.55555555555555536, -0.33333333333333348, -0.11111111111111116,
0.11111111111111116, 0.33333333333333348, 0.55555555555555536, 0.77777777777777768,
1]))
>>> g.aggregate(vals, 'min')
(array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), array([-1, -0.77777777777777779,
-0.55555555555555558, -0.33333333333333337, -0.11111111111111116, 0.11111111111111116,
0.33333333333333326, 0.55555555555555536, 0.77777777777777768, 1]))
all(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and perform an “and” reduction on each group.

Parameters:

values (pdarray, bool) – The values to group and reduce with “and”

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_any (pdarray, bool) – One bool per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not bool

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if all is not supported for the values dtype

any(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and perform an “or” reduction on each group.

Parameters:

values (pdarray, bool) – The values to group and reduce with “or”

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_any (pdarray, bool) – One bool per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not bool

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

argmax(values: arkouda.pdarrayclass.pdarray) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the location of the first maximum of each group’s values.

Parameters:

values (pdarray) – The values to group and find argmax

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_argmaxima (pdarray, int64) – One index per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object or if argmax is not supported for the values dtype

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The returned indices refer to the original values array as passed in, not the permutation applied by the GroupBy instance.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.argmax(b)
(array([2, 3, 4]), array([9, 3, 2]))
argmin(values: arkouda.pdarrayclass.pdarray) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the location of the first minimum of each group’s values.

Parameters:

values (pdarray) – The values to group and find argmin

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_argminima (pdarray, int64) – One index per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object or if argmax is not supported for the values dtype

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if argmin is not supported for the values dtype

Notes

The returned indices refer to the original values array as passed in, not the permutation applied by the GroupBy instance.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.argmin(b)
(array([2, 3, 4]), array([5, 4, 2]))
static attach(user_defined_name: str) GroupBy[source]

Function to return a GroupBy object attached to the registered name in the arkouda server which was registered using register()

Parameters:

user_defined_name (str) – user defined name which GroupBy object was registered under

Returns:

The GroupBy object created by re-attaching to the corresponding server components

Return type:

GroupBy

Raises:

RegistrationError – if user_defined_name is not registered

broadcast(values: arkouda.pdarrayclass.pdarray | arkouda.strings.Strings, permute: bool = True) arkouda.pdarrayclass.pdarray | arkouda.strings.Strings[source]

Fill each group’s segment with a constant value.

Parameters:
  • values (pdarray, Strings) – The values to put in each group’s segment

  • permute (bool) – If True (default), permute broadcast values back to the ordering of the original array on which GroupBy was called. If False, the broadcast values are grouped by value.

Returns:

The broadcasted values

Return type:

pdarray, Strings

Raises:
  • TypeError – Raised if value is not a pdarray object

  • ValueError – Raised if the values array does not have one value per segment

Notes

This function is a sparse analog of np.broadcast. If a GroupBy object represents a sparse matrix (tensor), then this function takes a (dense) column vector and replicates each value to the non-zero elements in the corresponding row.

Examples

>>> a = ak.array([0, 1, 0, 1, 0])
>>> values = ak.array([3, 5])
>>> g = ak.GroupBy(a)
# By default, result is in original order
>>> g.broadcast(values)
array([3, 5, 3, 5, 3])
# With permute=False, result is in grouped order
>>> g.broadcast(values, permute=False)
array([3, 3, 3, 5, 5]
>>> a = ak.randint(1,5,10)
>>> a
array([3, 1, 4, 4, 4, 1, 3, 3, 2, 2])
>>> g = ak.GroupBy(a)
>>> keys,counts = g.count()
>>> g.broadcast(counts > 2)
array([True False True True True False True True False False])
>>> g.broadcast(counts == 3)
array([True False True True True False True True False False])
>>> g.broadcast(counts < 4)
array([True True True True True True True True True True])
static build_from_components(user_defined_name: str | None = None, **kwargs) GroupBy[source]

function to build a new GroupBy object from component keys and permutation.

Parameters:
  • user_defined_name (str (Optional) Passing a name will init the new GroupBy) – and assign it the given name

  • kwargs (dict Dictionary of components required for rebuilding the GroupBy.) – Expected keys are “orig_keys”, “permutation”, “unique_keys”, and “segments”

Returns:

The GroupBy object created by using the given components

Return type:

GroupBy

count() Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Count the number of elements in each group, i.e. the number of times each key appears. This counts the total number of rows (including NaN values).

Parameters:

none

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • counts (pdarray, int64) – The number of times each unique key appears

Notes

This alias is an alias of “size”.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 2, 3, 1, 2, 4, 3, 4, 3, 4])
>>> g = ak.GroupBy(a)
>>> keys,counts = g.count()
>>> keys
array([1, 2, 3, 4])
>>> counts
array([1, 2, 4, 3])
first(values: groupable_element_type) Tuple[groupable, groupable_element_type][source]

First value in each group.

Parameters:

values (pdarray-like) – The values from which to take the first of each group

Returns:

  • unique_keys ((list of) pdarray-like) – The unique keys, in grouped order

  • result (pdarray-like) – The first value of each group

static from_return_msg(rep_msg)[source]
is_registered() bool[source]

Return True if the object is contained in the registry

Returns:

Indicates if the object is contained in the registry

Return type:

bool

Raises:

RegistrationError – Raised if there’s a server-side error or a mismatch of registered components

Notes

Objects registered with the server are immune to deletion until they are unregistered.

max(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the maximum of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find maxima

  • skipna (bool) – boolean which determines if NANs should be skipped

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_maxima (pdarray) – One maximum per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object or if max is not supported for the values dtype

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if max is not supported for the values dtype

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.max(b)
(array([2, 3, 4]), array([4, 4, 3]))
mean(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the mean of each group’s values.

Parameters:
  • values (pdarray) – The values to group and average

  • skipna (bool) – boolean which determines if NANs should be skipped

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_means (pdarray, float64) – One mean value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The return dtype is always float64.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.mean(b)
(array([2, 3, 4]), array([2.6666666666666665, 2.7999999999999998, 3]))
median(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the median of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find median

  • skipna (bool) – boolean which determines if NANs should be skipped

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_medians (pdarray, float64) – One median value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The return dtype is always float64.

Examples

>>> a = ak.randint(1,5,9)
>>> a
array([4 1 4 3 2 2 2 3 3])
>>> g = ak.GroupBy(a)
>>> g.keys
array([4 1 4 3 2 2 2 3 3])
>>> b = ak.linspace(-5,5,9)
>>> b
array([-5 -3.75 -2.5 -1.25 0 1.25 2.5 3.75 5])
>>> g.median(b)
(array([1 2 3 4]), array([-3.75 1.25 3.75 -3.75]))
min(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the minimum of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find minima

  • skipna (bool) – boolean which determines if NANs should be skipped

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_minima (pdarray) – One minimum per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object or if min is not supported for the values dtype

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if min is not supported for the values dtype

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.min(b)
(array([2, 3, 4]), array([1, 1, 3]))
mode(values: groupable) Tuple[groupable, groupable][source]

Most common value in each group. If a group is multi-modal, return the modal value that occurs first.

Parameters:

values ((list of) pdarray-like) – The values from which to take the mode of each group

Returns:

  • unique_keys ((list of) pdarray-like) – The unique keys, in grouped order

  • result ((list of) pdarray-like) – The most common value of each group

most_common(values)[source]

(Deprecated) See GroupBy.mode().

nunique(values: groupable) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the number of unique values in each group.

Parameters:

values (pdarray, int64) – The values to group and find unique values

Returns:

  • unique_keys (groupable) – The unique keys, in grouped order

  • group_nunique (groupable) – Number of unique values per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the dtype(s) of values array(s) does/do not support the nunique method

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if nunique is not supported for the values dtype

Examples

>>> data = ak.array([3, 4, 3, 1, 1, 4, 3, 4, 1, 4])
>>> data
array([3, 4, 3, 1, 1, 4, 3, 4, 1, 4])
>>> labels = ak.array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4])
>>> labels
ak.array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4])
>>> g = ak.GroupBy(labels)
>>> g.keys
ak.array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4])
>>> g.nunique(data)
array([1,2,3,4]), array([2, 2, 3, 1])
#    Group (1,1,1) has values [3,4,3] -> there are 2 unique values 3&4
#    Group (2,2,2) has values [1,1,4] -> 2 unique values 1&4
#    Group (3,3,3) has values [3,4,1] -> 3 unique values
#    Group (4) has values [4] -> 1 unique value
prod(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the product of each group’s values.

Parameters:
  • values (pdarray) – The values to group and multiply

  • skipna (bool) – boolean which determines if NANs should be skipped

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_products (pdarray, float64) – One product per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if prod is not supported for the values dtype

Notes

The return dtype is always float64.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.prod(b)
(array([2, 3, 4]), array([12, 108.00000000000003, 8.9999999999999982]))
register(user_defined_name: str) GroupBy[source]

Register this GroupBy object and underlying components with the Arkouda server

Parameters:

user_defined_name (str) – user defined name the GroupBy is to be registered under, this will be the root name for underlying components

Returns:

The same GroupBy which is now registered with the arkouda server and has an updated name. This is an in-place modification, the original is returned to support a fluid programming style. Please note you cannot register two different GroupBys with the same name.

Return type:

GroupBy

Raises:
  • TypeError – Raised if user_defined_name is not a str

  • RegistrationError – If the server was unable to register the GroupBy with the user_defined_name

Notes

Objects registered with the server are immune to deletion until they are unregistered.

size() Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Count the number of elements in each group, i.e. the number of times each key appears. This counts the total number of rows (including NaN values).

Parameters:

none

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • counts (pdarray, int64) – The number of times each unique key appears

See also

count

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 2, 3, 1, 2, 4, 3, 4, 3, 4])
>>> g = ak.GroupBy(a)
>>> keys,counts = g.size()
>>> keys
array([1, 2, 3, 4])
>>> counts
array([1, 2, 4, 3])
std(values: arkouda.pdarrayclass.pdarray, skipna: bool = True, ddof: arkouda.dtypes.int_scalars = 1) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the standard deviation of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find standard deviation

  • skipna (bool) – boolean which determines if NANs should be skipped

  • ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating std

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_stds (pdarray, float64) – One std value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The return dtype is always float64.

The standard deviation is the square root of the average of the squared deviations from the mean, i.e., std = sqrt(mean((x - x.mean())**2)).

The average squared deviation is normally calculated as x.sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of the infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables. The standard deviation computed in this function is the square root of the estimated variance, so even with ddof=1, it will not be an unbiased estimate of the standard deviation per se.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.std(b)
(array([2 3 4]), array([1.5275252316519465 1.0954451150103321 0]))
sum(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and sum each group’s values.

Parameters:

values (pdarray) – The values to group and sum

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_sums (pdarray) – One sum per unique key in the GroupBy instance

  • skipna (bool) – boolean which determines if NANs should be skipped

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The grouped sum of a boolean pdarray returns integers.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.sum(b)
(array([2, 3, 4]), array([8, 14, 6]))
to_hdf(prefix_path, dataset='groupby', mode='truncate', file_type='distribute')[source]

Save the GroupBy to HDF5. The result is a collection of HDF5 files, one file per locale of the arkouda server, where each filename starts with prefix_path.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files will share

  • dataset (str) – Name prefix for saved data within the HDF5 file

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, add data as a new column to existing files.

  • file_type (str ("single" | "distribute")) – Default: “distribute” When set to single, dataset is written to a single file. When distribute, dataset is written on a file per locale. This is only supported by HDF5 files and will have no impact of Parquet Files.

Returns:

  • None

  • GroupBy is not currently supported by Parquet

unique(values: groupable)[source]

Return the set of unique values in each group, as a SegArray.

Parameters:

values ((list of) pdarray-like) – The values to unique

Returns:

  • unique_keys ((list of) pdarray-like) – The unique keys, in grouped order

  • result ((list of) SegArray) – The unique values of each group

Raises:

TypeError – Raised if values is or contains Strings or Categorical

unregister()[source]

Unregister this GroupBy object in the arkouda server which was previously registered using register() and/or attached to using attach()

Raises:

RegistrationError – If the object is already unregistered or if there is a server error when attempting to unregister

Notes

Objects registered with the server are immune to deletion until they are unregistered.

static unregister_groupby_by_name(user_defined_name: str) None[source]

Function to unregister GroupBy object by name which was registered with the arkouda server via register()

Parameters:

user_defined_name (str) – Name under which the GroupBy object was registered

Raises:
  • TypeError – if user_defined_name is not a string

  • RegistrationError – if there is an issue attempting to unregister any underlying components

update_hdf(prefix_path: str, dataset: str = 'groupby', repack: bool = True)[source]
var(values: arkouda.pdarrayclass.pdarray, skipna: bool = True, ddof: arkouda.dtypes.int_scalars = 1) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the variance of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find variance

  • skipna (bool) – boolean which determines if NANs should be skipped

  • ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating var

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_vars (pdarray, float64) – One var value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The return dtype is always float64.

The variance is the average of the squared deviations from the mean, i.e., var = mean((x - x.mean())**2).

The mean is normally calculated as x.sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of a hypothetical infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.var(b)
(array([2 3 4]), array([2.333333333333333 1.2 0]))
class arkouda.GroupBy(keys: groupable | None = None, assume_sorted: bool = False, dropna: bool = True, **kwargs)[source]

Group an array or list of arrays by value, usually in preparation for aggregating the within-group values of another array.

Parameters:
  • keys ((list of) pdarray, Strings, or Categorical) – The array to group by value, or if list, the column arrays to group by row

  • assume_sorted (bool) – If True, assume keys is already sorted (Default: False)

nkeys

The number of key arrays (columns)

Type:

int

size[source]

The length of the input array(s), i.e. number of rows

Type:

int

permutation

The permutation that sorts the keys array(s) by value (row)

Type:

pdarray

unique_keys

The unique values of the keys array(s), in grouped order

Type:

(list of) pdarray, Strings, or Categorical

ngroups

The length of the unique_keys array(s), i.e. number of groups

Type:

int

segments

The start index of each group in the grouped array(s)

Type:

pdarray

logger

Used for all logging operations

Type:

ArkoudaLogger

dropna

If True, and the groupby keys contain NaN values, the NaN values together with the corresponding row will be dropped. Otherwise, the rows corresponding to NaN values will be kept.

Type:

bool (default=True)

Raises:

TypeError – Raised if keys is a pdarray with a dtype other than int64

Notes

Integral pdarrays, Strings, and Categoricals are natively supported, but float64 and bool arrays are not.

For a user-defined class to be groupable, it must inherit from pdarray and define or overload the grouping API:

  1. a ._get_grouping_keys() method that returns a list of pdarrays that can be (co)argsorted.

  2. (Optional) a .group() method that returns the permutation that groups the array

If the input is a single array with a .group() method defined, method 2 will be used; otherwise, method 1 will be used.

Reductions
objType = 'GroupBy'
AND(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Bitwise AND of values in each segment.

Using the permutation stored in the GroupBy instance, group another array of values and perform a bitwise AND reduction on each group.

Parameters:

values (pdarray, int64) – The values to group and reduce with AND

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • result (pdarray, int64) – Bitwise AND of values in segments corresponding to keys

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not int64

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if all is not supported for the values dtype

OR(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Bitwise OR of values in each segment.

Using the permutation stored in the GroupBy instance, group another array of values and perform a bitwise OR reduction on each group.

Parameters:

values (pdarray, int64) – The values to group and reduce with OR

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • result (pdarray, int64) – Bitwise OR of values in segments corresponding to keys

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not int64

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if all is not supported for the values dtype

XOR(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Bitwise XOR of values in each segment.

Using the permutation stored in the GroupBy instance, group another array of values and perform a bitwise XOR reduction on each group.

Parameters:

values (pdarray, int64) – The values to group and reduce with XOR

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • result (pdarray, int64) – Bitwise XOR of values in segments corresponding to keys

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not int64

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if all is not supported for the values dtype

aggregate(values: groupable, operator: str, skipna: bool = True, ddof: arkouda.dtypes.int_scalars = 1) Tuple[groupable, groupable][source]

Using the permutation stored in the GroupBy instance, group another array of values and apply a reduction to each group’s values.

Parameters:
  • values (pdarray) – The values to group and reduce

  • operator (str) – The name of the reduction operator to use

  • skipna (bool) – boolean which determines if NANs should be skipped

  • ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating std

Returns:

  • unique_keys (groupable) – The unique keys, in grouped order

  • aggregates (groupable) – One aggregate value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if the requested operator is not supported for the values dtype

Examples

>>> keys = ak.arange(0, 10)
>>> vals = ak.linspace(-1, 1, 10)
>>> g = ak.GroupBy(keys)
>>> g.aggregate(vals, 'sum')
(array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), array([-1, -0.77777777777777768,
-0.55555555555555536, -0.33333333333333348, -0.11111111111111116,
0.11111111111111116, 0.33333333333333348, 0.55555555555555536, 0.77777777777777768,
1]))
>>> g.aggregate(vals, 'min')
(array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), array([-1, -0.77777777777777779,
-0.55555555555555558, -0.33333333333333337, -0.11111111111111116, 0.11111111111111116,
0.33333333333333326, 0.55555555555555536, 0.77777777777777768, 1]))
all(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and perform an “and” reduction on each group.

Parameters:

values (pdarray, bool) – The values to group and reduce with “and”

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_any (pdarray, bool) – One bool per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not bool

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if all is not supported for the values dtype

any(values: arkouda.pdarrayclass.pdarray) Tuple[arkouda.pdarrayclass.pdarray | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and perform an “or” reduction on each group.

Parameters:

values (pdarray, bool) – The values to group and reduce with “or”

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_any (pdarray, bool) – One bool per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray or if the pdarray dtype is not bool

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

argmax(values: arkouda.pdarrayclass.pdarray) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the location of the first maximum of each group’s values.

Parameters:

values (pdarray) – The values to group and find argmax

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_argmaxima (pdarray, int64) – One index per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object or if argmax is not supported for the values dtype

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The returned indices refer to the original values array as passed in, not the permutation applied by the GroupBy instance.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.argmax(b)
(array([2, 3, 4]), array([9, 3, 2]))
argmin(values: arkouda.pdarrayclass.pdarray) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the location of the first minimum of each group’s values.

Parameters:

values (pdarray) – The values to group and find argmin

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_argminima (pdarray, int64) – One index per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object or if argmax is not supported for the values dtype

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if argmin is not supported for the values dtype

Notes

The returned indices refer to the original values array as passed in, not the permutation applied by the GroupBy instance.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.argmin(b)
(array([2, 3, 4]), array([5, 4, 2]))
static attach(user_defined_name: str) GroupBy[source]

Function to return a GroupBy object attached to the registered name in the arkouda server which was registered using register()

Parameters:

user_defined_name (str) – user defined name which GroupBy object was registered under

Returns:

The GroupBy object created by re-attaching to the corresponding server components

Return type:

GroupBy

Raises:

RegistrationError – if user_defined_name is not registered

broadcast(values: arkouda.pdarrayclass.pdarray | arkouda.strings.Strings, permute: bool = True) arkouda.pdarrayclass.pdarray | arkouda.strings.Strings[source]

Fill each group’s segment with a constant value.

Parameters:
  • values (pdarray, Strings) – The values to put in each group’s segment

  • permute (bool) – If True (default), permute broadcast values back to the ordering of the original array on which GroupBy was called. If False, the broadcast values are grouped by value.

Returns:

The broadcasted values

Return type:

pdarray, Strings

Raises:
  • TypeError – Raised if value is not a pdarray object

  • ValueError – Raised if the values array does not have one value per segment

Notes

This function is a sparse analog of np.broadcast. If a GroupBy object represents a sparse matrix (tensor), then this function takes a (dense) column vector and replicates each value to the non-zero elements in the corresponding row.

Examples

>>> a = ak.array([0, 1, 0, 1, 0])
>>> values = ak.array([3, 5])
>>> g = ak.GroupBy(a)
# By default, result is in original order
>>> g.broadcast(values)
array([3, 5, 3, 5, 3])
# With permute=False, result is in grouped order
>>> g.broadcast(values, permute=False)
array([3, 3, 3, 5, 5]
>>> a = ak.randint(1,5,10)
>>> a
array([3, 1, 4, 4, 4, 1, 3, 3, 2, 2])
>>> g = ak.GroupBy(a)
>>> keys,counts = g.count()
>>> g.broadcast(counts > 2)
array([True False True True True False True True False False])
>>> g.broadcast(counts == 3)
array([True False True True True False True True False False])
>>> g.broadcast(counts < 4)
array([True True True True True True True True True True])
static build_from_components(user_defined_name: str | None = None, **kwargs) GroupBy[source]

function to build a new GroupBy object from component keys and permutation.

Parameters:
  • user_defined_name (str (Optional) Passing a name will init the new GroupBy) – and assign it the given name

  • kwargs (dict Dictionary of components required for rebuilding the GroupBy.) – Expected keys are “orig_keys”, “permutation”, “unique_keys”, and “segments”

Returns:

The GroupBy object created by using the given components

Return type:

GroupBy

count() Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Count the number of elements in each group, i.e. the number of times each key appears. This counts the total number of rows (including NaN values).

Parameters:

none

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • counts (pdarray, int64) – The number of times each unique key appears

Notes

This alias is an alias of “size”.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 2, 3, 1, 2, 4, 3, 4, 3, 4])
>>> g = ak.GroupBy(a)
>>> keys,counts = g.count()
>>> keys
array([1, 2, 3, 4])
>>> counts
array([1, 2, 4, 3])
first(values: groupable_element_type) Tuple[groupable, groupable_element_type][source]

First value in each group.

Parameters:

values (pdarray-like) – The values from which to take the first of each group

Returns:

  • unique_keys ((list of) pdarray-like) – The unique keys, in grouped order

  • result (pdarray-like) – The first value of each group

static from_return_msg(rep_msg)[source]
is_registered() bool[source]

Return True if the object is contained in the registry

Returns:

Indicates if the object is contained in the registry

Return type:

bool

Raises:

RegistrationError – Raised if there’s a server-side error or a mismatch of registered components

Notes

Objects registered with the server are immune to deletion until they are unregistered.

max(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the maximum of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find maxima

  • skipna (bool) – boolean which determines if NANs should be skipped

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_maxima (pdarray) – One maximum per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object or if max is not supported for the values dtype

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if max is not supported for the values dtype

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.max(b)
(array([2, 3, 4]), array([4, 4, 3]))
mean(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the mean of each group’s values.

Parameters:
  • values (pdarray) – The values to group and average

  • skipna (bool) – boolean which determines if NANs should be skipped

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_means (pdarray, float64) – One mean value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The return dtype is always float64.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.mean(b)
(array([2, 3, 4]), array([2.6666666666666665, 2.7999999999999998, 3]))
median(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the median of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find median

  • skipna (bool) – boolean which determines if NANs should be skipped

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_medians (pdarray, float64) – One median value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The return dtype is always float64.

Examples

>>> a = ak.randint(1,5,9)
>>> a
array([4 1 4 3 2 2 2 3 3])
>>> g = ak.GroupBy(a)
>>> g.keys
array([4 1 4 3 2 2 2 3 3])
>>> b = ak.linspace(-5,5,9)
>>> b
array([-5 -3.75 -2.5 -1.25 0 1.25 2.5 3.75 5])
>>> g.median(b)
(array([1 2 3 4]), array([-3.75 1.25 3.75 -3.75]))
min(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the minimum of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find minima

  • skipna (bool) – boolean which determines if NANs should be skipped

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_minima (pdarray) – One minimum per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object or if min is not supported for the values dtype

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if min is not supported for the values dtype

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.min(b)
(array([2, 3, 4]), array([1, 1, 3]))
mode(values: groupable) Tuple[groupable, groupable][source]

Most common value in each group. If a group is multi-modal, return the modal value that occurs first.

Parameters:

values ((list of) pdarray-like) – The values from which to take the mode of each group

Returns:

  • unique_keys ((list of) pdarray-like) – The unique keys, in grouped order

  • result ((list of) pdarray-like) – The most common value of each group

most_common(values)[source]

(Deprecated) See GroupBy.mode().

nunique(values: groupable) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and return the number of unique values in each group.

Parameters:

values (pdarray, int64) – The values to group and find unique values

Returns:

  • unique_keys (groupable) – The unique keys, in grouped order

  • group_nunique (groupable) – Number of unique values per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the dtype(s) of values array(s) does/do not support the nunique method

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if nunique is not supported for the values dtype

Examples

>>> data = ak.array([3, 4, 3, 1, 1, 4, 3, 4, 1, 4])
>>> data
array([3, 4, 3, 1, 1, 4, 3, 4, 1, 4])
>>> labels = ak.array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4])
>>> labels
array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4])
>>> g = ak.GroupBy(labels)
>>> g.keys
array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4])
>>> g.nunique(data)
(array([1, 2, 3, 4]), array([2, 2, 3, 1]))
#    Group (1,1,1) has values [3,4,3] -> there are 2 unique values 3&4
#    Group (2,2,2) has values [1,1,4] -> 2 unique values 1&4
#    Group (3,3,3) has values [3,4,1] -> 3 unique values
#    Group (4) has values [4] -> 1 unique value
prod(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the product of each group’s values.

Parameters:
  • values (pdarray) – The values to group and multiply

  • skipna (bool) – If True, NaN values are ignored in the computation

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_products (pdarray, float64) – One product per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

  • RuntimeError – Raised if prod is not supported for the values dtype

Notes

The return dtype is always float64.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.prod(b)
(array([2, 3, 4]), array([12, 108.00000000000003, 8.9999999999999982]))
register(user_defined_name: str) GroupBy[source]

Register this GroupBy object and underlying components with the Arkouda server

Parameters:

user_defined_name (str) – user defined name the GroupBy is to be registered under, this will be the root name for underlying components

Returns:

The same GroupBy which is now registered with the arkouda server and has an updated name. This is an in-place modification, the original is returned to support a fluid programming style. Please note you cannot register two different GroupBys with the same name.

Return type:

GroupBy

Raises:
  • TypeError – Raised if user_defined_name is not a str

  • RegistrationError – If the server was unable to register the GroupBy with the user_defined_name

Notes

Objects registered with the server are immune to deletion until they are unregistered.
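
Examples

A usage sketch; the name "my_groupby" is illustrative:

>>> g = ak.GroupBy(ak.array([0, 1, 0, 1]))
>>> g = g.register("my_groupby")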

size() Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Count the number of elements in each group, i.e. the number of times each key appears. This counts the total number of rows (including NaN values).

Parameters:

None

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • counts (pdarray, int64) – The number of times each unique key appears

See also

count

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 2, 3, 1, 2, 4, 3, 4, 3, 4])
>>> g = ak.GroupBy(a)
>>> keys,counts = g.size()
>>> keys
array([1, 2, 3, 4])
>>> counts
array([1, 2, 4, 3])
std(values: arkouda.pdarrayclass.pdarray, skipna: bool = True, ddof: arkouda.dtypes.int_scalars = 1) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the standard deviation of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find standard deviation

  • skipna (bool) – If True, NaN values are ignored in the computation

  • ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating std

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_stds (pdarray, float64) – One std value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The return dtype is always float64.

The standard deviation is the square root of the average of the squared deviations from the mean, i.e., std = sqrt(mean((x - x.mean())**2)).

The average squared deviation is normally calculated as x.sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of the infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables. The standard deviation computed in this function is the square root of the estimated variance, so even with ddof=1, it will not be an unbiased estimate of the standard deviation per se.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.std(b)
(array([2 3 4]), array([1.5275252316519465 1.0954451150103321 0]))
sum(values: arkouda.pdarrayclass.pdarray, skipna: bool = True) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and sum each group’s values.

Parameters:
  • values (pdarray) – The values to group and sum

  • skipna (bool) – If True, NaN values are ignored in the computation

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_sums (pdarray) – One sum per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The grouped sum of a boolean pdarray returns integers.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.sum(b)
(array([2, 3, 4]), array([8, 14, 6]))
to_hdf(prefix_path, dataset='groupby', mode='truncate', file_type='distribute')[source]

Save the GroupBy to HDF5. The result is a collection of HDF5 files, one file per locale of the arkouda server, where each filename starts with prefix_path.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files will share

  • dataset (str) – Name prefix for saved data within the HDF5 file

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, add data as a new column to existing files.

  • file_type (str ("single" | "distribute")) – Default: “distribute”. When set to single, dataset is written to a single file. When distribute, dataset is written on a file per locale. This is only supported by HDF5 files and will have no impact on Parquet files.

Returns:

None

Notes

GroupBy is not currently supported by Parquet.
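
Examples

A usage sketch; the prefix path is illustrative and must be writable by the arkouda server:

>>> g = ak.GroupBy(ak.array([0, 1, 0, 1]))
>>> g.to_hdf('/tmp/groupby_example')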

unique(values: groupable)[source]

Return the set of unique values in each group, as a SegArray.

Parameters:

values ((list of) pdarray-like) – The values to unique

Returns:

  • unique_keys ((list of) pdarray-like) – The unique keys, in grouped order

  • result ((list of) SegArray) – The unique values of each group

Raises:

TypeError – Raised if values is or contains Strings or Categorical
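
Examples

A minimal sketch with hypothetical data; the SegArray repr shown is approximate:

>>> keys = ak.array([0, 0, 0, 1, 1])
>>> vals = ak.array([3, 4, 3, 5, 5])
>>> g = ak.GroupBy(keys)
>>> g.unique(vals)
(array([0 1]), SegArray([
[3, 4],
[5]
]))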

unregister()[source]

Unregister this GroupBy object in the arkouda server which was previously registered using register() and/or attached to using attach()

Raises:

RegistrationError – If the object is already unregistered or if there is a server error when attempting to unregister

Notes

Objects registered with the server are immune to deletion until they are unregistered.

static unregister_groupby_by_name(user_defined_name: str) None[source]

Function to unregister GroupBy object by name which was registered with the arkouda server via register()

Parameters:

user_defined_name (str) – Name under which the GroupBy object was registered

Raises:
  • TypeError – if user_defined_name is not a string

  • RegistrationError – if there is an issue attempting to unregister any underlying components

update_hdf(prefix_path: str, dataset: str = 'groupby', repack: bool = True)[source]
var(values: arkouda.pdarrayclass.pdarray, skipna: bool = True, ddof: arkouda.dtypes.int_scalars = 1) Tuple[groupable, arkouda.pdarrayclass.pdarray][source]

Using the permutation stored in the GroupBy instance, group another array of values and compute the variance of each group’s values.

Parameters:
  • values (pdarray) – The values to group and find variance

  • skipna (bool) – If True, NaN values are ignored in the computation

  • ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating var

Returns:

  • unique_keys ((list of) pdarray or Strings) – The unique keys, in grouped order

  • group_vars (pdarray, float64) – One var value per unique key in the GroupBy instance

Raises:
  • TypeError – Raised if the values array is not a pdarray object

  • ValueError – Raised if the key array size does not match the values size or if the operator is not in the GroupBy.Reductions array

Notes

The return dtype is always float64.

The variance is the average of the squared deviations from the mean, i.e., var = mean((x - x.mean())**2).

The mean is normally calculated as x.sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of a hypothetical infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables.

Examples

>>> a = ak.randint(1,5,10)
>>> a
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> g = ak.GroupBy(a)
>>> g.keys
array([3, 3, 4, 3, 3, 2, 3, 2, 4, 2])
>>> b = ak.randint(1,5,10)
>>> b
array([3, 3, 3, 4, 1, 1, 3, 3, 3, 4])
>>> g.var(b)
(array([2 3 4]), array([2.333333333333333 1.2 0]))
class arkouda.IPv4(values)[source]

Bases: arkouda.pdarrayclass.pdarray

Represent integers as IPv4 addresses.

Parameters:

values (pdarray, int64) – The integer IP addresses

Returns:

The same IP addresses

Return type:

IPv4

Notes

This class is a thin wrapper around pdarray that mostly affects how values are displayed to the user. Operators and methods will typically treat this class like an int64 pdarray.

special_objType = 'IPv4'
export_uint()[source]
format(x)[source]

Format a single integer IP address as a string.

normalize(x)[source]

Take in an IP address as a string, integer, or IPAddress object, and convert it to an integer.
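
Examples

A small sketch of round-tripping with format and normalize; 3232235777 is the integer form of 192.168.1.1:

>>> ips = ak.IPv4(ak.array([3232235777]))
>>> ips.format(3232235777)
'192.168.1.1'
>>> ips.normalize('192.168.1.1')
3232235777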

opeq(other, op)[source]
register(user_defined_name)[source]

Register this IPv4 object and underlying components with the Arkouda server

Parameters:

user_defined_name (str) – user defined name the IPv4 is to be registered under, this will be the root name for underlying components

Returns:

The same IPv4 which is now registered with the arkouda server and has an updated name. This is an in-place modification, the original is returned to support a fluid programming style. Please note you cannot register two different IPv4s with the same name.

Return type:

IPv4

Raises:
  • TypeError – Raised if user_defined_name is not a str

  • RegistrationError – If the server was unable to register the IPv4 with the user_defined_name

Notes

Objects registered with the server are immune to deletion until they are unregistered.

to_hdf(prefix_path: str, dataset: str = 'array', mode: str = 'truncate', file_type: str = 'distribute')[source]

Override of the pdarray to_hdf to store the special object type

to_list()[source]

Export array as a list of integers.

to_ndarray()[source]

Export array as a numpy array of integers.

update_hdf(prefix_path: str, dataset: str = 'array', repack: bool = True)[source]

Override the pdarray implementation so that the special object type will be used.

class arkouda.Index(values: List | arkouda.pdarrayclass.pdarray | arkouda.Strings | arkouda.Categorical | pandas.Index | Index, name: str | None = None, allow_list=False, max_list_size=1000)[source]
property index

This is maintained to support older code

property is_unique

Property indicating whether all values in the index are unique.

Return type:

bool – True if all values are unique, False otherwise.
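
Examples

A brief sketch:

>>> ak.Index(ak.array([1, 2, 3])).is_unique
True
>>> ak.Index(ak.array([1, 1, 2])).is_unique
False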

property shape
objType = 'Index'

Sequence used for indexing and alignment.

The basic object storing axis labels for all DataFrame objects.

Parameters:
  • values (List, pdarray, Strings, Categorical, pandas.Index, or Index)

  • name (str, default=None) – Name to be stored in the index.

  • allow_list (bool, default=False) – If False, list values will be converted to a pdarray. If True, list values will remain as a list, provided the data length is less than max_list_size.

  • max_list_size (int, default=1000) – The maximum allowed data length for the values to be stored as a list object.

Raises:

ValueError – Raised if allow_list=True and the size of values is > max_list_size.

See also

MultiIndex

Examples

>>> ak.Index([1, 2, 3])
Index(array([1 2 3]), dtype='int64')
>>> ak.Index(list('abc'))
Index(array(['a', 'b', 'c']), dtype='<U0')
>>> ak.Index([1, 2, 3], allow_list=True)
Index([1, 2, 3], dtype='int64')
argsort(ascending=True)[source]
concat(other)[source]
static factory(index)[source]
classmethod from_return_msg(rep_msg)[source]
is_registered()[source]

Return True iff the object is contained in the registry or is a component of a registered object.

Returns:

Indicates if the object is contained in the registry

Return type:

numpy.bool

Raises:

RegistrationError – Raised if there’s a server-side error or a mis-match of registered components

Notes

Objects registered with the server are immune to deletion until they are unregistered.

lookup(key)[source]
map(arg: dict | arkouda.series.Series) Index[source]

Map values of Index according to an input mapping.

Parameters:

arg (dict or Series) – The mapping correspondence.

Returns:

A new index with the values transformed by the mapping correspondence.

Return type:

arkouda.index.Index

Raises:

TypeError – Raised if arg is not of type dict or arkouda.Series. Raised if the index values are not of type pdarray, Categorical, or Strings.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> idx = ak.Index(ak.array([2, 3, 2, 3, 4]))
>>> display(idx)
Index(array([2 3 2 3 4]), dtype='int64')
>>> idx.map({4: 25.0, 2: 30.0, 1: 7.0, 3: 5.0})
Index(array([30.00000000000000000 5.00000000000000000 30.00000000000000000
5.00000000000000000 25.00000000000000000]), dtype='float64')
>>> s2 = ak.Series(ak.array(["a","b","c","d"]), index = ak.array([4,2,1,3]))
>>> idx.map(s2)
Index(array(['b', 'b', 'd', 'd', 'a']), dtype='<U0')
memory_usage(unit='B')[source]

Return the memory usage of the Index values.

Parameters:

unit (str, default = "B") – Unit to return. One of {‘B’, ‘KB’, ‘MB’, ‘GB’}.

Returns:

Bytes of memory consumed.

Return type:

int

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> idx = Index(ak.array([1, 2, 3]))
>>> idx.memory_usage()
24
register(user_defined_name)[source]

Register this Index object and underlying components with the Arkouda server

Parameters:

user_defined_name (str) – user defined name the Index is to be registered under, this will be the root name for underlying components

Returns:

The same Index which is now registered with the arkouda server and has an updated name. This is an in-place modification, the original is returned to support a fluid programming style. Please note you cannot register two different Indexes with the same name.

Return type:

Index

Raises:
  • TypeError – Raised if user_defined_name is not a str

  • RegistrationError – If the server was unable to register the Index with the user_defined_name

Notes

Objects registered with the server are immune to deletion until they are unregistered.

save(prefix_path: str, dataset: str = 'index', mode: str = 'truncate', compression: str | None = None, file_format: str = 'HDF5', file_type: str = 'distribute') str[source]

DEPRECATED Save the index to HDF5 or Parquet. The result is a collection of files, one file per locale of the arkouda server, where each filename starts with prefix_path. Each locale saves its chunk of the array to its corresponding file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files (must not already exist)

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, attempt to create new dataset in existing files.

  • compression (str (Optional)) – (None | “snappy” | “gzip” | “brotli” | “zstd” | “lz4”) Sets the compression type used with Parquet files

  • file_format (str {'HDF5', 'Parquet'}) – By default, saved files will be written to the HDF5 file format. If ‘Parquet’, the files will be written to the Parquet file format. This is case insensitive.

  • file_type (str ("single" | "distribute")) – Default: “distribute”. When set to single, dataset is written to a single file. When distribute, dataset is written on a file per locale. This is only supported by HDF5 files and will have no impact on Parquet files.

Return type:

string message indicating result of save operation

Raises:
  • RuntimeError – Raised if a server-side error is thrown saving the pdarray

  • ValueError – Raised if there is an error in parsing the prefix path pointing to file write location or if the mode parameter is neither truncate nor append

  • TypeError – Raised if any one of the prefix_path, dataset, or mode parameters is not a string. Raised if the Index values are a list.

Notes

The prefix_path must be visible to the arkouda server and the user must have write permission. Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales. If any of the output files already exist and the mode is ‘truncate’, they will be overwritten. If the mode is ‘append’ and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result. Previously all files saved in Parquet format were saved with a .parquet file extension. This will require you to use load as if you saved the file with the extension. Try this if an older file is not being found. Any file extension can be used. The file I/O does not rely on the extension to determine the file format.

set_dtype(dtype)[source]

Change the data type of the index

Currently only aku.ip_address and ak.array are supported.
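
Examples

A hedged sketch of converting an integer index to IPv4 display; the value shown is illustrative:

>>> idx = ak.Index(ak.array([3232235777]))
>>> idx.set_dtype(ak.ip_address)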

to_csv(prefix_path: str, dataset: str = 'index', col_delim: str = ',', overwrite: bool = False)[source]

Write Index to CSV file(s). File will contain a single column with the pdarray data. All CSV Files written by Arkouda include a header denoting data types of the columns.

Parameters:
  • prefix_path (str) – The filename prefix to be used for saving files. Files will have _LOCALE#### appended when they are written to disk.

  • dataset (str) – Column name to save the pdarray under. Defaults to “index”.

  • col_delim (str) – Defaults to “,”. Value to be used to separate columns within the file. Please be sure that the value used DOES NOT appear in your dataset.

  • overwrite (bool) – Defaults to False. If True, any existing files matching your provided prefix_path will be overwritten. If False, an error will be returned if existing files are found.

Return type:

str – response message

Raises:
  • ValueError – Raised if all datasets are not present in all parquet files or if one or more of the specified files do not exist.

  • RuntimeError – Raised if one or more of the specified files cannot be opened. If allow_errors is true this may be raised if no values are returned from the server.

  • TypeError – Raised if we receive an unknown arkouda_type returned from the server. Raised if the Index values are a list.

Notes

  • CSV format is not currently supported by load/load_all operations

  • The column delimiter is expected to be the same for column names and data

  • Be sure that column delimiters are not found within your data.

  • All CSV files must delimit rows using newline (“\n”) at this time.
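
Examples

A usage sketch; the prefix path is illustrative:

>>> idx = ak.Index(ak.array([1, 2, 3]))
>>> idx.to_csv('/tmp/idx_example')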

to_dict(label)[source]
to_hdf(prefix_path: str, dataset: str = 'index', mode: str = 'truncate', file_type: str = 'distribute') str[source]

Save the Index to HDF5. The object can be saved to a collection of files or a single file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files (must not already exist)

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, attempt to create new dataset in existing files.

  • file_type (str ("single" | "distribute")) – Default: “distribute”. When set to single, dataset is written to a single file. When distribute, dataset is written on a file per locale. This is only supported by HDF5 files and will have no impact on Parquet files.

Return type:

string message indicating result of save operation

Raises:
  • RuntimeError – Raised if a server-side error is thrown saving the pdarray

  • TypeError – Raised if the Index values are a list.

Notes

  • The prefix_path must be visible to the arkouda server and the user must have write permission.

  • Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales for file_type=’distribute’. Otherwise, the file name will be prefix_path.

  • If any of the output files already exist and the mode is ‘truncate’, they will be overwritten. If the mode is ‘append’ and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result.

  • Any file extension can be used. The file I/O does not rely on the extension to determine the file format.

to_list()[source]
to_ndarray()[source]
to_pandas()[source]
to_parquet(prefix_path: str, dataset: str = 'index', mode: str = 'truncate', compression: str | None = None)[source]

Save the Index to Parquet. The result is a collection of files, one file per locale of the arkouda server, where each filename starts with prefix_path. Each locale saves its chunk of the array to its corresponding file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files (must not already exist)

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, attempt to create new dataset in existing files.

  • compression (str (Optional)) – (None | “snappy” | “gzip” | “brotli” | “zstd” | “lz4”) Sets the compression type used with Parquet files

Return type:

string message indicating result of save operation

Raises:
  • RuntimeError – Raised if a server-side error is thrown saving the pdarray

  • TypeError – Raised if the Index values are a list.

Notes

  • The prefix_path must be visible to the arkouda server and the user must have write permission.

  • Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales for file_type=’distribute’.

  • ‘append’ write mode is supported, but is not efficient.

  • If any of the output files already exist and the mode is ‘truncate’, they will be overwritten. If the mode is ‘append’ and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result.

  • Any file extension can be used. The file I/O does not rely on the extension to determine the file format.

unregister()[source]

Unregister this Index object in the arkouda server which was previously registered using register() and/or attached to using attach()

Raises:

RegistrationError – If the object is already unregistered or if there is a server error when attempting to unregister

Notes

Objects registered with the server are immune to deletion until they are unregistered.

update_hdf(prefix_path: str, dataset: str = 'index', repack: bool = True)[source]

Overwrite the dataset with the name provided with this Index object. If the dataset does not exist it is added.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files

  • repack (bool) – Default: True HDF5 does not release memory on delete. When True, the inaccessible data (that was overwritten) is removed. When False, the data remains, but is inaccessible. Setting to false will yield better performance, but will cause file sizes to expand.

Return type:

str - success message if successful

Raises:

RuntimeError – Raised if a server-side error is thrown saving the index

Notes

  • If the file does not contain a File_Format attribute to indicate how it was saved, the file name is checked for _LOCALE#### to determine whether it is distributed.

  • If the dataset provided does not exist, it will be added

  • Because HDF5 deletes do not release memory, this will create a copy of the file with the new data

arkouda.LEN_SUFFIX = '_lengths'
class arkouda.LogLevel[source]

Bases: enum.Enum

Generic enumeration.

Derive from this class to define new enumerations.

CRITICAL = 'CRITICAL'
DEBUG = 'DEBUG'
ERROR = 'ERROR'
INFO = 'INFO'
WARN = 'WARN'
class arkouda.MultiIndex(values, name=None, names=None)[source]

Bases: Index

property index

This is maintained to support older code

objType = 'MultiIndex'
argsort(ascending=True)[source]
concat(other)[source]
is_registered()[source]

Return True iff the object is contained in the registry or is a component of a registered object.

Returns:

Indicates if the object is contained in the registry

Return type:

numpy.bool

Raises:

RegistrationError – Raised if there’s a server-side error or a mis-match of registered components

Notes

Objects registered with the server are immune to deletion until they are unregistered.

lookup(key)[source]
memory_usage(unit='B')[source]

Return the memory usage of the MultiIndex values.

Parameters:

unit (str, default = "B") – Unit to return. One of {‘B’, ‘KB’, ‘MB’, ‘GB’}.

Returns:

Bytes of memory consumed.

Return type:

int

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> m = ak.index.MultiIndex([ak.array([1,2,3]),ak.array([4,5,6])])
>>> m.memory_usage()
48
register(user_defined_name)[source]

Register this Index object and underlying components with the Arkouda server

Parameters:

user_defined_name (str) – user defined name the Index is to be registered under, this will be the root name for underlying components

Returns:

The same Index which is now registered with the arkouda server and has an updated name. This is an in-place modification, the original is returned to support a fluid programming style. Please note you cannot register two different Indexes with the same name.

Return type:

MultiIndex

Raises:
  • TypeError – Raised if user_defined_name is not a str

  • RegistrationError – If the server was unable to register the Index with the user_defined_name

Notes

Objects registered with the server are immune to deletion until they are unregistered.

set_dtype(dtype)[source]

Change the data type of the index

Currently only aku.ip_address and ak.array are supported.

to_dict(labels=None)[source]
to_hdf(prefix_path: str, dataset: str = 'index', mode: str = 'truncate', file_type: str = 'distribute') str[source]

Save the Index to HDF5. The object can be saved to a collection of files or a single file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files (must not already exist)

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, attempt to create new dataset in existing files.

  • file_type (str ("single" | "distribute")) – Default: “distribute”. When set to single, dataset is written to a single file. When distribute, dataset is written on a file per locale. This is only supported by HDF5 files and will have no impact on Parquet files.

Return type:

string message indicating result of save operation

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray.

Notes

  • The prefix_path must be visible to the arkouda server and the user must have write permission.

  • Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales for file_type=’distribute’. Otherwise, the file name will be prefix_path.

  • If any of the output files already exist and the mode is ‘truncate’, they will be overwritten. If the mode is ‘append’ and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result.

  • Any file extension can be used. The file I/O does not rely on the extension to determine the file format.

to_list()[source]
to_ndarray()[source]
to_pandas()[source]
unregister()[source]

Unregister this Index object in the arkouda server which was previously registered using register() and/or attached to using attach()

Raises:

RegistrationError – If the object is already unregistered or if there is a server error when attempting to unregister

Notes

Objects registered with the server are immune to deletion until they are unregistered.

update_hdf(prefix_path: str, dataset: str = 'index', repack: bool = True)[source]

Overwrite the dataset with the name provided with this Index object. If the dataset does not exist it is added.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files

  • repack (bool) – Default: True HDF5 does not release memory on delete. When True, the inaccessible data (that was overwritten) is removed. When False, the data remains, but is inaccessible. Setting to false will yield better performance, but will cause file sizes to expand.

Return type:

str - success message if successful

Raises:
  • RuntimeError – Raised if a server-side error is thrown saving the index

  • TypeError – Raised if the Index values are a list.

Notes

  • If the file does not contain a File_Format attribute to indicate how it was saved, the file name is checked for _LOCALE#### to determine whether it is distributed.

  • If the dataset provided does not exist, it will be added

  • Because HDF5 deletes do not release memory, this will create a copy of the file with the new data

exception arkouda.NonUniqueError[source]

Bases: ValueError

Inappropriate argument value (of correct type).

class arkouda.Power_divergenceResult[source]

Bases: namedtuple('Power_divergenceResult', ('statistic', 'pvalue'))

The results of a power divergence statistical test.

statistic
Type:

numpy.float64

pvalue
Type:

numpy.float64

class arkouda.Properties[source]
arkouda.RegisteredSymbols = '__RegisteredSymbols__'
exception arkouda.RegistrationError[source]

Bases: Exception

Error/Exception used when the Arkouda Server cannot register an object

class arkouda.Row(dict=None, /, **kwargs)[source]

Bases: collections.UserDict

This class is useful for printing and working with individual rows of an aku.DataFrame.

arkouda.SEG_SUFFIX = '_segments'
arkouda.ScalarDTypes
class arkouda.SegArray(segments, values, lengths=None, grouping=None)[source]
property grouping
property non_empty
objType = 'SegArray'
AND(x=None)[source]
OR(x=None)[source]
XOR(x=None)[source]
aggregate(op, x=None)[source]
all(x=None)[source]
any(x=None)[source]
append(other, axis=0)[source]

Append other to self, either vertically (axis=0, length of resulting SegArray increases), or horizontally (axis=1, each sub-array of other appends to the corresponding sub-array of self).

Parameters:
  • other (SegArray) – Array of sub-arrays to append

  • axis (0 or 1) – Whether to append vertically (0) or horizontally (1). If axis=1, other must be same size as self.

Returns:

axis=0: New SegArray containing all sub-arrays axis=1: New SegArray of same length, with pairs of sub-arrays concatenated

Return type:

SegArray
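
Examples

A minimal sketch of a vertical append with hypothetical data; repr approximate:

>>> sa1 = ak.SegArray(ak.array([0, 2]), ak.array([1, 2, 3, 4]))
>>> sa2 = ak.SegArray(ak.array([0, 1]), ak.array([5, 6, 7]))
>>> sa1.append(sa2)
SegArray([
[1, 2],
[3, 4],
[5],
[6, 7]
])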

append_single(x, prepend=False)[source]

Append a single value to each sub-array.

Parameters:

x (pdarray or scalar) – Single value to append to each sub-array

Returns:

Copy of original SegArray with values from x appended to each sub-array

Return type:

SegArray
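
Examples

A minimal sketch with hypothetical data; repr approximate:

>>> sa = ak.SegArray(ak.array([0, 2]), ak.array([1, 2, 3, 4]))
>>> sa.append_single(99)
SegArray([
[1, 2, 99],
[3, 4, 99]
])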

argmax(x=None)[source]
argmin(x=None)[source]
classmethod attach(user_defined_name)[source]

Using the defined name, attach to a SegArray that has been registered to the Symbol Table

Parameters:

user_defined_name (str) – user defined name which the SegArray object was registered under

Returns:

The resulting SegArray

Return type:

SegArray

Raises:

RuntimeError – Raised if the server could not attach to the SegArray object

classmethod concat(x, axis=0, ordered=True)[source]

Concatenate a sequence of SegArrays

Parameters:
  • x (sequence of SegArray) – The SegArrays to concatenate

  • axis (0 or 1) – Select vertical (0) or horizontal (1) concatenation. If axis=1, all SegArrays must have same size.

  • ordered (bool) – Must be True. This option is present for compatibility only, because unordered concatenation is not yet supported.

Returns:

The input arrays joined into one SegArray

Return type:

SegArray

copy()[source]

Return a deep copy.

filter(filter, discard_empty: bool = False)[source]

Filter values out of the SegArray object

Parameters:
  • filter (pdarray, list, or value) – The value/s to be filtered out of the SegArray

  • discard_empty (bool) – Defaults to False. When True, empty segments are removed from the return SegArray

Return type:

SegArray

classmethod from_multi_array(m)[source]

Construct a SegArray from a list of columns. This essentially transposes the input, resulting in an array of rows.

Parameters:

m (list of pdarray or Strings) – List of columns, the rows of which will form the sub-arrays of the output

Returns:

Array of rows of input

Return type:

SegArray

classmethod from_parts(segments, values, lengths=None, grouping=None) SegArray[source]

DEPRECATED Construct a SegArray object from its parts

Parameters:
  • segments (pdarray, int64) – Start index of each sub-array in the flattened values array

  • values (pdarray) – The flattened values of all sub-arrays

  • lengths (pdarray) – The length of each segment

  • grouping (GroupBy) – grouping of segments

Returns:

Data structure representing an array whose elements are variable-length arrays.

Return type:

SegArray

Notes

Keyword args ‘lengths’ and ‘grouping’ are not user-facing. They are used by the attach method.

classmethod from_return_msg(rep_msg) SegArray[source]
get_jth(j, return_origins=True, compressed=False, default=0)[source]

Select the j-th element of each sub-array, where possible.

Parameters:
  • j (int) – The index of the value to get from each sub-array. If j is negative, it counts backwards from the end of each sub-array.

  • return_origins (bool) – If True, return a logical index indicating where j is in bounds

  • compressed (bool) – If False, return array is same size as self, with default value where j is out of bounds. If True, the return array only contains values where j is in bounds.

  • default (scalar) – When compressed=False, the value to return when j is out of bounds for the sub-array

Returns:

  • val (pdarray) – compressed=False: The j-th value of each sub-array where j is in bounds and the default value where j is out of bounds. compressed=True: The j-th values of only the sub-arrays where j is in bounds

  • origin_indices (pdarray, bool) – A Boolean array that is True where j is in bounds for the sub-array.

Notes

If values are Strings, only the compressed format is supported.
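
Examples

A minimal sketch with the default return_origins=True; output formatting is approximate:

>>> sa = ak.SegArray(ak.array([0, 4, 7]), ak.arange(12))
>>> sa.get_jth(1)
(array([1 5 8]), array([True True True]))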

get_length_n(n, return_origins=True)[source]

Return all sub-arrays of length n, as a list of columns.

Parameters:
  • n (int) – Length of sub-arrays to select

  • return_origins (bool) – Return a logical index indicating which sub-arrays are length n

Returns:

  • columns (list of pdarray) – An n-long list of pdarray, where each row is one of the n-long sub-arrays from the SegArray. The number of rows is the number of True values in the returned mask.

  • origin_indices (pdarray, bool) – Array of bool for each element of the SegArray, True where sub-array has length n.

get_ngrams(n, return_origins=True)[source]

Return all n-grams from all sub-arrays.

Parameters:
  • n (int) – Length of n-gram

  • return_origins (bool) – If True, return an int64 array indicating which sub-array each returned n-gram came from.

Returns:

  • ngrams (list of pdarray) – An n-long list of pdarrays, essentially a table where each row is an n-gram.

  • origin_indices (pdarray, int) – The index of the sub-array from which the corresponding n-gram originated
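
Examples

A minimal sketch of 2-grams; the first returned pdarray holds the first token of each n-gram and the second holds the second token (output formatting approximate):

>>> sa = ak.SegArray(ak.array([0, 3]), ak.array([1, 2, 3, 4, 5]))
>>> ngrams, origins = sa.get_ngrams(2)
>>> ngrams
[array([1 2 4]), array([2 3 5])]
>>> origins
array([0 0 1])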

get_prefixes(n, return_origins=True, proper=True)[source]

Return all sub-array prefixes of length n (for sub-arrays that are at least n+1 long)

Parameters:
  • n (int) – Length of suffix

  • return_origins (bool) – If True, return a logical index indicating which sub-arrays were long enough to return an n-prefix

  • proper (bool) – If True, only return proper prefixes, i.e. from sub-arrays that are at least n+1 long. If False, allow the entire sub-array to be returned as a prefix.

Returns:

  • prefixes (list of pdarray) – An n-long list of pdarrays, essentially a table where each row is an n-prefix. The number of rows is the number of True values in the returned mask.

  • origin_indices (pdarray, bool) – Boolean array that is True where the sub-array was long enough to return an n-suffix, False otherwise.

get_suffixes(n, return_origins=True, proper=True)[source]

Return the n-long suffix of each sub-array, where possible

Parameters:
  • n (int) – Length of suffix

  • return_origins (bool) – If True, return a logical index indicating which sub-arrays were long enough to return an n-suffix

  • proper (bool) – If True, only return proper suffixes, i.e. from sub-arrays that are at least n+1 long. If False, allow the entire sub-array to be returned as a suffix.

Returns:

  • suffixes (list of pdarray) – An n-long list of pdarrays, essentially a table where each row is an n-suffix. The number of rows is the number of True values in the returned mask.

  • origin_indices (pdarray, bool) – Boolean array that is True where the sub-array was long enough to return an n-suffix, False otherwise.

hash() Tuple[arkouda.pdarrayclass.pdarray, arkouda.pdarrayclass.pdarray][source]

Compute a 128-bit hash of each segment.

Returns:

A tuple of two int64 pdarrays. The ith hash value is the concatenation of the ith values from each array.

Return type:

Tuple[pdarray,pdarray]

intersect(other)[source]

Computes the intersection of 2 SegArrays.

Parameters:

other (SegArray) – SegArray to compute against

Returns:

Segments are the 1d intersections of the segments of self and other

Return type:

SegArray

Examples

>>> a = [1, 2, 3, 1, 4]
>>> b = [3, 1, 4, 5]
>>> c = [1, 3, 3, 5]
>>> d = [2, 2, 4]
>>> seg_a = ak.segarray(ak.array([0, len(a)]), ak.array(a+b))
>>> seg_b = ak.segarray(ak.array([0, len(c)]), ak.array(c+d))
>>> seg_a.intersect(seg_b)
SegArray([
[1, 3],
[4]
])
is_registered() bool[source]

Checks if the name of the SegArray object is registered in the Symbol Table

Returns:

True if SegArray is registered, false if not

Return type:

bool

classmethod load(prefix_path, dataset='segarray', segment_name='segments', value_name='values')[source]
max(x=None)[source]
mean(x=None)[source]
min(x=None)[source]
nunique(x=None)[source]
prepend_single(x)[source]
prod(x=None)[source]
classmethod read_hdf(prefix_path, dataset='segarray')[source]

Load a saved SegArray from HDF5. All arguments must match what was supplied to SegArray.save()

Parameters:
  • prefix_path (str) – Directory and filename prefix

  • dataset (str) – Name prefix for saved data within the HDF5 files

Return type:

SegArray

register(user_defined_name)[source]

Register this SegArray object and underlying components with the Arkouda server

Parameters:

user_defined_name (str) – user defined name which this SegArray object will be registered under

Returns:

The same SegArray which is now registered with the arkouda server and has an updated name. This is an in-place modification, the original is returned to support a fluid programming style. Please note you cannot register two different SegArrays with the same name.

Return type:

SegArray

Raises:

RegistrationError – Raised if the server could not register the SegArray object

Notes

Objects registered with the server are immune to deletion until they are unregistered.

remove_repeats(return_multiplicity=False)[source]

Condense sequences of repeated values within a sub-array to a single value.

Parameters:

return_multiplicity (bool) – If True, also return the number of times each value was repeated.

Returns:

  • norepeats (SegArray) – Sub-arrays with runs of repeated values replaced with single value

  • multiplicity (SegArray) – If return_multiplicity=True, this array contains the number of times each value in the returned SegArray was repeated in the original SegArray.
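
Examples

A minimal sketch with hypothetical data; repr approximate:

>>> sa = ak.SegArray(ak.array([0, 4]), ak.array([1, 1, 2, 3, 3, 3]))
>>> sa.remove_repeats()
SegArray([
[1, 2, 3],
[3]
])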

save(prefix_path, dataset='segarray', mode='truncate', file_type='distribute')[source]

DEPRECATED Save the SegArray to HDF5. The object can be saved to a collection of files or a single file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files (must not already exist)

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, attempt to create new dataset in existing files.

  • file_type (str ("single" | "distribute")) – Default: “distribute”. When set to single, dataset is written to a single file. When distribute, dataset is written on a file per locale. This is only supported by HDF5 files and will have no impact on Parquet files.

Return type:

string message indicating result of save operation

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray

Notes

  • The prefix_path must be visible to the arkouda server and the user must have write permission.

  • Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales for file_type=’distribute’. Otherwise, the file name will be prefix_path.

  • If any of the output files already exist and the mode is ‘truncate’, they will be overwritten. If the mode is ‘append’ and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result.

  • Any file extension can be used. The file I/O does not rely on the extension to determine the file format.

See also

to_hdf, load

set_jth(i, j, v)[source]

Set the j-th element of each sub-array in a subset.

Parameters:
  • i (pdarray, int) – Indices of sub-arrays to set j-th element

  • j (int) – Index of value to set in each sub-array. If j is negative, it counts backwards from the end of the sub-array.

  • v (pdarray or scalar) – The value(s) to set. If v is a pdarray, it must have same length as i.

Raises:

ValueError – If j is out of bounds in any of the sub-arrays specified by i.

setdiff(other)[source]

Computes the set difference of 2 SegArrays.

Parameters:

other (SegArray) – SegArray to compute against

Returns:

Segments are the 1d set difference of the segments of self and other

Return type:

SegArray

Examples

>>> a = [1, 2, 3, 1, 4]
>>> b = [3, 1, 4, 5]
>>> c = [1, 3, 3, 5]
>>> d = [2, 2, 4]
>>> seg_a = ak.segarray(ak.array([0, len(a)]), ak.array(a+b))
>>> seg_b = ak.segarray(ak.array([0, len(c)]), ak.array(c+d))
>>> seg_a.setdiff(seg_b)
SegArray([
[2, 4],
[1, 3, 5]
])
setxor(other)[source]

Computes the symmetric difference of 2 SegArrays.

Parameters:

other (SegArray) – SegArray to compute against

Returns:

Segments are the 1d symmetric difference of the segments of self and other

Return type:

SegArray

Examples

>>> a = [1, 2, 3, 1, 4]
>>> b = [3, 1, 4, 5]
>>> c = [1, 3, 3, 5]
>>> d = [2, 2, 4]
>>> seg_a = ak.segarray(ak.array([0, len(a)]), ak.array(a+b))
>>> seg_b = ak.segarray(ak.array([0, len(c)]), ak.array(c+d))
>>> seg_a.setxor(seg_b)
SegArray([
[2, 4, 5],
[1, 3, 5, 2]
])
sum(x=None)[source]
to_hdf(prefix_path, dataset='segarray', mode='truncate', file_type='distribute')[source]

Save the SegArray to HDF5. The result is a collection of HDF5 files, one file per locale of the arkouda server, where each filename starts with prefix_path.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files will share

  • dataset (str) – Name prefix for saved data within the HDF5 file

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, add data as a new column to existing files.

  • file_type (str ("single" | "distribute")) – Default: “distribute”. When set to single, dataset is written to a single file. When distribute, dataset is written on a file per locale. This is only supported by HDF5 files and will have no impact on Parquet files.

Return type:

None

See also

load

to_list()[source]

Convert the segarray into a list containing sub-arrays

Returns:

A list with the same sub-arrays (also list) as this segarray

Return type:

list

See also

to_ndarray

Examples

>>> segarr = ak.SegArray(ak.array([0, 4, 7]), ak.arange(12))
>>> segarr.to_list()
[[0, 1, 2, 3], [4, 5, 6], [7, 8, 9, 10, 11]]
>>> type(segarr.to_list())
list
to_ndarray()[source]

Convert the array into a numpy.ndarray containing sub-arrays

Returns:

A numpy ndarray with the same sub-arrays (also numpy.ndarray) as this array

Return type:

np.ndarray

See also

array, to_list

Examples

>>> segarr = ak.SegArray(ak.array([0, 4, 7]), ak.arange(12))
>>> segarr.to_ndarray()
array([array([0, 1, 2, 3]), array([4, 5, 6]), array([7, 8, 9, 10, 11])])
>>> type(segarr.to_ndarray())
numpy.ndarray
to_parquet(prefix_path, dataset='segarray', mode: str = 'truncate', compression: str | None = None)[source]

Save the SegArray object to Parquet. The result is a collection of files, one file per locale of the arkouda server, where each filename starts with prefix_path. Each locale saves its chunk of the object to its corresponding file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files (must not already exist)

  • mode (str {'truncate' | 'append'}) – Deprecated. Parameter kept to maintain functionality of other calls; only ‘truncate’ is supported. By default, truncate (overwrite) output files, if they exist. If ‘append’, attempt to create new dataset in existing files.

  • compression (str (Optional)) – (None | “snappy” | “gzip” | “brotli” | “zstd” | “lz4”) Sets the compression type used with Parquet files

Return type:

string message indicating result of save operation

Raises:
  • RuntimeError – Raised if a server-side error is thrown saving the pdarray

  • ValueError – If write mode is not Truncate.

Notes

  • Append mode for Parquet has been deprecated. It was not implemented for SegArray.

  • The prefix_path must be visible to the arkouda server and the user must have write permission.

  • Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales for file_type=’distribute’.

  • If any of the output files already exist and the mode is ‘truncate’, they will be overwritten. If the mode is ‘append’ and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result.

  • Any file extension can be used. The file I/O does not rely on the extension to determine the file format.

transfer(hostname: str, port: arkouda.dtypes.int_scalars)[source]

Sends a Segmented Array to a different Arkouda server

Parameters:
  • hostname (str) – The hostname where the Arkouda server intended to receive the Segmented Array is running.

  • port (int_scalars) – The port to send the array over. This needs to be an open port (i.e., not one that the Arkouda server is running on). This will open up numLocales ports, each used in succession, so ports in the range {port..(port+numLocales)} will be used (e.g., running an Arkouda server of 4 nodes with port 1234 passed as port, Arkouda will use ports 1234, 1235, 1236, and 1237 to send the array data). This port must match the port passed to the call to ak.receive_array().

Return type:

A message indicating a complete transfer

Raises:
  • ValueError – Raised if the op is not within the pdarray.BinOps set

  • TypeError – Raised if other is not a pdarray or the pdarray.dtype is not a supported dtype

union(other)[source]

Computes the union of 2 SegArrays.

Parameters:

other (SegArray) – SegArray to compute against

Returns:

Segments are the 1d union of the segments of self and other

Return type:

SegArray

Examples

>>> a = [1, 2, 3, 1, 4]
>>> b = [3, 1, 4, 5]
>>> c = [1, 3, 3, 5]
>>> d = [2, 2, 4]
>>> seg_a = ak.segarray(ak.array([0, len(a)]), ak.array(a+b))
>>> seg_b = ak.segarray(ak.array([0, len(c)]), ak.array(c+d))
>>> seg_a.union(seg_b)
SegArray([
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5]
])
unique(x=None)[source]

Return sub-arrays of unique values.

Parameters:

x (pdarray) – The values to unique, per group. By default, the values of this SegArray’s sub-arrays.

Returns:

Same number of sub-arrays as original SegArray, but elements in sub-array are unique and in sorted order.

Return type:

SegArray

unregister()[source]

Unregister this SegArray object in the arkouda server which was previously registered using register() and/or attached to using attach()

Return type:

None

Raises:

RuntimeError – Raised if the server could not unregister the SegArray object from the Symbol Table

Notes

Objects registered with the server are immune to deletion until they are unregistered.

static unregister_segarray_by_name(user_defined_name)[source]

Using the defined name, remove the registered SegArray object from the Symbol Table

Parameters:

user_defined_name (str) – user defined name which the SegArray object was registered under

Return type:

None

Raises:

RuntimeError – Raised if the server could not unregister the SegArray object from the Symbol Table

update_hdf(prefix_path: str, dataset: str = 'segarray', repack: bool = True)[source]

Overwrite the dataset with the name provided with this SegArray object. If the dataset does not exist it is added.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files

  • repack (bool) – Default: True HDF5 does not release memory on delete. When True, the inaccessible data (that was overwritten) is removed. When False, the data remains, but is inaccessible. Setting to false will yield better performance, but will cause file sizes to expand.

Return type:

None

Raises:

RuntimeError – Raised if a server-side error is thrown saving the SegArray

Notes

  • If the file does not contain a File_Format attribute to indicate how it was saved, the file name is checked for _LOCALE#### to determine whether it is distributed.

  • If the dataset provided does not exist, it will be added

  • Because HDF5 deletes do not release memory, this will create a copy of the file with the new data

class arkouda.Series(data: Tuple | List | arkouda.groupbyclass.groupable_element_type, name=None, index: arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | Tuple | List | arkouda.index.Index | None = None)[source]

One-dimensional arkouda array with axis labels.

Parameters:
  • index (pdarray, Strings) – Optional. An array of indices associated with the data array. If empty, it will default to a range of ints whose size matches the size of the data.

  • data (Tuple, List, groupable_element_type) – a 1D array. Must not be None.

Raises:
  • TypeError – Raised if index is not a pdarray or Strings object Raised if data is not a pdarray, Strings, or Categorical object

  • ValueError – Raised if the index size does not match data size

Notes

The Series class accepts either positional arguments or keyword arguments.

If entering positional arguments:

  • 2 arguments entered: argument 1 is data, argument 2 is index

  • 1 argument entered: argument 1 is data

If entering 1 positional argument, it is assumed that this is the data argument. If only the ‘data’ argument is passed in, an Index will automatically be generated.

If entering keywords, ‘data’ (see Parameters) is required, and ‘index’ (optional) must match the size of ‘data’.

property at: _LocIndexer

Accesses entries of a Series by label

Parameters:

key (pdarray, Strings, Series, list, supported_scalars) – The key or container of keys to access entries for

property iat: _iLocIndexer

Accesses entries of a Series by position

Parameters:

key (int) – The positions or container of positions to access entries for

property iloc: _iLocIndexer

Accesses entries of a Series by position

Parameters:

key (int) – The positions or container of positions to access entries for

property loc: _LocIndexer

Accesses entries of a Series by label

Parameters:

key (pdarray, Strings, Series, list, supported_scalars) – The key or container of keys to access entries for

property shape
dt
objType = 'Series'
str_acc
add(b: Series) Series[source]
static attach(label: str, nkeys: int = 1) Series[source]

DEPRECATED Retrieve a series registered with arkouda

Parameters:
  • label (name used to register the series)

  • nkeys (number of keys, if a multi-index was registered)

static concat(arrays: List, axis: int = 0, index_labels: List[str] | None = None, value_labels: List[str] | None = None) arkouda.dataframe.DataFrame | Series[source]

Concatenate in arkouda a list of arkouda Series or grouped arkouda arrays horizontally or vertically. If a list of grouped arkouda arrays is passed, they are converted to a Series. Each grouping is a 2-tuple with the first item being the key(s) and the second being the value. If horizontal, each Series or grouping must have the same length and the same index. The index of the Series is converted to a column in the dataframe. If it is a multi-index, each level is converted to a column.

Parameters:
  • arrays (The list of series/groupings to concat.)

  • axis (Whether to do a vertical (axis=0) or horizontal (axis=1) concatenation)

  • index_labels (column names(s) to label the index.)

  • value_labels (column names to label values of each series.)

Returns:

  • axis=0 (an arkouda series.)

  • axis=1 (an arkouda dataframe.)

diff() Series[source]

Diffs consecutive values of the series.

Returns a new series with the same index and length. First value is set to NaN.
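
Examples

A minimal sketch; the display format is approximate:

>>> s = ak.Series(ak.array([1, 2, 4, 7]))
>>> s.diff()
0    nan
1    1
2    2
3    3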

fillna(value) Series[source]

Fill NA/NaN values using the specified method.

Parameters:

value (scalar, Series, or pdarray) – Value to use to fill holes (e.g. 0), alternately a Series of values specifying which value to use for each index. Values not in the Series will not be filled. This value cannot be a list.

Returns:

Object with missing values filled.

Return type:

Series

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> from arkouda import Series
>>> import numpy as np
>>> data = ak.Series([1, np.nan, 3, np.nan, 5])
>>> data
0    1
1    nan
2    3
3    nan
4    5
>>> fill_values1 = ak.ones(5)
>>> data.fillna(fill_values1)
0    1
1    1
2    3
3    1
4    5
>>> fill_values2 = Series(ak.ones(5))
>>> data.fillna(fill_values2)
0    1
1    1
2    3
3    1
4    5
>>> fill_values3 = 100.0
>>> data.fillna(fill_values3)
0    1
1    100
2    3
3    100
4    5

classmethod from_return_msg(repMsg: str) Series[source]

Return a Series instance pointing to components created by the arkouda server. The user should not call this function directly.

Parameters:

repMsg (str) –

  • delimited string containing the values and indexes

Returns:

A Series representing a set of pdarray components on the server

Return type:

Series

Raises:

RuntimeError – Raised if a server-side error is thrown in the process of creating the Series instance

has_repeat_labels() bool[source]

Return True if the Series has any labels that appear more than once.
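
Examples

A minimal sketch using a repeated index label:

>>> import arkouda as ak
>>> s = ak.Series(ak.array([1, 2, 3]), index=ak.array([0, 0, 1]))
>>> s.has_repeat_labels()
True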

hasnans() bool[source]

Return True if there are any NaNs.

Return type:

bool

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> from arkouda import Series
>>> import numpy as np
>>> s = ak.Series(ak.array([1, 2, 3, np.nan]))
>>> s.hasnans()
True
head(n: int = 10) Series[source]

Return the first n values of the series

is_registered() bool[source]

Return True iff the object is contained in the registry or is a component of a registered object.

Returns:

Indicates if the object is contained in the registry

Return type:

numpy.bool

Raises:

RegistrationError – Raised if there’s a server-side error or a mis-match of registered components

Notes

Objects registered with the server are immune to deletion until they are unregistered.

isin(lst: arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | List) Series[source]

Find series elements whose values are in the specified list.

Parameters:

lst (pdarray, Strings, or list) – Either a python list or an arkouda array.

Return type:

A Series of arkouda booleans, True for elements that are in the list and False otherwise.
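
Examples

A minimal sketch; the membership list is illustrative:

>>> import arkouda as ak
>>> s = ak.Series(ak.array([1, 2, 3]))
>>> mask = s.isin([2, 3])   # boolean Series: False, True, True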

isna() Series[source]

Detect missing values.

Return a boolean same-sized object indicating if the values are NA. NA values, such as numpy.NaN, get mapped to True. Everything else gets mapped to False. Characters such as empty strings ‘’ are not considered NA values.

Returns:

Mask of bool values for each element in Series that indicates whether an element is an NA value.

Return type:

arkouda.series.Series

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> from arkouda import Series
>>> import numpy as np
>>> s = Series(ak.array([1, 2, np.nan]), index = ak.array([1, 2, 4]))
>>> s.isna()
1    False
2    False
4    True

isnull() Series[source]

Series.isnull is an alias for Series.isna.

Detect missing values.

Return a boolean same-sized object indicating if the values are NA. NA values, such as numpy.NaN, get mapped to True. Everything else gets mapped to False. Characters such as empty strings ‘’ are not considered NA values.

Returns:

Mask of bool values for each element in Series that indicates whether an element is an NA value.

Return type:

arkouda.series.Series

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> from arkouda import Series
>>> import numpy as np
>>> s = Series(ak.array([1, 2, np.nan]), index = ak.array([1, 2, 4]))
>>> s.isnull()
1    False
2    False
4    True

locate(key: int | arkouda.pdarrayclass.pdarray | arkouda.index.Index | Series | List | Tuple) Series[source]

Lookup values by index label

The input can be a scalar, a list of scalars, or a list of lists (if the series has a MultiIndex). As a special case, if a Series is used as the key, the series labels are preserved, with its values used as the key.

Keys will be turned into arkouda arrays as needed.

Return type:

A Series containing the values corresponding to the key.
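
Examples

A small sketch of label lookup; the values and labels are illustrative:

>>> import arkouda as ak
>>> s = ak.Series(ak.array([10, 20, 30]), index=ak.array([5, 6, 7]))
>>> s.locate(6)        # the entry labeled 6
>>> s.locate([5, 7])   # the entries labeled 5 and 7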

map(arg: dict | arkouda.Series) arkouda.Series[source]

Map values of Series according to an input mapping.

Parameters:

arg (dict or Series) – The mapping correspondence.

Returns:

A new series with the same index as the caller. When the input Series has Categorical values, the return Series will have Strings values. Otherwise, the return type will match the input type.

Return type:

arkouda.series.Series

Raises:

TypeError – Raised if arg is not of type dict or arkouda.Series. Raised if series values not of type pdarray, Categorical, or Strings.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> s = ak.Series(ak.array([2, 3, 2, 3, 4]))
>>> s
0    2
1    3
2    2
3    3
4    4

>>> s.map({4: 25.0, 2: 30.0, 1: 7.0, 3: 5.0})
0    30.0
1    5.0
2    30.0
3    5.0
4    25.0

>>> s2 = ak.Series(ak.array(["a","b","c","d"]), index = ak.array([4,2,1,3]))
>>> s.map(s2)
0    b
1    b
2    d
3    d
4    a

memory_usage(index: bool = True, unit='B') int[source]

Return the memory usage of the Series.

The memory usage can optionally include the contribution of the index.

Parameters:
  • index (bool, default True) – Specifies whether to include the memory usage of the Series index.

  • unit (str, default = "B") – Unit to return. One of {‘B’, ‘KB’, ‘MB’, ‘GB’}.

Returns:

Bytes of memory consumed.

Return type:

int

Examples

>>> import arkouda as ak
>>> s = ak.Series(ak.arange(3))
>>> s.memory_usage()
48

Not including the index gives the size of the rest of the data, which is necessarily smaller:

>>> s.memory_usage(index=False)
24

Select the units:

>>> s = ak.Series(ak.arange(3000))
>>> s.memory_usage(unit="KB")
46.875

notna() Series[source]

Detect existing (non-missing) values.

Return a boolean same-sized object indicating if the values are not NA. Non-missing values get mapped to True. Characters such as empty strings ‘’ are not considered NA values. NA values, such as numpy.NaN, get mapped to False values.

Returns:

Mask of bool values for each element in Series that indicates whether an element is not an NA value.

Return type:

arkouda.series.Series

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> from arkouda import Series
>>> import numpy as np
>>> s = Series(ak.array([1, 2, np.nan]), index = ak.array([1, 2, 4]))
>>> s.notna()
1    True
2    True
4    False

notnull() Series[source]

Series.notnull is an alias for Series.notna.

Detect existing (non-missing) values.

Return a boolean same-sized object indicating if the values are not NA. Non-missing values get mapped to True. Characters such as empty strings ‘’ are not considered NA values. NA values, such as numpy.NaN, get mapped to False values.

Returns:

Mask of bool values for each element in Series that indicates whether an element is not an NA value.

Return type:

arkouda.series.Series

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> from arkouda import Series
>>> import numpy as np
>>> s = Series(ak.array([1, 2, np.nan]), index = ak.array([1, 2, 4]))
>>> s.notnull()
1    True
2    True
4    False

static pdconcat(arrays: List, axis: int = 0, labels: arkouda.strings.Strings | None = None) pandas.Series | pandas.DataFrame[source]

Concatenate a list of arkouda Series or grouped arkouda arrays, returning a local pandas object.

If a list of grouped arkouda arrays is passed they are converted to a series. Each grouping is a 2-tuple with the first item being the key(s) and the second being the value.

If horizontal, each series or grouping must have the same length and the same index. The index of the series is converted to a column in the dataframe. If it is a multi-index, each level is converted to a column.

Parameters:
  • arrays (The list of series/groupings to concat.)

  • axis (Whether to do a vertical (axis=0) or horizontal (axis=1) concatenation)

  • labels (names to give the columns of the data frame.)

Returns:

  • axis=0 (a local pandas Series)

  • axis=1 (a local pandas DataFrame)
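
Examples

A minimal sketch of the vertical case (axis=0), which returns a local pandas Series; the values are illustrative:

>>> import arkouda as ak
>>> s1 = ak.Series(ak.array([1, 2]))
>>> s2 = ak.Series(ak.array([3, 4]), index=ak.array([2, 3]))
>>> local = ak.Series.pdconcat([s1, s2])   # pandas.Series on the client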

register(user_defined_name: str)[source]

Register this Series object and underlying components with the Arkouda server

Parameters:

user_defined_name (str) – user-defined name the Series is to be registered under; this will be the root name for underlying components

Returns:

The same Series, which is now registered with the arkouda server and has an updated name. This is an in-place modification; the original is returned to support a fluid programming style. Note that you cannot register two different Series under the same name.

Return type:

Series

Raises:
  • TypeError – Raised if user_defined_name is not a str

  • RegistrationError – If the server was unable to register the Series with the user_defined_name

Notes

Objects registered with the server are immune to deletion until they are unregistered.

sort_index(ascending: bool = True) Series[source]

Sort the series by its index

Parameters:

ascending (bool) – Sort values in ascending (default) or descending order.

Return type:

A new Series, sorted by index.

sort_values(ascending: bool = True) Series[source]

Sort the series numerically

Parameters:

ascending (bool) – Sort values in ascending (default) or descending order.

Return type:

A new Series, sorted by value (smallest to largest when ascending=True).
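
Examples

A small sketch contrasting sort_values with sort_index; the values are illustrative:

>>> import arkouda as ak
>>> s = ak.Series(ak.array([3, 1, 2]), index=ak.array([10, 12, 11]))
>>> by_value = s.sort_values()   # values ordered 1, 2, 3
>>> by_label = s.sort_index()    # labels ordered 10, 11, 12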

tail(n: int = 10) Series[source]

Return the last n values of the series

to_dataframe(index_labels: List[str] | None = None, value_label: str | None = None) arkouda.dataframe.DataFrame[source]

Convert the series to an arkouda DataFrame.

Parameters:
  • index_labels (column name(s) to label the index.)

  • value_label (column name to label values.)

Return type:

An arkouda dataframe.
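
Examples

A minimal sketch; the column names are illustrative:

>>> import arkouda as ak
>>> s = ak.Series(ak.array([1, 2, 3]))
>>> df = s.to_dataframe(index_labels=["id"], value_label="val")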

to_list() list[source]
to_markdown(mode='wt', index=True, tablefmt='grid', storage_options=None, **kwargs)[source]

Print Series in Markdown-friendly format.

Parameters:
  • mode (str, optional) – Mode in which file is opened, “wt” by default.

  • index (bool, optional, default True) – Add index (row) labels.

  • tablefmt (str = "grid") – Table format to call from tabulate: https://pypi.org/project/tabulate/

  • storage_options (dict, optional) – Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc., if using a URL that will be parsed by fsspec, e.g., starting “s3://”, “gcs://”. An error will be raised if providing this argument with a non-fsspec URL. See the fsspec and backend storage implementation docs for the set of allowed keys and values.

  • **kwargs – These parameters will be passed to tabulate.

Note

This function should only be called on small Series as it calls pandas.Series.to_markdown: https://pandas.pydata.org/docs/reference/api/pandas.Series.to_markdown.html

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> s = ak.Series(["elk", "pig", "dog", "quetzal"], name="animal")
>>> print(s.to_markdown())
|    | animal   |
|---:|:---------|
|  0 | elk      |
|  1 | pig      |
|  2 | dog      |
|  3 | quetzal  |

Output markdown with a tabulate option.

>>> print(s.to_markdown(tablefmt="grid"))
+----+----------+
|    | animal   |
+====+==========+
|  0 | elk      |
+----+----------+
|  1 | pig      |
+----+----------+
|  2 | dog      |
+----+----------+
|  3 | quetzal  |
+----+----------+
to_pandas() pandas.Series[source]

Convert the series to a local pandas Series.

topn(n: int = 10) Series[source]

Return the top n values of the series.

Parameters:

n (Number of values to return)

Return type:

A new Series with the top values
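
Examples

A small sketch; the three largest values (10, 7, and 5) are returned:

>>> import arkouda as ak
>>> s = ak.Series(ak.array([10, 5, 1, 3, 7]))
>>> top = s.topn(3)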

unregister()[source]

Unregister this Series object in the arkouda server which was previously registered using register() and/or attached to using attach()

Raises:

RegistrationError – If the object is already unregistered or if there is a server error when attempting to unregister

Notes

Objects registered with the server are immune to deletion until they are unregistered.

validate_key(key: Series | arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.categorical.Categorical | List | supported_scalars) arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.categorical.Categorical | supported_scalars[source]

Validates type requirements for keys when reading or writing the Series. Also converts list and tuple arguments into pdarrays.

Parameters:

key (Series, pdarray, Strings, Categorical, List, supported_scalars) – The key or container of keys that might be used to index into the Series.

Return type:

The validated key(s), with lists and tuples converted to pdarrays

Raises:
  • TypeError – Raised if keys are not boolean values or do not match the type of the labels; also raised if key is not one of the supported types

  • KeyError – Raised if container of keys has keys not present in the Series

  • IndexError – Raised if the length of a boolean key array is different from the Series

validate_val(val: arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | supported_scalars | List) arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | supported_scalars[source]

Validates type requirements for values being written into the Series. Also converts list and tuple arguments into pdarrays.

Parameters:

val (pdarray, Strings, list, supported_scalars) – The value or container of values that might be assigned into the Series.

Return type:

The validated value, with lists converted to pdarrays

Raises:

TypeError – Raised if val is not the same type (or a container with elements of the same type) as the Series. Raised if val is a string or Strings type. Raised if val is not one of the supported types.

value_counts(sort: bool = True) Series[source]

Return a Series containing counts of unique values.

The resulting object will be in descending order so that the first element is the most frequently occurring element.

Parameters:

sort (bool, default True. Whether to sort the results.)
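
Examples

A minimal sketch; with sort=True the most frequent value comes first:

>>> import arkouda as ak
>>> s = ak.Series(ak.array([2, 2, 3]))
>>> counts = s.value_counts()   # value 2 has count 2, value 3 has count 1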

class arkouda.StringAccessor(series)[source]

Bases: Properties

class arkouda.Timedelta(pda, unit: str = _BASE_UNIT)[source]

Bases: _AbstractBaseTime

Represents a duration, the difference between two dates or times.

Timedelta is the Arkouda equivalent of pandas.TimedeltaIndex.

Parameters:
  • pda (int64 pdarray, pd.TimedeltaIndex, pd.Series, or np.timedelta64 array)

  • unit (str, default 'ns') –

    For int64 pdarray, denotes the unit of the input. Ignored for pandas and numpy arrays, which carry their own unit. Not case-sensitive; prefixes of full names (like ‘sec’) are accepted.

    Possible values:

    • ‘weeks’ or ‘w’

    • ‘days’ or ‘d’

    • ‘hours’ or ‘h’

    • ‘minutes’, ‘m’, or ‘t’

    • ‘seconds’ or ‘s’

    • ‘milliseconds’, ‘ms’, or ‘l’

    • ‘microseconds’, ‘us’, or ‘u’

    • ‘nanoseconds’, ‘ns’, or ‘n’

    Unlike in pandas, units cannot be combined or mixed with integers

Notes

The .values attribute is always in nanoseconds with int64 dtype.
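
Examples

A minimal construction sketch, assuming a running arkouda server; the values are illustrative:

>>> import arkouda as ak
>>> td = ak.Timedelta(ak.array([1, 2, 3]), unit='s')   # durations of 1, 2, and 3 seconds
>>> td.total_seconds()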

property components
property days
property microseconds
property nanoseconds
property seconds
special_objType = 'Timedelta'
supported_opeq
supported_with_datetime
supported_with_pdarray
supported_with_r_datetime
supported_with_r_pdarray
supported_with_r_timedelta
supported_with_timedelta
abs()[source]

Absolute value of time interval.

is_registered() numpy.bool_[source]

Return True iff the object is contained in the registry or is a component of a registered object.

Returns:

Indicates if the object is contained in the registry

Return type:

numpy.bool

Raises:

RegistrationError – Raised if there’s a server-side error or a mis-match of registered components

Notes

Objects registered with the server are immune to deletion until they are unregistered.

register(user_defined_name)[source]

Register this Timedelta object and underlying components with the Arkouda server

Parameters:

user_defined_name (str) – user-defined name the Timedelta is to be registered under; this will be the root name for underlying components

Returns:

The same Timedelta, which is now registered with the arkouda server and has an updated name. This is an in-place modification; the original is returned to support a fluid programming style. Note that you cannot register two different Timedeltas under the same name.

Return type:

Timedelta

Raises:
  • TypeError – Raised if user_defined_name is not a str

  • RegistrationError – If the server was unable to register the timedelta with the user_defined_name

Notes

Objects registered with the server are immune to deletion until they are unregistered.

std(ddof: arkouda.dtypes.int_scalars = 0)[source]

Returns the standard deviation as a pd.Timedelta object

sum()[source]

Return the sum of all elements in the array.

to_pandas()[source]

Convert array to a pandas TimedeltaIndex. Note: if the array size exceeds client.maxTransferBytes, a RuntimeError is raised.

See also

to_ndarray

total_seconds()[source]
unregister()[source]

Unregister this timedelta object in the arkouda server which was previously registered using register() and/or attached to using attach()

Raises:

RegistrationError – If the object is already unregistered or if there is a server error when attempting to unregister

Notes

Objects registered with the server are immune to deletion until they are unregistered.

arkouda.VAL_SUFFIX = '_values'
arkouda.abs(pda: arkouda.pdarrayclass.pdarray) arkouda.pdarrayclass.pdarray[source]

Return the element-wise absolute value of the array.

Parameters:

pda (pdarray)

Returns:

A pdarray containing absolute values of the input array elements

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray

Examples

>>> ak.abs(ak.arange(-5,-1))
array([5, 4, 3, 2])
>>> ak.abs(ak.linspace(-5,-1,5))
array([5, 4, 3, 2, 1])
arkouda.akabs(pda: arkouda.pdarrayclass.pdarray) arkouda.pdarrayclass.pdarray

Return the element-wise absolute value of the array.

Parameters:

pda (pdarray)

Returns:

A pdarray containing absolute values of the input array elements

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray

Examples

>>> ak.abs(ak.arange(-5,-1))
array([5, 4, 3, 2])
>>> ak.abs(ak.linspace(-5,-1,5))
array([5, 4, 3, 2, 1])
arkouda.akbool
arkouda.akcast(pda: arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.categorical.Categorical, dt: numpy.dtype | type | str | arkouda.dtypes.BigInt, errors: ErrorMode = ErrorMode.strict) arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.categorical.Categorical | Tuple[arkouda.pdarrayclass.pdarray, arkouda.pdarrayclass.pdarray]

Cast an array to another dtype.

Parameters:
  • pda (pdarray or Strings) – The array of values to cast

  • dt (np.dtype, type, or str) – The target dtype to cast values to

  • errors ({strict, ignore, return_validity}) –

    Controls how errors are handled when casting strings to a numeric type (ignored for casts from numeric types).

    • strict: raise RuntimeError if any string cannot be converted

    • ignore: never raise an error. Uninterpretable strings get

      converted to NaN (float64), -2**63 (int64), zero (uint64 and uint8), or False (bool)

    • return_validity: in addition to returning the same output as “ignore”, also return a bool array indicating where the cast was successful.

Returns:

  • pdarray or Strings – Array of values cast to desired dtype

  • [validity (pdarray(bool)]) – If errors=”return_validity” and input is Strings, a second array is returned with True where the cast succeeded and False where it failed.

Notes

The cast is performed according to Chapel’s casting rules and is NOT safe from overflows or underflows. The user must ensure that the target dtype has the precision and capacity to hold the desired result.

Examples

>>> ak.cast(ak.linspace(1.0,5.0,5), dt=ak.int64)
array([1, 2, 3, 4, 5])
>>> ak.cast(ak.arange(0,5), dt=ak.float64).dtype
dtype('float64')
>>> ak.cast(ak.arange(0,5), dt=ak.bool)
array([False, True, True, True, True])
>>> ak.cast(ak.linspace(0,4,5), dt=ak.bool)
array([False, True, True, True, True])
arkouda.akfloat64
arkouda.akint64
arkouda.akuint64
arkouda.align(*args)[source]

Map multiple arrays of sparse identifiers to a common 0-up index.

Parameters:

*args (pdarrays or sequences of pdarrays) – Arrays to map to dense index

Returns:

aligned – Arrays with values replaced by 0-up indices

Return type:

list of pdarrays
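
Examples

A minimal sketch; the union of the identifiers is mapped onto a dense 0-up index:

>>> import arkouda as ak
>>> a = ak.array([1000, 400, 5000])
>>> b = ak.array([400, 600])
>>> aligned = ak.align(a, b)   # list of two pdarrays with values replaced by dense indices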

arkouda.all(pda: pdarray) numpy.bool_[source]

Return True iff all elements of the array evaluate to True.

Parameters:

pda (pdarray) – The pdarray instance to be evaluated

Returns:

Indicates if all pdarray elements evaluate to True

Return type:

bool

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown
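
Examples

A small sketch; any False element makes the reduction False:

>>> import arkouda as ak
>>> ak.all(ak.array([True, True, True]))
True
>>> ak.all(ak.array([1, 0, 1]) == 1)
False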

arkouda.all_scalars

The DType enum defines the supported Arkouda data types in string form.

arkouda.any(pda: pdarray) numpy.bool_[source]

Return True iff any element of the array evaluates to True.

Parameters:

pda (pdarray) – The pdarray instance to be evaluated

Returns:

Indicates if 1..n pdarray elements evaluate to True

Return type:

bool

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown
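
Examples

A small sketch; a single True element makes the reduction True:

>>> import arkouda as ak
>>> ak.any(ak.array([False, False, True]))
True
>>> ak.any(ak.array([0, 0, 0]) == 1)
False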

arkouda.arange(*args, **kwargs) arkouda.pdarrayclass.pdarray[source]

arange([start,] stop[, stride,] dtype=int64)

Create a pdarray of consecutive integers within the interval [start, stop). If only one arg is given then arg is the stop parameter. If two args are given, then the first arg is start and second is stop. If three args are given, then the first arg is start, second is stop, third is stride.

The return value is cast to type dtype

Parameters:
  • start (int_scalars, optional) – Starting value (inclusive)

  • stop (int_scalars) – Stopping value (exclusive)

  • stride (int_scalars, optional) – The difference between consecutive elements; the default stride is 1. If stride is specified, then start must also be specified.

  • dtype (np.dtype, type, or str) – The target dtype to cast values to

  • max_bits (int) – Specifies the maximum number of bits; only used for bigint pdarrays

Returns:

Integers from start (inclusive) to stop (exclusive) by stride

Return type:

pdarray, dtype

Raises:
  • TypeError – Raised if start, stop, or stride is not an int object

  • ZeroDivisionError – Raised if stride == 0

See also

linspace, zeros, ones, randint

Notes

Negative strides result in decreasing values. Currently, only int64 pdarrays can be created with this method. For float64 arrays, use the linspace method.

Examples

>>> ak.arange(0, 5, 1)
array([0, 1, 2, 3, 4])
>>> ak.arange(5, 0, -1)
array([5, 4, 3, 2, 1])
>>> ak.arange(0, 10, 2)
array([0, 2, 4, 6, 8])
>>> ak.arange(-5, -10, -1)
array([-5, -6, -7, -8, -9])
arkouda.arccos(pda: arkouda.pdarrayclass.pdarray, where: bool | arkouda.pdarrayclass.pdarray = True) arkouda.pdarrayclass.pdarray[source]

Return the element-wise inverse cosine of the array. The result is between 0 and pi.

Parameters:
  • pda (pdarray)

  • where (Boolean or pdarray) – This condition is broadcast over the input. At locations where the condition is True, the inverse cosine will be applied to the corresponding value. Elsewhere, it will retain its original value. Default set to True.

Returns:

A pdarray containing inverse cosine for each element of the original pdarray

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray
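
Examples

A minimal sketch; the endpoints of the domain map to 0 and pi:

>>> import arkouda as ak
>>> ak.arccos(ak.array([1.0, -1.0]))   # approximately [0.0, 3.14159...]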

arkouda.arccosh(pda: arkouda.pdarrayclass.pdarray, where: bool | arkouda.pdarrayclass.pdarray = True) arkouda.pdarrayclass.pdarray[source]

Return the element-wise inverse hyperbolic cosine of the array.

Parameters:
  • pda (pdarray)

  • where (Boolean or pdarray) – This condition is broadcast over the input. At locations where the condition is True, the inverse hyperbolic cosine will be applied to the corresponding value. Elsewhere, it will retain its original value. Default set to True.

Returns:

A pdarray containing inverse hyperbolic cosine for each element of the original pdarray

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray

arkouda.arcsin(pda: arkouda.pdarrayclass.pdarray, where: bool | arkouda.pdarrayclass.pdarray = True) arkouda.pdarrayclass.pdarray[source]

Return the element-wise inverse sine of the array. The result is between -pi/2 and pi/2.

Parameters:
  • pda (pdarray)

  • where (Boolean or pdarray) – This condition is broadcast over the input. At locations where the condition is True, the inverse sine will be applied to the corresponding value. Elsewhere, it will retain its original value. Default set to True.

Returns:

A pdarray containing inverse sine for each element of the original pdarray

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray

arkouda.arcsinh(pda: arkouda.pdarrayclass.pdarray, where: bool | arkouda.pdarrayclass.pdarray = True) arkouda.pdarrayclass.pdarray[source]

Return the element-wise inverse hyperbolic sine of the array.

Parameters:
  • pda (pdarray)

  • where (Boolean or pdarray) – This condition is broadcast over the input. At locations where the condition is True, the inverse hyperbolic sine will be applied to the corresponding value. Elsewhere, it will retain its original value. Default set to True.

Returns:

A pdarray containing inverse hyperbolic sine for each element of the original pdarray

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray

arkouda.arctan(pda: arkouda.pdarrayclass.pdarray, where: bool | arkouda.pdarrayclass.pdarray = True) arkouda.pdarrayclass.pdarray[source]

Return the element-wise inverse tangent of the array. The result is between -pi/2 and pi/2.

Parameters:
  • pda (pdarray)

  • where (Boolean or pdarray) – This condition is broadcast over the input. At locations where the condition is True, the inverse tangent will be applied to the corresponding value. Elsewhere, it will retain its original value. Default set to True.

Returns:

A pdarray containing inverse tangent for each element of the original pdarray

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray

arkouda.arctan2(num: arkouda.pdarrayclass.pdarray | arkouda.dtypes.numeric_scalars, denom: arkouda.pdarrayclass.pdarray | arkouda.dtypes.numeric_scalars, where: bool | arkouda.pdarrayclass.pdarray = True) arkouda.pdarrayclass.pdarray[source]

Return the element-wise inverse tangent of the array pair. The result chosen is the signed angle in radians between the ray ending at the origin and passing through the point (1,0), and the ray ending at the origin and passing through the point (denom, num). The result is between -pi and pi.

Parameters:
  • num (Union[numeric_scalars, pdarray]) – Numerator of the arctan2 argument.

  • denom (Union[numeric_scalars, pdarray]) – Denominator of the arctan2 argument.

  • where (Boolean or pdarray) – This condition is broadcast over the input. At locations where the condition is True, the inverse tangent will be applied to the corresponding values. Elsewhere, it will retain its original value. Default set to True.

Returns:

A pdarray containing the inverse tangent for each corresponding element pair of the original pdarrays, using the signed values of the numerator and denominator to get proper placement on the unit circle.

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray
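
Examples

A small sketch; the signs of num and denom select the quadrant:

>>> import arkouda as ak
>>> num = ak.array([1.0, -1.0])
>>> denom = ak.array([1.0, -1.0])
>>> ak.arctan2(num, denom)   # approximately [pi/4, -3*pi/4]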

arkouda.arctanh(pda: arkouda.pdarrayclass.pdarray, where: bool | arkouda.pdarrayclass.pdarray = True) arkouda.pdarrayclass.pdarray[source]

Return the element-wise inverse hyperbolic tangent of the array.

Parameters:
  • pda (pdarray)

  • where (Boolean or pdarray) – This condition is broadcast over the input. At locations where the condition is True, the inverse hyperbolic tangent will be applied to the corresponding value. Elsewhere, it will retain its original value. Default set to True.

Returns:

A pdarray containing inverse hyperbolic tangent for each element of the original pdarray

Return type:

pdarray

Raises:

TypeError – Raised if the parameters are not a pdarray or numeric scalar.

arkouda.argmax(pda: pdarray) numpy.int64 | numpy.uint64[source]

Return the index of the first occurrence of the array max value.

Parameters:

pda (pdarray) – Values for which to calculate the argmax

Returns:

The index of the argmax calculated from the pda

Return type:

Union[np.int64, np.uint64]

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown
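
Examples

A small sketch; the maximum value 10 first occurs at index 0:

>>> A = ak.array([10, 5, 1, 3, 7, 2, 9, 0])
>>> ak.argmax(A)
0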

arkouda.argmaxk(pda: pdarray, k: arkouda.dtypes.int_scalars) pdarray[source]

Find the indices corresponding to the k maximum values of an array.

Returns the indices of the k largest values of an array, sorted by value.

Parameters:
  • pda (pdarray) – Input array.

  • k (int_scalars) – The desired count of indices corresponding to maximum array values

Returns:

The indices of the maximum k values from the pda, sorted

Return type:

pdarray, int

Raises:
  • TypeError – Raised if pda is not a pdarray or k is not an integer

  • ValueError – Raised if the pda is empty or k < 1

Notes

This call is equivalent in value to:

ak.argsort(a)[-k:]

and generally outperforms this operation.

This reduction will see a significant drop in performance as k grows beyond a certain value. This value is system dependent, but generally about a k of 5 million is where performance degradation has been observed.

Examples

>>> A = ak.array([10,5,1,3,7,2,9,0])
>>> ak.argmaxk(A, 3)
array([4, 6, 0])
>>> ak.argmaxk(A, 4)
array([1, 4, 6, 0])
arkouda.argmin(pda: pdarray) numpy.int64 | numpy.uint64[source]

Return the index of the first occurrence of the array min value.

Parameters:

pda (pdarray) – Values for which to calculate the argmin

Returns:

The index of the argmin calculated from the pda

Return type:

Union[np.int64, np.uint64]

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown
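
Examples

A small sketch; the minimum value 0 first occurs at index 7:

>>> A = ak.array([10, 5, 1, 3, 7, 2, 9, 0])
>>> ak.argmin(A)
7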

arkouda.argmink(pda: pdarray, k: arkouda.dtypes.int_scalars) pdarray[source]

Finds the indices corresponding to the k minimum values of an array.

Parameters:
  • pda (pdarray) – Input array.

  • k (int_scalars) – The desired count of indices corresponding to minimum array values

Returns:

The indices of the minimum k values from the pda, sorted

Return type:

pdarray, int

Raises:
  • TypeError – Raised if pda is not a pdarray or k is not an integer

  • ValueError – Raised if the pda is empty or k < 1

Notes

This call is equivalent in value to:

ak.argsort(a)[:k]

and generally outperforms this operation.

This reduction will see a significant drop in performance as k grows beyond a certain value. This value is system dependent, but generally about a k of 5 million is where performance degradation has been observed.

Examples

>>> A = ak.array([10,5,1,3,7,2,9,0])
>>> ak.argmink(A, 3)
array([7, 2, 5])
>>> ak.argmink(A, 4)
array([7, 2, 5, 3])
arkouda.argsort(pda: arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.categorical.Categorical, algorithm: SortingAlgorithm = SortingAlgorithm.RadixSortLSD, axis: arkouda.dtypes.int_scalars = 0) arkouda.pdarrayclass.pdarray[source]

Return the permutation that sorts the array.

Parameters:

pda (pdarray or Strings or Categorical) – The array to sort (int64, uint64, or float64)

Returns:

The indices such that pda[indices] is sorted

Return type:

pdarray, int64

Raises:

TypeError – Raised if the parameter is other than a pdarray or Strings

See also

coargsort

Notes

Uses a least-significant-digit radix sort, which is stable and resilient to non-uniformity in data but communication intensive.

Examples

>>> a = ak.randint(0, 10, 10)
>>> perm = ak.argsort(a)
>>> a[perm]
array([0, 1, 1, 3, 4, 5, 7, 8, 8, 9])
arkouda.array(a: arkouda.pdarrayclass.pdarray | numpy.ndarray | Iterable, dtype: numpy.dtype | type | str | None = None, max_bits: int = -1) arkouda.pdarrayclass.pdarray | arkouda.strings.Strings[source]

Convert a Python or Numpy Iterable to a pdarray or Strings object, sending the corresponding data to the arkouda server.

Parameters:
  • a (Union[pdarray, np.ndarray]) – Rank-1 array of a supported dtype

  • dtype (np.dtype, type, or str) – The target dtype to cast values to

  • max_bits (int) – Specifies the maximum number of bits; only used for bigint pdarrays

Returns:

A pdarray instance stored on arkouda server or Strings instance, which is composed of two pdarrays stored on arkouda server

Return type:

pdarray or Strings

Raises:
  • TypeError – Raised if a is not a pdarray, np.ndarray, or Python Iterable such as a list, array, tuple, or deque

  • RuntimeError – Raised if a is not one-dimensional, nbytes > maxTransferBytes, a.dtype is not supported (not in DTypes), or if the product of a.size and a.itemsize > maxTransferBytes

  • ValueError – Raised if the returned message is malformed or does not contain the fields required to generate the array.

Notes

The number of bytes in the input array cannot exceed ak.client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overwhelming the connection between the Python client and the arkouda server, under the assumption that it is a low-bandwidth connection. The user may override this limit by setting ak.client.maxTransferBytes to a larger value, but should proceed with caution.

If the pdarray or ndarray is of dtype U (unicode string), this method is called twice recursively to create the Strings object and the two corresponding pdarrays for string bytes and offsets, respectively.

Examples

>>> import numpy as np
>>> ak.array(np.arange(1,10))
array([1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> ak.array(range(1,10))
array([1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> strings = ak.array([f'string {i}' for i in range(0,5)])
>>> type(strings)
<class 'arkouda.strings.Strings'>
arkouda.attach(name: str)[source]
arkouda.attach_all(names: list)[source]

Attach to all objects registered with the names provided.

Parameters:

names (list) – List of names to attach to

Return type:

dict
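
Examples

A minimal sketch, assuming an object was previously registered under the given name:

>>> import arkouda as ak
>>> a = ak.zeros(3)
>>> a.register("my_zeros")
>>> objects = ak.attach_all(["my_zeros"])   # dict mapping each name to its attached object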

arkouda.attach_pdarray(user_defined_name: str) pdarray[source]

Class method to return a pdarray attached to the registered name in the arkouda server, which was registered using register().

Parameters:

user_defined_name (str) – user defined name which array was registered under

Returns:

pdarray which is bound to the corresponding server side component which was registered with user_defined_name

Return type:

pdarray

Raises:

TypeError – Raised if user_defined_name is not a str

Notes

Registered names/pdarrays in the server are immune to deletion until they are unregistered.

Examples

>>> a = ak.zeros(100)
>>> a.register("my_zeros")
>>> # potentially disconnect from server and reconnect to server
>>> b = ak.attach_pdarray("my_zeros")
>>> # ...other work...
>>> b.unregister()
arkouda.bigint
arkouda.bigint_from_uint_arrays(arrays, max_bits=-1)[source]

Create a bigint pdarray from an iterable of uint pdarrays. The first item in arrays will be the highest 64 bits and the last item will be the lowest 64 bits.

Parameters:
  • arrays (Sequence[pdarray]) – An iterable of uint pdarrays used to construct the bigint pdarray. The first item in arrays will be the highest 64 bits and the last item will be the lowest 64 bits.

  • max_bits (int) – Specifies the maximum number of bits; only used for bigint pdarrays

Returns:

bigint pdarray constructed from uint arrays

Return type:

pdarray

Raises:
  • TypeError – Raised if any pdarray in arrays has a dtype other than uint or if the pdarrays are not the same size.

  • RuntimeError – Raised if there is a server-side error thrown

Examples

>>> a = ak.bigint_from_uint_arrays([ak.ones(5, dtype=ak.uint64), ak.arange(5, dtype=ak.uint64)])
>>> a
array(["18446744073709551616" "18446744073709551617" "18446744073709551618"
"18446744073709551619" "18446744073709551620"])
>>> a.dtype
dtype(bigint)
>>> all(a[i] == 2**64 + i for i in range(5))
True
arkouda.bitType
arkouda.bitType
arkouda.bool
arkouda.bool_scalars
arkouda.broadcast(segments: arkouda.pdarrayclass.pdarray, values: arkouda.pdarrayclass.pdarray | arkouda.strings.Strings, size: int | numpy.int64 | numpy.uint64 = -1, permutation: arkouda.pdarrayclass.pdarray | None = None)[source]

Broadcast a dense column vector to the rows of a sparse matrix or grouped array.

Parameters:
  • segments (pdarray, int64) – Offsets of the start of each row in the sparse matrix or grouped array. Must be sorted in ascending order.

  • values (pdarray, Strings) – The values to broadcast, one per row (or group)

  • size (int) – The total number of nonzeros in the matrix. If permutation is given, this argument is ignored and the size is inferred from the permutation array.

  • permutation (pdarray, int64) – The permutation to go from the original ordering of nonzeros to the ordering grouped by row. To broadcast values back to the original ordering, this permutation will be inverted. If no permutation is supplied, it is assumed that the original nonzeros were already grouped by row. In this case, the size argument must be given.

Returns:

The broadcast values, one per nonzero

Return type:

pdarray, Strings

Raises:

ValueError

  • If segments and values are different sizes

  • If segments are empty

  • If number of nonzeros (either user-specified or inferred from permutation) is less than one

Examples

>>> # Define a sparse matrix with 3 rows and 7 nonzeros
>>> row_starts = ak.array([0, 2, 5])
>>> nnz = 7
>>> # Broadcast the row number to each nonzero element
>>> row_number = ak.arange(3)
>>> ak.broadcast(row_starts, row_number, nnz)
array([0 0 1 1 1 2 2])
>>> # If the original nonzeros were in reverse order...
>>> permutation = ak.arange(6, -1, -1)
>>> ak.broadcast(row_starts, row_number, permutation=permutation)
array([2 2 1 1 1 0 0])
arkouda.broadcast_dims(sa: Sequence[int], sb: Sequence[int]) Tuple[int, Ellipsis][source]

Algorithm to determine the shape of the broadcast pdarray, given two input array shapes

see: https://data-apis.org/array-api/latest/API_specification/broadcasting.html#algorithm
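
Examples

An illustrative sketch of the array-API broadcasting rule:

>>> ak.broadcast_dims((4, 1), (3,))
(4, 3)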

arkouda.broadcast_to_shape(pda: pdarray, shape: Tuple[int, Ellipsis]) pdarray[source]

Expand an array’s rank to the specified shape using broadcasting
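
Examples

A minimal sketch; multi-dimensional results require a server build with multi-dimensional support:

>>> a = ak.arange(3)
>>> b = ak.broadcast_to_shape(a, (2, 3))  # rank expanded to (2, 3); each row of b repeats a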

arkouda.cast(pda: arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.categorical.Categorical, dt: numpy.dtype | type | str | arkouda.dtypes.BigInt, errors: ErrorMode = ErrorMode.strict) arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.categorical.Categorical | Tuple[arkouda.pdarrayclass.pdarray, arkouda.pdarrayclass.pdarray][source]

Cast an array to another dtype.

Parameters:
  • pda (pdarray or Strings) – The array of values to cast

  • dt (np.dtype, type, or str) – The target dtype to cast values to

  • errors ({strict, ignore, return_validity}) –

    Controls how errors are handled when casting strings to a numeric type (ignored for casts from numeric types).

    • strict: raise RuntimeError if any string cannot be converted

    • ignore: never raise an error. Uninterpretable strings get converted to NaN (float64), -2**63 (int64), zero (uint64 and uint8), or False (bool)

    • return_validity: in addition to returning the same output as “ignore”, also return a bool array indicating where the cast was successful.

Returns:

  • pdarray or Strings – Array of values cast to desired dtype

  • validity (pdarray, bool) – If errors=”return_validity” and input is Strings, a second array is returned with True where the cast succeeded and False where it failed.

Notes

The cast is performed according to Chapel’s casting rules and is NOT safe from overflows or underflows. The user must ensure that the target dtype has the precision and capacity to hold the desired result.

Examples

>>> ak.cast(ak.linspace(1.0,5.0,5), dt=ak.int64)
array([1, 2, 3, 4, 5])
>>> ak.cast(ak.arange(0,5), dt=ak.float64).dtype
dtype('float64')
>>> ak.cast(ak.arange(0,5), dt=ak.bool)
array([False, True, True, True, True])
>>> ak.cast(ak.linspace(0,4,5), dt=ak.bool)
array([False, True, True, True, True])
arkouda.ceil(pda: arkouda.pdarrayclass.pdarray) arkouda.pdarrayclass.pdarray[source]

Return the element-wise ceiling of the array.

Parameters:

pda (pdarray)

Returns:

A pdarray containing ceiling values of the input array elements

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray

Examples

>>> ak.ceil(ak.linspace(1.1,5.5,5))
array([2, 3, 4, 5, 6])
arkouda.check_np_dtype(dt: numpy.dtype | BigInt) None[source]

Assert that numpy dtype dt is one of the dtypes supported by arkouda, otherwise raise TypeError.

Raises:

TypeError – Raised if the dtype is not in supported dtypes or if dt is not a np.dtype
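
Examples

A minimal sketch of the supported and unsupported cases:

>>> import numpy as np
>>> ak.check_np_dtype(np.dtype(np.int64))  # supported dtype: returns None
>>> # an unsupported dtype, e.g. np.float16, raises TypeError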

arkouda.chisquare(f_obs, f_exp=None, ddof=0)[source]

Computes the chi-square statistic and p-value.

Parameters:
  • f_obs (pdarray) – The observed frequency.

  • f_exp (pdarray, default = None) – The expected frequency.

  • ddof (int) – The delta degrees of freedom.

Return type:

arkouda.akstats.Power_divergenceResult

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> from arkouda.akstats import chisquare
>>> chisquare(ak.array([10, 20, 30, 10]), ak.array([10, 30, 20, 10]))
Power_divergenceResult(statistic=8.333333333333334, pvalue=0.03960235520756414)

See also

scipy.stats.chisquare, arkouda.akstats.power_divergence

References

[1] “Chi-squared test”, https://en.wikipedia.org/wiki/Chi-squared_test

[2] “scipy.stats.chisquare”, https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chisquare.html

arkouda.clear() None[source]

Send a clear message to clear all unregistered data from the server symbol table

Return type:

None

Raises:

RuntimeError – Raised if there is a server-side error in executing clear request
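
Examples

A minimal sketch; registered objects survive a clear, unregistered ones do not:

>>> a = ak.arange(10)
>>> a.register("keep_me")
>>> b = ak.arange(10)
>>> ak.clear()  # b's server-side data is deleted; a remains available via its registered name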

arkouda.clip(pda: arkouda.pdarrayclass.pdarray, lo: arkouda.dtypes.numeric_scalars | arkouda.pdarrayclass.pdarray, hi: arkouda.dtypes.numeric_scalars | arkouda.pdarrayclass.pdarray) arkouda.pdarrayclass.pdarray[source]

Clip (limit) the values in an array to a given range [lo,hi]

Given an array a, values outside the range are clipped to the range edges, such that all elements lie in the range.

There is no check to enforce that lo < hi. If lo > hi, the corresponding value of the array will be set to hi.

If lo or hi (or both) are pdarrays, the check is by pairwise elements. See examples.

Parameters:
  • pda (pdarray, int64 or float64) – the array of values to clip

  • lo (scalar or pdarray, int64 or float64) – the lower value of the clipping range

  • hi (scalar or pdarray, int64 or float64) – the higher value of the clipping range


Returns:

A pdarray matching pda, except that element x remains x if lo <= x <= hi, or becomes lo if x < lo, or becomes hi if x > hi.

Return type:

arkouda.pdarrayclass.pdarray

Examples

>>> a = ak.array([1,2,3,4,5,6,7,8,9,10])
>>> ak.clip(a,3,8)
array([3,3,3,4,5,6,7,8,8,8])
>>> ak.clip(a,3,8.0)
array([3.00000000000000000 3.00000000000000000 3.00000000000000000 4.00000000000000000
       5.00000000000000000 6.00000000000000000 7.00000000000000000 8.00000000000000000
       8.00000000000000000 8.00000000000000000])
>>> ak.clip(a,None,7)
array([1,2,3,4,5,6,7,7,7,7])
>>> ak.clip(a,5,None)
array([5,5,5,5,5,6,7,8,9,10])
>>> ak.clip(a,None,None)
ValueError : either min or max must be supplied
>>> ak.clip(a,ak.array([2,2,3,3,8,8,5,5,6,6]),8)
array([2,2,3,4,8,8,7,8,8,8])
>>> ak.clip(a,4,ak.array([10,9,8,7,6,5,5,5,5,5]))
array([4,4,4,4,5,5,5,5,5,5])

Notes

Either lo or hi may be None, but not both. If lo > hi, all x = hi. If all inputs are int64, output is int64, but if any input is float64, output is float64.

Raises:

ValueError – Raised if both lo and hi are None

arkouda.clz(pda: pdarray) pdarray[source]

Count leading zeros for each integer in an array.

Parameters:

pda (pdarray, int64, uint64, bigint) – Input array (must be integral).

Returns:

lz – The number of leading zeros of each element.

Return type:

pdarray

Raises:

TypeError – If input array is not int64, uint64, or bigint

Examples

>>> A = ak.arange(10)
>>> ak.clz(A)
array([64, 63, 62, 62, 61, 61, 61, 61, 60, 60])
arkouda.coargsort(arrays: Sequence[arkouda.strings.Strings | arkouda.pdarrayclass.pdarray | arkouda.categorical.Categorical], algorithm: SortingAlgorithm = SortingAlgorithm.RadixSortLSD) arkouda.pdarrayclass.pdarray[source]

Return the permutation that groups the rows (left-to-right), if the input arrays are treated as columns. The permutation sorts numeric columns, but not strings/Categoricals – strings/Categoricals are grouped, but not ordered.

Parameters:

arrays (Sequence[Union[Strings, pdarray, Categorical]]) – The columns (int64, uint64, float64, Strings, or Categorical) to sort by row

Returns:

The indices that permute the rows to grouped order

Return type:

pdarray, int64

Raises:

ValueError – Raised if the pdarrays are not of the same size or if the parameter is not an Iterable containing pdarrays, Strings, or Categoricals

See also

argsort

Notes

Uses a least-significant-digit radix sort, which is stable and resilient to non-uniformity in data but communication intensive. Starts with the last array and moves forward. This sort operates directly on numeric types, but for Strings, it operates on a hash. Thus, while grouping of equivalent strings is guaranteed, lexicographic ordering of the groups is not. For Categoricals, coargsort sorts based on Categorical.codes which guarantees grouping of equivalent categories but not lexicographic ordering of those groups.

Examples

>>> a = ak.array([0, 1, 0, 1])
>>> b = ak.array([1, 1, 0, 0])
>>> perm = ak.coargsort([a, b])
>>> perm
array([2, 0, 3, 1])
>>> a[perm]
array([0, 0, 1, 1])
>>> b[perm]
array([0, 1, 0, 1])
arkouda.complex128
arkouda.complex64
arkouda.compute_join_size(a: arkouda.pdarrayclass.pdarray, b: arkouda.pdarrayclass.pdarray) Tuple[int, int][source]

Compute the internal size of a hypothetical join between a and b. Returns both the number of elements and number of bytes required for the join.
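
Examples

A minimal sketch; the returned values depend on the server's join internals:

>>> a = ak.arange(10)
>>> b = ak.arange(10)
>>> nelem, nbytes = ak.compute_join_size(a, b)  # element count and byte estimate for the join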

arkouda.concatenate(arrays: Sequence[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | Categorical], ordered: bool = True) arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | Categorical[source]

Concatenate a list or tuple of pdarray or Strings objects into one pdarray or Strings object, respectively.

Parameters:
  • arrays (Sequence[Union[pdarray,Strings,Categorical]]) – The arrays to concatenate. Must all have same dtype.

  • ordered (bool) – If True (default), the arrays will be appended in the order given. If False, array data may be interleaved in blocks, which can greatly improve performance but results in non-deterministic ordering of elements.

Returns:

Single pdarray or Strings object containing all values, returned in the original order

Return type:

Union[pdarray,Strings,Categorical]

Raises:
  • ValueError – Raised if arrays is empty or if 1..n pdarrays have differing dtypes

  • TypeError – Raised if arrays is not a pdarrays or Strings python Sequence such as a list or tuple

  • RuntimeError – Raised if 1..n array elements are dtypes for which concatenate has not been implemented.

Examples

>>> ak.concatenate([ak.array([1, 2, 3]), ak.array([4, 5, 6])])
array([1, 2, 3, 4, 5, 6])
>>> ak.concatenate([ak.array([True,False,True]),ak.array([False,True,True])])
array([True, False, True, False, True, True])
>>> ak.concatenate([ak.array(['one','two']),ak.array(['three','four','five'])])
array(['one', 'two', 'three', 'four', 'five'])
arkouda.convert_if_categorical(values)[source]

Convert a Categorical array to Strings for display

arkouda.corr(x: pdarray, y: pdarray) numpy.float64[source]

Return the correlation between x and y

Parameters:
  • x (pdarray) – One of the pdarrays used to calculate correlation

  • y (pdarray) – One of the pdarrays used to calculate correlation

Returns:

The scalar correlation of the two pdarrays

Return type:

np.float64

Raises:
  • TypeError – Raised if x or y is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown

See also

std, cov

Notes

The correlation is calculated by cov(x, y) / (x.std(ddof=1) * y.std(ddof=1))
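
Examples

A small sketch; an exact linear relationship yields a correlation of 1.0:

>>> x = ak.arange(10)
>>> y = 2 * x
>>> ak.corr(x, y)
1.0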

arkouda.cos(pda: arkouda.pdarrayclass.pdarray, where: bool | arkouda.pdarrayclass.pdarray = True) arkouda.pdarrayclass.pdarray[source]

Return the element-wise cosine of the array.

Parameters:
  • pda (pdarray)

  • where (Boolean or pdarray) – This condition is broadcast over the input. At locations where the condition is True, the cosine will be applied to the corresponding value. Elsewhere, it will retain its original value. Default set to True.

Returns:

A pdarray containing cosine for each element of the original pdarray

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray
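
Examples

A minimal sketch of the where parameter; float display omitted:

>>> a = ak.linspace(0, 3.0, 4)
>>> c = ak.cos(a)                # cosine of every element
>>> c2 = ak.cos(a, where=a > 1)  # elements with a <= 1 keep their original values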

arkouda.cosh(pda: arkouda.pdarrayclass.pdarray, where: bool | arkouda.pdarrayclass.pdarray = True) arkouda.pdarrayclass.pdarray[source]

Return the element-wise hyperbolic cosine of the array.

Parameters:
  • pda (pdarray)

  • where (Boolean or pdarray) – This condition is broadcast over the input. At locations where the condition is True, the hyperbolic cosine will be applied to the corresponding value. Elsewhere, it will retain its original value. Default set to True.

Returns:

A pdarray containing hyperbolic cosine for each element of the original pdarray

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray

arkouda.cov(x: pdarray, y: pdarray) numpy.float64[source]

Return the covariance of x and y

Parameters:
  • x (pdarray) – One of the pdarrays used to calculate covariance

  • y (pdarray) – One of the pdarrays used to calculate covariance

Returns:

The scalar covariance of the two pdarrays

Return type:

np.float64

Raises:
  • TypeError – Raised if x or y is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown

See also

mean, var

Notes

The covariance is calculated by cov = ((x - x.mean()) * (y - y.mean())).sum() / (x.size - 1).
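
Examples

A small sketch; cov(x, x) equals the sample variance of x (ddof=1):

>>> x = ak.arange(5)
>>> ak.cov(x, x)
2.5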

arkouda.create_pdarray(repMsg: str, max_bits=None) pdarray[source]

Return a pdarray instance pointing to an array created by the arkouda server. The user should not call this function directly.

Parameters:

repMsg (str) – space-delimited string containing the pdarray name, datatype, size, dimension, shape, and itemsize

Returns:

A client-side pdarray instance with the attributes and data of the server-side array described by repMsg

Return type:

pdarray

Raises:
  • ValueError – If there’s an error in parsing the repMsg parameter into the six values needed to create the pdarray instance

  • RuntimeError – Raised if a server-side error is thrown in the process of creating the pdarray instance

arkouda.ctz(pda: pdarray) pdarray[source]

Count trailing zeros for each integer in an array.

Parameters:

pda (pdarray, int64, uint64, bigint) – Input array (must be integral).

Returns:

tz – The number of trailing zeros of each element.

Return type:

pdarray

Notes

ctz(0) is defined to be zero.

Raises:

TypeError – If input array is not int64, uint64, or bigint

Examples

>>> A = ak.arange(10)
>>> ak.ctz(A)
array([0, 0, 1, 0, 2, 0, 1, 0, 3, 0])
arkouda.cumprod(pda: arkouda.pdarrayclass.pdarray) arkouda.pdarrayclass.pdarray[source]

Return the cumulative product over the array.

The product is inclusive, such that the i th element of the result is the product of elements up to and including i.

Parameters:

pda (pdarray)

Returns:

A pdarray containing cumulative products for each element of the original pdarray

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray

Examples

>>> ak.cumprod(ak.arange(1,5))
array([1, 2, 6, 24])
>>> ak.cumprod(ak.uniform(5,1.0,5.0))
array([1.5728783400481925, 7.0472855509390593, 33.78523998586553,
       134.05309592737584, 450.21589865655358])
arkouda.cumsum(pda: arkouda.pdarrayclass.pdarray) arkouda.pdarrayclass.pdarray[source]

Return the cumulative sum over the array.

The sum is inclusive, such that the i th element of the result is the sum of elements up to and including i.

Parameters:

pda (pdarray)

Returns:

A pdarray containing cumulative sums for each element of the original pdarray

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray

Examples

>>> ak.cumsum(ak.arange(1,5))
array([1, 3, 6, 10])
>>> ak.cumsum(ak.uniform(5,1.0,5.0))
array([3.1598310770203937, 5.4110385860243131, 9.1622479306453748,
       12.710615785506533, 13.945880905466208])
>>> ak.cumsum(ak.randint(0, 1, 5, dtype=ak.bool))
array([0, 1, 1, 2, 3])
arkouda.date_operators(cls)[source]
arkouda.date_range(start=None, end=None, periods=None, freq=None, tz=None, normalize=False, name=None, closed=None, inclusive='both', **kwargs)[source]

Creates a fixed frequency Datetime range. Alias for ak.Datetime(pd.date_range(args)). Subject to size limit imposed by client.maxTransferBytes.

Parameters:
  • start (str or datetime-like, optional) – Left bound for generating dates.

  • end (str or datetime-like, optional) – Right bound for generating dates.

  • periods (int, optional) – Number of periods to generate.

  • freq (str or DateOffset, default 'D') – Frequency strings can have multiples, e.g. ‘5H’. See timeseries.offset_aliases for a list of frequency aliases.

  • tz (str or tzinfo, optional) – Time zone name for returning localized DatetimeIndex, for example ‘Asia/Hong_Kong’. By default, the resulting DatetimeIndex is timezone-naive.

  • normalize (bool, default False) – Normalize start/end dates to midnight before generating date range.

  • name (str, default None) – Name of the resulting DatetimeIndex.

  • closed ({None, 'left', 'right'}, optional) – Make the interval closed with respect to the given frequency to the ‘left’, ‘right’, or both sides (None, the default). Deprecated

  • inclusive ({"both", "neither", "left", "right"}, default "both") – Include boundaries. Whether to set each bound as closed or open.

  • **kwargs – For compatibility. Has no effect on the result.

Returns:

rng

Return type:

DatetimeIndex

Notes

Of the four parameters start, end, periods, and freq, exactly three must be specified. If freq is omitted, the resulting DatetimeIndex will have periods linearly spaced elements between start and end (closed on both sides).

To learn more about the frequency strings, please see the pandas documentation on frequency aliases.
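
Examples

A minimal sketch mirroring pd.date_range usage; display omitted:

>>> rng = ak.date_range("2021-01-01", periods=3, freq="D")  # three consecutive days as an ak.Datetime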

arkouda.deg2rad(pda: arkouda.pdarrayclass.pdarray, where: bool | arkouda.pdarrayclass.pdarray = True) arkouda.pdarrayclass.pdarray[source]

Converts angles element-wise from degrees to radians.

Parameters:
  • pda (pdarray)

  • where (Boolean or pdarray) – This condition is broadcast over the input. At locations where the condition is True, the corresponding value will be converted from degrees to radians. Elsewhere, it will retain its original value. Default set to True.

Returns:

A pdarray containing an angle converted to radians, from degrees, for each element of the original pdarray

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray
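
Examples

A minimal sketch; float display omitted:

>>> a = ak.array([0.0, 90.0, 180.0])
>>> r = ak.deg2rad(a)  # approximately [0.0, 1.5708, 3.1416]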

arkouda.disableVerbose(logLevel: LogLevel = LogLevel.INFO) None[source]

Disables verbose logging (DEBUG log level) for all ArkoudaLoggers, setting the log level for each to the logLevel parameter

Parameters:

logLevel (LogLevel) – The new log level, defaults to LogLevel.INFO

Raises:

TypeError – Raised if logLevel is not a LogLevel enum

arkouda.divmod(x: arkouda.dtypes.numeric_scalars | pdarray, y: arkouda.dtypes.numeric_scalars | pdarray, where: bool | pdarray = True) Tuple[pdarray, pdarray][source]

Return a tuple containing the element-wise quotient (x // y) and remainder (x % y) of dividing x by y.

Parameters:
  • x (numeric_scalars(float_scalars, int_scalars) or pdarray) – The dividend array, the values that will be the numerator of the floor division and will be acted on by the bases for modular division.

  • y (numeric_scalars(float_scalars, int_scalars) or pdarray) – The divisor array, the values that will be the denominator of the division and will be the bases for the modular division.

  • where (Boolean or pdarray) – This condition is broadcast over the input. At locations where the condition is True, the corresponding value will be divided using floor and modular division. Elsewhere, it will retain its original value. Default set to True.

Returns:

Returns a tuple that contains quotient and remainder of the division

Return type:

(pdarray, pdarray)

Raises:
  • TypeError – At least one entry must be a pdarray

  • ValueError – If both inputs are both pdarrays, their size must match

  • ZeroDivisionError – No entry in y is allowed to be 0, to prevent division by zero

Notes

The div is calculated by x // y and the mod by x % y.

Examples

>>> x = ak.arange(5, 10)
>>> y = ak.array([2, 1, 4, 5, 8])
>>> ak.divmod(x,y)
(array([2 6 1 1 1]), array([1 0 3 3 1]))
>>> ak.divmod(x,y, x % 2 == 0)
(array([5 6 7 1 9]), array([5 0 7 3 9]))
arkouda.dot(pda1: numpy.int64 | numpy.float64 | numpy.uint64 | pdarray, pda2: numpy.int64 | numpy.float64 | numpy.uint64 | pdarray) numpy.int64 | numpy.float64 | numpy.uint64 | pdarray[source]

Returns the sum of the elementwise product of two arrays of the same size (the dot product) or the product of a singleton element and an array.

Parameters:
  • pda1 (Union[numeric_scalars, pdarray])

  • pda2 (Union[numeric_scalars, pdarray])

Returns:

The sum of the elementwise product pda1 and pda2 or the product of a singleton element and an array.

Return type:

Union[numeric_scalars, pdarray]

Raises:

ValueError – Raised if the size of pda1 is not the same as pda2

Examples

>>> x = ak.array([2, 3])
>>> y = ak.array([4, 5])
>>> ak.dot(x,y)
23
>>> ak.dot(x,2)
array([4 6])
arkouda.dtype(x)[source]
arkouda.enableVerbose() None[source]

Enables verbose logging (DEBUG log level) for all ArkoudaLoggers
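
Examples

A minimal sketch pairing enable and disable:

>>> ak.enableVerbose()   # every ArkoudaLogger now emits DEBUG messages
>>> ak.disableVerbose()  # restore the default INFO level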

arkouda.exp(pda: arkouda.pdarrayclass.pdarray) arkouda.pdarrayclass.pdarray[source]

Return the element-wise exponential of the array.

Parameters:

pda (pdarray)

Returns:

A pdarray containing exponential values of the input array elements

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray

Examples

>>> ak.exp(ak.arange(1,5))
array([2.7182818284590451, 7.3890560989306504, 20.085536923187668, 54.598150033144236])
>>> ak.exp(ak.uniform(5,1.0,5.0))
array([11.84010843172504, 46.454368507659211, 5.5571769623557188,
       33.494295836924771, 13.478894913238722])
arkouda.expm1(pda: arkouda.pdarrayclass.pdarray) arkouda.pdarrayclass.pdarray[source]

Return the element-wise exponential of the array minus one.

Parameters:

pda (pdarray)

Returns:

A pdarray containing exponential values of the input array elements minus one

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray

Examples

>>> ak.expm1(ak.arange(1,5))
array([1.7182818284590451, 6.3890560989306504, 19.085536923187668, 53.598150033144236])
>>> ak.expm1(ak.uniform(5,1.0,5.0))
array([10.84010843172504, 45.454368507659211, 4.5571769623557188,
       32.494295836924771, 12.478894913238722])
arkouda.export(read_path: str, dataset_name: str = 'ak_data', write_file: str | None = None, return_obj: bool = True, index: bool = False)[source]

Export data from Arkouda file (Parquet/HDF5) to Pandas object or file formatted to be readable by Pandas

Parameters:
  • read_path (str) – path to file where arkouda data is stored.

  • dataset_name (str) – name to store dataset under

  • index (bool) – Default False. When True, maintain the indexes loaded from the pandas file

  • write_file (str, optional) – path to file to write pandas formatted data to. Only write the file if this is set

  • return_obj (bool, optional) – Default True. When True return the Pandas DataFrame object, otherwise return None

Raises:

RuntimeError – Raised if the file is of an unsupported type

Returns:

When return_obj=True

Return type:

pd.DataFrame

See also

pandas.DataFrame.to_parquet, pandas.DataFrame.to_hdf, pandas.DataFrame.read_parquet, pandas.DataFrame.read_hdf, ak.import_data

Notes

  • If an Arkouda file is exported for pandas, the format will not change. This means Parquet files will remain Parquet and HDF5 will remain HDF5.

  • Export can only be performed from hdf5 or parquet files written by Arkouda. The result will be the same file type, but formatted to be read by Pandas.

arkouda.find(query, space)[source]

Return indices of query items in a search list of items (-1 if not found).

Parameters:
  • query ((sequence of) array-like) – The items to search for. If multiple arrays, each “row” is an item.

  • space ((sequence of) array-like) – The set of items in which to search. Must have same shape/dtype as query.

Returns:

indices – For each item in query, its index in space or -1 if not found.

Return type:

pdarray, int64
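
Examples

A minimal sketch; expected indices shown as a comment:

>>> query = ak.array([2, 9, 5])
>>> space = ak.array([5, 3, 2, 7])
>>> idx = ak.find(query, space)  # [2, -1, 0]; 9 does not occur in space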

arkouda.float32
arkouda.float64
arkouda.float_scalars
arkouda.floor(pda: arkouda.pdarrayclass.pdarray) arkouda.pdarrayclass.pdarray[source]

Return the element-wise floor of the array.

Parameters:

pda (pdarray)

Returns:

A pdarray containing floor values of the input array elements

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray

Examples

>>> ak.floor(ak.linspace(1.1,5.5,5))
array([1, 2, 3, 4, 5])
arkouda.fmod(dividend: pdarray | arkouda.dtypes.numeric_scalars, divisor: pdarray | arkouda.dtypes.numeric_scalars) pdarray[source]

Returns the element-wise remainder of division.

It is equivalent to np.fmod; the remainder has the same sign as the dividend.

Parameters:
  • dividend (numeric scalars or pdarray) – The array being acted on by the bases for the modular division.

  • divisor (numeric scalars or pdarray) – The array that will be the bases for the modular division.

Returns:

Returns an array that contains the element-wise remainder of division.

Return type:

pdarray
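
Examples

A minimal sketch; expected remainders shown as a comment:

>>> r = ak.fmod(ak.array([5.0, -5.0, 2.5]), 3.0)  # [2.0, -2.0, 2.5]; sign follows the dividend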

arkouda.from_series(series: pandas.Series, dtype: type | str | None = None) arkouda.pdarrayclass.pdarray | arkouda.strings.Strings[source]

Converts a Pandas Series to an Arkouda pdarray or Strings object. If dtype is None, the dtype is inferred from the Pandas Series. Otherwise, the dtype parameter is set if the dtype of the Pandas Series is to be overridden or is unknown (for example, in situations where the Series dtype is object).

Parameters:
  • series (Pandas Series) – The Pandas Series with a dtype of bool, float64, int64, or string

  • dtype (Optional[type]) – The valid dtype types are np.bool, np.float64, np.int64, and np.str

Return type:

Union[pdarray,Strings]

Raises:
  • TypeError – Raised if series is not a Pandas Series object

  • ValueError – Raised if the Series dtype is not bool, float64, int64, string, datetime, or timedelta

Examples

>>> ak.from_series(pd.Series(np.random.randint(0,10,5)))
array([9, 0, 4, 7, 9])
>>> ak.from_series(pd.Series(['1', '2', '3', '4', '5']),dtype=np.int64)
array([1, 2, 3, 4, 5])
>>> ak.from_series(pd.Series(np.random.uniform(low=0.0,high=1.0,size=3)))
array([0.57600036956445599, 0.41619265571741659, 0.6615356693784662])
>>> ak.from_series(pd.Series(['0.57600036956445599', '0.41619265571741659',
...                '0.6615356693784662']), dtype=np.float64)
array([0.57600036956445599, 0.41619265571741659, 0.6615356693784662])
>>> ak.from_series(pd.Series(np.random.choice([True, False],size=5)))
array([True, False, True, True, True])
>>> ak.from_series(pd.Series(['True', 'False', 'False', 'True', 'True']), dtype=np.bool)
array([True, True, True, True, True])
>>> ak.from_series(pd.Series(['a', 'b', 'c', 'd', 'e'], dtype="string"))
array(['a', 'b', 'c', 'd', 'e'])
>>> ak.from_series(pd.Series(['a', 'b', 'c', 'd', 'e']),dtype=np.str)
array(['a', 'b', 'c', 'd', 'e'])
>>> ak.from_series(pd.Series(pd.to_datetime(['1/1/2018', np.datetime64('2018-01-01')])))
array([1514764800000000000, 1514764800000000000])

Notes

The supported datatypes are bool, float64, int64, string, and datetime64[ns]. The data type is either inferred from the Series or is set via the dtype parameter.

Series of datetime or timedelta are converted to Arkouda arrays of dtype int64 (nanoseconds)

A Pandas Series containing strings has a dtype of object. Arkouda assumes the Series contains strings and sets the dtype to str

arkouda.full(size: arkouda.dtypes.int_scalars | str, fill_value: arkouda.dtypes.numeric_scalars | str, dtype: numpy.dtype | type | str | arkouda.dtypes.BigInt = float64, max_bits: int | None = None) arkouda.pdarrayclass.pdarray | arkouda.strings.Strings[source]

Create a pdarray filled with fill_value.

Parameters:
  • size (int_scalars) – Size of the array (only rank-1 arrays supported)

  • fill_value (int_scalars) – Value with which the array will be filled

  • dtype (all_scalars) – Resulting array type, default float64

  • max_bits (int) – Specifies the maximum number of bits; only used for bigint pdarrays

Returns:

array of the requested size and dtype filled with fill_value

Return type:

pdarray or Strings

Raises:

TypeError – Raised if the supplied dtype is not supported or if the size parameter is neither an int nor a str that is parseable to an int.

See also

zeros, ones

Examples

>>> ak.full(5, 7, dtype=ak.int64)
array([7, 7, 7, 7, 7])
>>> ak.full(5, 9, dtype=ak.float64)
array([9, 9, 9, 9, 9])
>>> ak.full(5, 5, dtype=ak.bool)
array([True, True, True, True, True])
arkouda.full_like(pda: arkouda.pdarrayclass.pdarray, fill_value: arkouda.dtypes.numeric_scalars) arkouda.pdarrayclass.pdarray[source]

Create a pdarray filled with fill_value of the same size and dtype as an existing pdarray.

Parameters:
  • pda (pdarray) – Array to use for size and dtype

  • fill_value (int_scalars) – Value with which the array will be filled

Returns:

Equivalent to ak.full(pda.size, fill_value, pda.dtype)

Return type:

pdarray

Raises:

TypeError – Raised if the pda parameter is not a pdarray.

See also

ones_like, zeros_like

Notes

Logic for generating the pdarray is delegated to the ak.full method. Accordingly, the supported dtypes match those defined by the ak.full method.

Examples

>>> full = ak.full(5, 7, dtype=ak.int64)
>>> ak.full_like(full, 7)
array([7, 7, 7, 7, 7])
>>> full = ak.full(5, 9, dtype=ak.float64)
>>> ak.full_like(full, 9)
array([9, 9, 9, 9, 9])
>>> full = ak.full(5, 5, dtype=ak.bool)
>>> ak.full_like(full, 5)
array([True, True, True, True, True])
arkouda.gen_ranges(starts, ends, stride=1, return_lengths=False)[source]

Generate a segmented array of variable-length, contiguous ranges between pairs of start- and end-points.

Parameters:
  • starts (pdarray, int64) – The start value of each range

  • ends (pdarray, int64) – The end value (exclusive) of each range

  • stride (int) – Difference between successive elements of each range

  • return_lengths (bool, optional) – Whether or not to return the lengths of each segment. Default False.

Returns:

  • segments (pdarray, int64) – The starting index of each range in the resulting array

  • ranges (pdarray, int64) – The actual ranges, flattened into a single array

  • lengths (pdarray, int64) – The lengths of each segment. Only returned if return_lengths=True.
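
Examples

A minimal sketch; expected contents shown as comments:

>>> starts = ak.array([0, 5])
>>> ends = ak.array([3, 7])
>>> segments, ranges = ak.gen_ranges(starts, ends)
>>> # segments: [0, 3] (start index of each range in the flattened result)
>>> # ranges: [0, 1, 2, 5, 6] (the ranges [0,3) and [5,7), concatenated)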

arkouda.generic_concat(items, ordered=True)[source]
arkouda.getArkoudaLogger(name: str, handlers: List[logging.Handler] | None = None, logFormat: str | None = ArkoudaLogger.DEFAULT_LOG_FORMAT, logLevel: LogLevel | None = None) ArkoudaLogger[source]

A convenience method for instantiating an ArkoudaLogger that retrieves the logging level from the ARKOUDA_LOG_LEVEL env variable

Parameters:
  • name (str) – The name of the ArkoudaLogger

  • handlers (List[Handler]) – A list of logging.Handler objects, if None, a list consisting of one StreamHandler named ‘console-handler’ is generated and configured

  • logFormat (str) – The format for log messages, defaults to the following format: ‘[%(name)s] Line %(lineno)d %(levelname)s: %(message)s’

Return type:

ArkoudaLogger

Raises:

TypeError – Raised if either name or logFormat is not a str object or if handlers is not a list of logging.Handler objects

Notes

Important note: if a list of 1..n logging.Handler objects is passed in, and dynamic changes to 1..n handlers are desired, set a name for each Handler object as follows: handler.name = <desired name>, which will enable retrieval and updates for the specified handler.

arkouda.get_byteorder(dt: numpy.dtype) str[source]

Get a concrete byteorder (turns ‘=’ into ‘<’ or ‘>’)
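
Examples

A minimal sketch; the second result assumes little-endian client hardware:

>>> import numpy as np
>>> ak.get_byteorder(np.dtype('>i8'))  # explicitly big-endian dtype
'>'
>>> ak.get_byteorder(np.dtype(np.int64))  # native ('=') resolved to a concrete order
'<'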

arkouda.get_callback(x)[source]
arkouda.get_columns(filenames: str | List[str], col_delim: str = ',', allow_errors: bool = False) List[str][source]

Get a list of column names from CSV file(s).

arkouda.get_datasets(filenames: str | List[str], allow_errors: bool = False, column_delim: str = ',', read_nested: bool = True) List[str][source]

Get the names of the datasets in the provided files

Parameters:
  • filenames (str or List[str]) – Name of the file/s from which to return datasets

  • allow_errors (bool) – Default: False Whether or not to allow errors while accessing datasets

  • column_delim (str) – Column delimiter to be used if dataset is CSV. Otherwise, unused.

  • read_nested (bool) – Default True, when True, SegArray objects will be read from the file. When False, SegArray (or other nested Parquet columns) will be ignored. Only used for Parquet Files.

Return type:

List[str] of names of the datasets

Raises:

RuntimeError

  • If no datasets are returned

Notes

  • This function currently supports HDF5 and Parquet formats.

  • Future updates to Parquet will deprecate this functionality on that format, but similar support will be added for Parquet at that time.

  • If a list of files is provided, only the datasets in the first file will be returned

See also

ls

arkouda.get_filetype(filenames: str | List[str]) str[source]

Get the type of a file accessible to the server. Supported file types and possible return strings are ‘HDF5’, ‘Parquet’, and ‘CSV’.

Parameters:

filenames (Union[str, List[str]]) – A file or list of files visible to the arkouda server

Returns:

Type of the file returned as a string, either ‘HDF5’, ‘Parquet’, or ‘CSV’

Return type:

str

Raises:

ValueError – Raised if filename is empty or contains only whitespace

Notes

  • When list provided, it is assumed that all files are the same type

  • CSV Files without the Arkouda Header are not supported
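
Examples

A minimal sketch with a hypothetical, server-visible path:

>>> ak.get_filetype("/data/example_*.parquet")
'Parquet'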

arkouda.get_null_indices(filenames: str | List[str], datasets: str | List[str] | None = None) arkouda.pdarrayclass.pdarray | Mapping[str, arkouda.pdarrayclass.pdarray][source]

Get null indices of a string column in a Parquet file.

Parameters:
  • filenames (list or str) – Either a list of filenames or shell expression

  • datasets (list or str or None) – (List of) name(s) of dataset(s) to read. Each dataset must be a string column. There is no default value for this function, the datasets to be read must be specified.

Returns:

  • For a single dataset, an Arkouda pdarray is returned.

  • For multiple datasets, a dictionary of Arkouda pdarrays is returned: {datasetName: pdarray}.

Raises:
  • RuntimeError – Raised if one or more of the specified files cannot be opened.

  • TypeError – Raised if we receive an unknown arkouda_type returned from the server

See also

get_datasets, ls

arkouda.get_server_byteorder() str[source]

Get the server’s byteorder

arkouda.hash(pda: arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | SegArray | Categorical | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | SegArray | Categorical], full: bool = True) Tuple[arkouda.pdarrayclass.pdarray, arkouda.pdarrayclass.pdarray] | arkouda.pdarrayclass.pdarray[source]

Return an element-wise hash of the array or list of arrays.

Parameters:
  • pda (Union[pdarray, Strings, SegArray, Categorical] or List[Union[pdarray, Strings, SegArray, Categorical]]) – The array or list of arrays to hash

  • full (bool) – This is only used when a single pdarray is passed into hash. By default, a 128-bit hash is computed and returned as two int64 arrays. If full=False, then a 64-bit hash is computed and returned as a single int64 array.

Returns:

If full=True or a list of pdarrays is passed, a 2-tuple of pdarrays containing the high and low 64 bits of each hash, respectively. If full=False and a single pdarray is passed, a single pdarray containing a 64-bit hash

Return type:

hashes

Raises:

TypeError – Raised if the parameter is not a pdarray

Notes

In the case of a single pdarray being passed, this function uses the SIPhash algorithm, which can output either a 64-bit or 128-bit hash. However, the 64-bit hash runs a significant risk of collisions when applied to more than a few million unique values. Unless the number of unique values is known to be small, the 128-bit hash is strongly recommended.

Note that this hash should not be used for security, or for any cryptographic application. Not only is SIPhash not intended for such uses, but this implementation employs a fixed key for the hash, which makes it possible for an adversary with control over input to engineer collisions.

In the case of a list of pdarrays, Strings, Categoricals, or SegArrays being passed, a non-linear function must be applied to each array, since hashes of subsequent arrays cannot simply be XORed together (equivalent values would cancel each other out). Hence we rotate each hash by the ordinal of its array.
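
Examples

A minimal sketch of both hash modes:

>>> a = ak.array([1, 2, 3])
>>> upper, lower = ak.hash(a)        # 128-bit hash: two int64 pdarrays
>>> h64 = ak.hash(a, full=False)     # 64-bit hash: a single int64 pdarray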

arkouda.hist_all(ak_df: arkouda.dataframe.DataFrame, cols: list = [])[source]

Create a grid plot histogramming all numeric columns in an Arkouda DataFrame.

Parameters:
  • ak_df (ak.DataFrame) – Full Arkouda DataFrame containing data to be visualized

  • cols (list) – (Optional) A specified list of columns to be plotted

Notes

This function displays the plot.

Examples

>>> import arkouda as ak
>>> import numpy as np
>>> from arkouda.plotting import hist_all
>>> ak_df = ak.DataFrame({"a": ak.array(np.random.randn(100)),
                          "b": ak.array(np.random.randn(100)),
                          "c": ak.array(np.random.randn(100)),
                          "d": ak.array(np.random.randn(100))
                          })
>>> hist_all(ak_df)
arkouda.histogram(pda: arkouda.pdarrayclass.pdarray, bins: arkouda.dtypes.int_scalars = 10) Tuple[arkouda.pdarrayclass.pdarray, arkouda.pdarrayclass.pdarray][source]

Compute a histogram of evenly spaced bins over the range of an array.

Parameters:
  • pda (pdarray) – The values to histogram

  • bins (int_scalars) – The number of equal-size bins to use (default: 10)

Returns:

The bin edges and the number of values present in each bin

Return type:

(pdarray, Union[pdarray, int64 or float64])

Raises:
  • TypeError – Raised if the parameter is not a pdarray or if bins is not an int.

  • ValueError – Raised if bins < 1

  • NotImplementedError – Raised if pdarray dtype is bool or uint8

Notes

The bins are evenly spaced in the interval [pda.min(), pda.max()].

Examples

>>> import matplotlib.pyplot as plt
>>> A = ak.arange(0, 10, 1)
>>> nbins = 3
>>> h, b = ak.histogram(A, bins=nbins)
>>> h
array([3, 3, 4])
>>> b
array([0., 3., 6., 9.])

# To plot, export the left edges and the histogram to NumPy
>>> plt.plot(b.to_ndarray()[:-1], h.to_ndarray())

arkouda.histogram2d(x: arkouda.pdarrayclass.pdarray, y: arkouda.pdarrayclass.pdarray, bins: arkouda.dtypes.int_scalars | Sequence[arkouda.dtypes.int_scalars] = 10) Tuple[arkouda.pdarrayclass.pdarray, arkouda.pdarrayclass.pdarray, arkouda.pdarrayclass.pdarray][source]

Compute the bi-dimensional histogram of two data samples with evenly spaced bins

Parameters:
  • x (pdarray) – A pdarray containing the x coordinates of the points to be histogrammed.

  • y (pdarray) – A pdarray containing the y coordinates of the points to be histogrammed.

  • bins (int_scalars or [int, int] = 10) – The number of equal-size bins to use. If int, the number of bins for the two dimensions (nx=ny=bins). If [int, int], the number of bins in each dimension (nx, ny = bins). Defaults to 10

Returns:

  • hist (ArrayView, shape(nx, ny)) – The bi-dimensional histogram of samples x and y. Values in x are histogrammed along the first dimension and values in y are histogrammed along the second dimension.

  • x_edges (pdarray) – The bin edges along the first dimension.

  • y_edges (pdarray) – The bin edges along the second dimension.

Raises:
  • TypeError – Raised if x or y parameters are not pdarrays or if bins is not an int or (int, int).

  • ValueError – Raised if bins < 1

  • NotImplementedError – Raised if pdarray dtype is bool or uint8

See also

histogram

Notes

The x bins are evenly spaced in the interval [x.min(), x.max()] and y bins are evenly spaced in the interval [y.min(), y.max()].

Examples

>>> x = ak.arange(0, 10, 1)
>>> y = ak.arange(9, -1, -1)
>>> nbins = 3
>>> h, x_edges, y_edges = ak.histogram2d(x, y, bins=nbins)
>>> h
array([[0, 0, 3],
       [0, 2, 1],
       [3, 1, 0]])
>>> x_edges
array([0.0 3.0 6.0 9.0])
>>> y_edges
array([0.0 3.0 6.0 9.0])
arkouda.histogramdd(sample: Sequence[arkouda.pdarrayclass.pdarray], bins: arkouda.dtypes.int_scalars | Sequence[arkouda.dtypes.int_scalars] = 10) Tuple[arkouda.pdarrayclass.pdarray, Sequence[arkouda.pdarrayclass.pdarray]][source]

Compute the multidimensional histogram of data in sample with evenly spaced bins.

Parameters:
  • sample (Sequence[pdarray]) – A sequence of pdarrays containing the coordinates of the points to be histogrammed.

  • bins (int_scalars or Sequence[int_scalars] = 10) – The number of equal-size bins to use. If int, the number of bins for all dimensions (nx=ny=…=bins). If [int, int, …], the number of bins in each dimension (nx, ny, … = bins). Defaults to 10

Returns:

  • hist (ArrayView, shape(nx, ny, …, nd)) – The multidimensional histogram of pdarrays in sample. Values in first pdarray are histogrammed along the first dimension. Values in second pdarray are histogrammed along the second dimension and so on.

  • edges (List[pdarray]) – A list of pdarrays containing the bin edges for each dimension.

Raises:
  • ValueError – Raised if bins < 1

  • NotImplementedError – Raised if pdarray dtype is bool or uint8

See also

histogram

Notes

The bins for each dimension, m, are evenly spaced in the interval [m.min(), m.max()]

Examples

>>> x = ak.arange(0, 10, 1)
>>> y = ak.arange(9, -1, -1)
>>> z = ak.where(x % 2 == 0, x, y)
>>> h, edges = ak.histogramdd((x, y,z), bins=(2,2,5))
>>> h
array([[[0, 0, 0, 0, 0],
        [1, 1, 1, 1, 1]],

       [[1, 1, 1, 1, 1],
        [0, 0, 0, 0, 0]]])

>>> edges
[array([0.0 4.5 9.0]),
 array([0.0 4.5 9.0]),
 array([0.0 1.6 3.2 4.8 6.4 8.0])]
arkouda.import_data(read_path: str, write_file: str | None = None, return_obj: bool = True, index: bool = False)[source]

Import data from a file saved by Pandas (HDF5/Parquet) to Arkouda object and/or a file formatted to be read by Arkouda.

Parameters:
  • read_path (str) – path to file where pandas data is stored. This can be a glob expression for Parquet formats.

  • write_file (str, optional) – path to file to write arkouda formatted data to. Only write file if provided

  • return_obj (bool, optional) – Default True. When True return the Arkouda DataFrame object, otherwise return None

  • index (bool, optional) – Default False. When True, maintain the indexes loaded from the pandas file

Raises:
  • RuntimeWarning

    • Export attempted on Parquet file. Arkouda formatted Parquet files are readable by pandas.

  • RuntimeError

    • Unsupported file type

Returns:

When return_obj=True

Return type:

ak.DataFrame

See also

pandas.DataFrame.to_parquet, pandas.DataFrame.to_hdf, pandas.read_parquet, pandas.read_hdf, ak.export

Notes

  • Import can only be performed from hdf5 or parquet files written by pandas.

arkouda.in1d(pda1: arkouda.groupbyclass.groupable, pda2: arkouda.groupbyclass.groupable, assume_unique: bool = False, symmetric: bool = False, invert: bool = False) arkouda.pdarrayclass.pdarray | arkouda.groupbyclass.groupable[source]

Test whether each element of a 1-D array is also present in a second array.

Returns a boolean array the same length as pda1 that is True where an element of pda1 is in pda2 and False otherwise.

Support multi-level – test membership of rows of a in the set of rows of b.

Parameters:
  • pda1 (list of pdarrays, pdarray, Strings, or Categorical) – Rows are elements for which to test membership in pda2

  • pda2 (list of pdarrays, pdarray, Strings, or Categorical) – Rows are elements of the set in which to test membership

  • assume_unique (bool) – If true, assume rows of pda1 and pda2 are each unique and sorted. By default, sort and unique them explicitly.

  • symmetric (bool) – If True, return in1d(pda1, pda2), in1d(pda2, pda1) when pda1 and pda2 are single items.

  • invert (bool, optional) – If True, the values in the returned array are inverted (that is, False where an element of pda1 is in pda2 and True otherwise). Default is False. ak.in1d(a, b, invert=True) is equivalent to (but faster than) ~ak.in1d(a, b).

Returns:

True for each row in pda1 that is contained in pda2

Return type:

pdarray, bool

Notes

Only works for pdarrays of int64 dtype, float64, Strings, or Categorical
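
Examples

A minimal sketch (exact print formatting may vary):

>>> a = ak.array([0, 1, 2, 5])
>>> b = ak.array([1, 2, 3])
>>> ak.in1d(a, b)
array([False True True False])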

arkouda.in1d_intervals(vals, intervals, symmetric=False)[source]

Test each value for membership in any of a set of half-open (pythonic) intervals.

Parameters:
  • vals (pdarray(int, float)) – Values to test for membership in intervals

  • intervals (2-tuple of pdarrays) – Non-overlapping, half-open intervals, as a tuple of (lower_bounds_inclusive, upper_bounds_exclusive)

  • symmetric (bool) – If True, also return boolean pdarray indicating which intervals contained one or more query values.

Returns:

  • pdarray(bool) – Array of same length as <vals>, True if corresponding value is included in any of the ranges defined by (low[i], high[i]) inclusive.

  • pdarray(bool) (if symmetric=True) – Array of same length as number of intervals, True if corresponding interval contains any of the values in <vals>.

Notes

First return array is equivalent to the following:

((vals >= intervals[0][0]) & (vals < intervals[1][0])) | ((vals >= intervals[0][1]) & (vals < intervals[1][1])) | … ((vals >= intervals[0][-1]) & (vals < intervals[1][-1]))

But much faster when testing many ranges.

Second (optional) return array is equivalent to:

((intervals[0] <= vals[0]) & (intervals[1] > vals[0])) | ((intervals[0] <= vals[1]) & (intervals[1] > vals[1])) | … ((intervals[0] <= vals[-1]) & (intervals[1] > vals[-1]))

But much faster when vals is non-trivial size.
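
Examples

A minimal sketch with two half-open intervals, [0, 2) and [8, 12):

>>> vals = ak.array([0, 5, 10])
>>> intervals = (ak.array([0, 8]), ak.array([2, 12]))
>>> ak.in1d_intervals(vals, intervals)
array([True False True])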

arkouda.indexof1d(keys: arkouda.groupbyclass.groupable, arr: arkouda.groupbyclass.groupable) arkouda.pdarrayclass.pdarray | arkouda.groupbyclass.groupable[source]

Returns an integer array of the index values where the values of the first array appear in the second.

Parameters:
  • keys (pdarray, Strings, or Categorical) – The values whose indices are to be found

  • arr (pdarray, Strings, or Categorical) – The array in which to search for the values of keys

Returns:

The indices of the values of keys in arr.

Return type:

pdarray, int

Raises:
  • TypeError – Raised if either keys or arr is not a pdarray, Strings, or Categorical object

  • RuntimeError – Raised if the dtype of either array is not supported
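
Examples

A minimal usage sketch; no output is shown since the exact result format depends on the server:

>>> keys = ak.array([1, 3])
>>> arr = ak.array([0, 1, 2, 3])
>>> idx = ak.indexof1d(keys, arr)  # indices in arr where each key occurs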

arkouda.information(names: List[str] | str = RegisteredSymbols) str[source]

Returns JSON formatted string containing information about the objects in names

Parameters:

names (Union[List[str], str]) – Either the name of an object or a list of names of objects for which to retrieve info. If names is ak.AllSymbols, retrieves info for all symbols in the symbol table; if names is ak.RegisteredSymbols, retrieves info for all symbols in the registry.

Returns:

JSON formatted string containing a list of information for each object in names

Return type:

str

Raises:

RuntimeError – Raised if a server-side error is thrown in the process of retrieving information about the objects in names

arkouda.int16
arkouda.int32
arkouda.int64
arkouda.int8
arkouda.intTypes
arkouda.int_scalars
arkouda.intersect(a, b, positions=True, unique=False)[source]

Find the intersection of two arkouda arrays.

This function can be especially useful when positions=True so that the caller gets the indices of values present in both arrays.

Parameters:
  • a (Strings or pdarray) – An array of strings.

  • b (Strings or pdarray) – An array of strings.

  • positions (bool, default=True) – Return tuple of boolean pdarrays that indicate positions in a and b of the intersection values.

  • unique (bool, default=False) – If the number of distinct values in a (and b) is equal to the size of a (and b), there is a more efficient method to compute the intersection.

Returns:

The indices of a and b where any element occurs at least once in both arrays.

Return type:

(arkouda.pdarrayclass.pdarray, arkouda.pdarrayclass.pdarray) or arkouda.pdarrayclass.pdarray

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> a = ak.arange(10)
>>> print(a)
[0 1 2 3 4 5 6 7 8 9]
>>> b = 2 * ak.arange(10)
>>> print(b)
[0 2 4 6 8 10 12 14 16 18]
>>> ak.intersect(a,b, positions=True)
(array([True False True False True False True False True False]),
array([True True True True True False False False False False]))
>>> ak.intersect(a,b, positions=False)
array([0 2 4 6 8])
arkouda.intersect1d(pda1: arkouda.groupbyclass.groupable, pda2: arkouda.groupbyclass.groupable, assume_unique: bool = False) arkouda.pdarrayclass.pdarray | arkouda.groupbyclass.groupable[source]

Find the intersection of two arrays.

Return the sorted, unique values that are in both of the input arrays.

Parameters:
  • pda1 (pdarray/Sequence[pdarray, Strings, Categorical]) – Input array/Sequence of groupable objects

  • pda2 (pdarray/List) – Input array/sequence of groupable objects

  • assume_unique (bool) – If True, the input arrays are both assumed to be unique, which can speed up the calculation. Default is False.

Returns:

Sorted 1D array/List of sorted pdarrays of common and unique elements.

Return type:

pdarray/groupable

Raises:
  • TypeError – Raised if either pda1 or pda2 is not a pdarray

  • RuntimeError – Raised if the dtype of either pdarray is not supported

Notes

ak.intersect1d is not supported for bool or float64 pdarrays

Examples

# 1D Example
>>> ak.intersect1d([1, 3, 4, 3], [3, 1, 2, 1])
array([1, 3])
# Multi-Array Example
>>> a = ak.arange(5)
>>> b = ak.array([1, 5, 3, 4, 2])
>>> c = ak.array([1, 4, 3, 2, 5])
>>> d = ak.array([1, 2, 3, 5, 4])
>>> multia = [a, a, a]
>>> multib = [b, c, d]
>>> ak.intersect1d(multia, multib)
[array([1, 3]), array([1, 3]), array([1, 3])]
arkouda.interval_lookup(keys, values, arguments, fillvalue=-1, tiebreak=None, hierarchical=False)[source]

Apply a function defined over intervals to an array of arguments.

Parameters:
  • keys (2-tuple of (sequences of) pdarrays) – Tuple of closed intervals expressed as (lower_bounds_inclusive, upper_bounds_inclusive). Must have same dtype(s) as vals.

  • values (pdarray) – Function value to return for each entry in keys.

  • arguments ((sequences of) pdarray) – Values to search for in intervals. If multiple arrays, each “row” is an item.

  • fillvalue (scalar) – Default value to return when argument is not in any interval.

  • tiebreak ((optional) pdarray, numeric) – When an argument is present in more than one key interval, the interval with the lowest tiebreak value will be chosen. If no tiebreak is given, the first valid key interval will be chosen.

Returns:

Value of function corresponding to the keys interval containing each argument, or fillvalue if argument not in any interval.

Return type:

pdarray
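
Examples

A minimal sketch with two closed intervals, [0, 5] and [10, 15], mapping to values 100 and 200 (exact print formatting may vary):

>>> keys = (ak.array([0, 10]), ak.array([5, 15]))
>>> values = ak.array([100, 200])
>>> ak.interval_lookup(keys, values, ak.array([3, 7, 12]))
array([100 -1 200])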

arkouda.intx(a, b)[source]

Find all the rows that are in both dataframes. Columns should be in identical order.

Note: this does not work for columns of floating point values, but it does work for Strings, pdarrays of int64 type, and Categorical.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> a = ak.DataFrame({'a':ak.arange(5),'b': 2* ak.arange(5)})
>>> display(a)

   a  b
0  0  0
1  1  2
2  2  4
3  3  6
4  4  8

>>> b = ak.DataFrame({'a':ak.arange(5),'b':ak.array([0,3,4,7,8])})
>>> display(b)

   a  b
0  0  0
1  1  3
2  2  4
3  3  7
4  4  8

>>> intersect_df = a[intx(a,b)]
>>> display(intersect_df)

   a  b
0  0  0
1  2  4
2  4  8

arkouda.invert_permutation(perm)[source]

Find the inverse of a permutation array.

Parameters:

perm (pdarray) – The permutation array.

Returns:

The inverse of the permutation array.

Return type:

arkouda.pdarrayclass.pdarray

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> from arkouda.index import Index
>>> i = Index(ak.array([1,2,0,5,4]))
>>> perm = i.argsort()
>>> print(perm)
[2 0 1 4 3]
>>> invert_permutation(perm)
array([1 2 0 4 3])
arkouda.ip_address(values)[source]

Convert values to an Arkouda array of IP addresses.

Parameters:

values (list-like, integer pdarray, or IPv4) – The integer IP addresses or IPv4 object.

Returns:

The same IP addresses as an Arkouda array

Return type:

IPv4

Notes

This helper is intended to help future-proof changes made to accommodate IPv6 and to prevent errors if a user inadvertently casts an IPv4 instead of an int64 pdarray. It can also be used for importing Python lists of IP addresses into Arkouda.
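
Examples

A minimal sketch (3232235777 is the integer form of 192.168.1.1):

>>> ips = ak.ip_address([3232235777])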

arkouda.isSupportedInt(num)[source]
arkouda.isSupportedNumber(num)[source]
arkouda.is_cosorted(arrays)[source]

Return True iff the arrays are cosorted, i.e., if the arrays were columns in a table then the rows are sorted.

Parameters:

arrays (list-like of pdarrays) – Arrays to check for cosortedness

Returns:

True iff arrays are cosorted.

Return type:

bool

Raises:
  • ValueError – Raised if arrays are not the same length

  • TypeError – Raised if arrays is not a list-like of pdarrays
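
Examples

A minimal sketch; the rows (0,0), (0,1), (1,0), (1,1) are in sorted order:

>>> a = ak.array([0, 0, 1, 1])
>>> b = ak.array([0, 1, 0, 1])
>>> ak.is_cosorted([a, b])
True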

arkouda.is_ipv4(ip: arkouda.pdarrayclass.pdarray | IPv4, ip2: arkouda.pdarrayclass.pdarray | None = None) arkouda.pdarrayclass.pdarray[source]

Indicate which values are ipv4 when passed data containing IPv4 and IPv6 values.

Parameters:
  • ip (pdarray (int64) or ak.IPv4) – IPv4 value, or the high bits of IPv6 if IPv6 is passed in.

  • ip2 (pdarray (int64), Optional) – Low bits of IPv6. This is added for support when dealing with data that contains IPv6 as well.

Return type:

pdarray of bools indicating which indexes are IPv4.

See also

ak.is_ipv6

arkouda.is_ipv6(ip: arkouda.pdarrayclass.pdarray | IPv4, ip2: arkouda.pdarrayclass.pdarray | None = None) arkouda.pdarrayclass.pdarray[source]

Indicate which values are ipv6 when passed data containing IPv4 and IPv6 values.

Parameters:
  • ip (pdarray (int64) or ak.IPv4) – High bits of IPv6.

  • ip2 (pdarray (int64), Optional) – Low bits of IPv6.

Return type:

pdarray of bools indicating which indexes are IPv6.

See also

ak.is_ipv4
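
Examples

A minimal sketch (3232235777 is the integer form of 192.168.1.1; exact print formatting may vary):

>>> ak.is_ipv4(ak.array([3232235777]))
array([True])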

arkouda.is_registered(name: str, as_component: bool = False) bool[source]

Determine if the name provided is associated with a registered Object

Parameters:
  • name (str) – The name to check for in the registry

  • as_component (bool) – Default: False When True, the name will be checked to determine if it is registered as a component of a registered object

Return type:

bool
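
Examples

A minimal sketch:

>>> a = ak.zeros(10)
>>> a = a.register('my_array')
>>> ak.is_registered('my_array')
True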

arkouda.is_sorted(pda: pdarray) numpy.bool_[source]

Return True iff the array is monotonically non-decreasing.

Parameters:

pda (pdarray) – The pdarray instance to be evaluated

Returns:

Indicates if the array is monotonically non-decreasing

Return type:

bool

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown
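
Examples

A minimal sketch:

>>> ak.is_sorted(ak.arange(5))
True
>>> ak.is_sorted(ak.array([2, 0, 1]))
False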

arkouda.isfinite(pda: arkouda.pdarrayclass.pdarray) arkouda.pdarrayclass.pdarray[source]

Return the element-wise isfinite check applied to the array.

Parameters:

pda (pdarray)

Returns:

A pdarray containing boolean values indicating whether the input array elements are finite

Return type:

pdarray

Raises:
  • TypeError – Raised if the parameter is not a pdarray

  • RuntimeError – if the underlying pdarray is not float-based

Examples

>>> ak.isfinite(ak.array([1.0, 2.0, ak.inf]))
array([True, True, False])
arkouda.isinf(pda: arkouda.pdarrayclass.pdarray) arkouda.pdarrayclass.pdarray[source]

Return the element-wise isinf check applied to the array.

Parameters:

pda (pdarray)

Returns:

A pdarray containing boolean values indicating whether the input array elements are infinite

Return type:

pdarray

Raises:
  • TypeError – Raised if the parameter is not a pdarray

  • RuntimeError – if the underlying pdarray is not float-based

Examples

>>> ak.isinf(ak.array([1.0, 2.0, ak.inf]))
array([False, False, True])
arkouda.isnan(pda: arkouda.pdarrayclass.pdarray) arkouda.pdarrayclass.pdarray[source]

Return the element-wise isnan check applied to the array.

Parameters:

pda (pdarray)

Returns:

A pdarray containing boolean values indicating whether the input array elements are NaN

Return type:

pdarray

Raises:
  • TypeError – Raised if the parameter is not a pdarray

  • RuntimeError – if the underlying pdarray is not float-based

Examples

>>> ak.isnan(ak.array([1.0, 2.0, float('nan')]))
array([False, False, True])
arkouda.join_on_eq_with_dt(a1: arkouda.pdarrayclass.pdarray, a2: arkouda.pdarrayclass.pdarray, t1: arkouda.pdarrayclass.pdarray, t2: arkouda.pdarrayclass.pdarray, dt: int | numpy.int64, pred: str, result_limit: int | numpy.int64 = 1000) Tuple[arkouda.pdarrayclass.pdarray, arkouda.pdarrayclass.pdarray][source]

Performs an inner-join on equality between two integer arrays where the time-window predicate is also true

Parameters:
  • a1 (pdarray, int64) – pdarray to be joined

  • a2 (pdarray, int64) – pdarray to be joined

  • t1 (pdarray) – timestamps in millis corresponding to the a1 pdarray

  • t2 (pdarray) – timestamps in millis corresponding to the a2 pdarray

  • dt (Union[int,np.int64]) – time delta

  • pred (str) – time window predicate

  • result_limit (Union[int,np.int64]) – size limit for returned result

Returns:

  • result_array_one (pdarray, int64) – a1 indices where a1 == a2

  • result_array_two (pdarray, int64) – a2 indices where a2 == a1

Raises:
  • TypeError – Raised if a1, a2, t1, or t2 is not a pdarray, or if dt or result_limit is not an int

  • ValueError – if a1, a2, t1, or t2 dtype is not int64, pred is not ‘true_dt’, ‘abs_dt’, or ‘pos_dt’, or result_limit is < 0
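
Examples

A minimal usage sketch; no output is shown because the result depends on the chosen predicate:

>>> a1 = ak.array([0, 1, 2])
>>> a2 = ak.array([0, 1, 2])
>>> t1 = ak.array([0, 10, 20])
>>> t2 = ak.array([5, 100, 20])
>>> I, J = ak.join_on_eq_with_dt(a1, a2, t1, t2, dt=8, pred='abs_dt')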

arkouda.left_align(left, right)[source]

Map two arrays of sparse identifiers to the 0-up index set implied by the left array, discarding values from right that do not appear in left.

arkouda.linspace(start: arkouda.dtypes.numeric_scalars, stop: arkouda.dtypes.numeric_scalars, length: arkouda.dtypes.int_scalars) arkouda.pdarrayclass.pdarray[source]

Create a pdarray of linearly-spaced floats in a closed interval.

Parameters:
  • start (numeric_scalars) – Start of interval (inclusive)

  • stop (numeric_scalars) – End of interval (inclusive)

  • length (int_scalars) – Number of points

Returns:

Array of evenly spaced float values along the interval

Return type:

pdarray, float64

Raises:

TypeError – Raised if start or stop is not a float or int or if length is not an int

See also

arange

Notes

If start is greater than stop, the pdarray values are generated in descending order.

Examples

>>> ak.linspace(0, 1, 5)
array([0, 0.25, 0.5, 0.75, 1])
>>> ak.linspace(start=1, stop=0, length=5)
array([1, 0.75, 0.5, 0.25, 0])
>>> ak.linspace(start=-5, stop=0, length=5)
array([-5, -3.75, -2.5, -1.25, 0])
arkouda.list_registry(detailed: bool = False)[source]

Return a list containing the names of all registered objects

Parameters:

detailed (bool) – Default = False Return details of registry objects. Currently includes object type for any objects

Returns:

Dict containing keys “Components” and “Objects”.

Return type:

dict

Raises:

RuntimeError – Raised if there’s a server-side error thrown

arkouda.list_symbol_table() List[str][source]

Return a list containing the names of all objects in the symbol table

Parameters:

None

Returns:

List of all object names in the symbol table

Return type:

list

Raises:

RuntimeError – Raised if there’s a server-side error thrown

arkouda.load(path_prefix: str, file_format: str = 'INFER', dataset: str = 'array', calc_string_offsets: bool = False, column_delim: str = ',') arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.segarray.SegArray | arkouda.array_view.ArrayView | arkouda.categorical.Categorical | arkouda.dataframe.DataFrame | arkouda.client_dtypes.IPv4 | arkouda.timeclass.Datetime | arkouda.timeclass.Timedelta | arkouda.index.Index | Mapping[str, arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.segarray.SegArray | arkouda.array_view.ArrayView | arkouda.categorical.Categorical | arkouda.dataframe.DataFrame | arkouda.client_dtypes.IPv4 | arkouda.timeclass.Datetime | arkouda.timeclass.Timedelta | arkouda.index.Index][source]

Load a pdarray previously saved with pdarray.save().

Parameters:
  • path_prefix (str) – Filename prefix used to save the original pdarray

  • file_format (str) – ‘INFER’, ‘HDF5’ or ‘Parquet’. Defaults to ‘INFER’. Used to indicate the file type being loaded. If INFER, this will be detected during processing

  • dataset (str) – Dataset name where the pdarray was saved, defaults to ‘array’

  • calc_string_offsets (bool) – If True the server will ignore Segmented Strings ‘offsets’ array and derive it from the null-byte terminators. Defaults to False currently

  • column_delim (str) – Column delimiter to be used if dataset is CSV. Otherwise, unused.

Returns:

The pdarray or Strings that was previously saved

Return type:

Union[pdarray, Strings]

Raises:
  • TypeError – Raised if either path_prefix or dataset is not a str

  • ValueError – Raised if invalid file_format or if the dataset is not present in all hdf5 files or if the path_prefix does not correspond to files accessible to Arkouda

  • RuntimeError – Raised if the hdf5 files are present but there is an error in opening one or more of them

Notes

If you have a previously saved Parquet file that is raising a FileNotFound error, try loading it with a .parquet appended to the prefix_path. Parquet files were previously ALWAYS stored with a .parquet extension.

ak.load does not support loading a single file. For loading single HDF5 files without the _LOCALE#### suffix please use ak.read().

CSV files without the Arkouda Header are not supported.

Examples

>>> # Loading from file without extension
>>> obj = ak.load('path/prefix')
Loads the array from numLocales files with the name ``cwd/path/name_prefix_LOCALE####``.
The file type is inferred during processing.
>>> # Loading with an extension (HDF5)
>>> obj = ak.load('path/prefix.test')
Loads the object from numLocales files with the name ``cwd/path/name_prefix_LOCALE####.test``,
where #### is replaced by each locale's number. Because the file type is inferred during
processing, the extension is not required to be a specific format.
arkouda.load_all(path_prefix: str, file_format: str = 'INFER', column_delim: str = ',', read_nested=True) Mapping[str, arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.segarray.SegArray | arkouda.categorical.Categorical][source]

Load multiple pdarrays, Strings, SegArrays, or Categoricals previously saved with save_all().

Parameters:
  • path_prefix (str) – Filename prefix used to save the original pdarray

  • file_format (str) – ‘INFER’, ‘HDF5’, ‘Parquet’, or ‘CSV’. Defaults to ‘INFER’. Indicates the format being loaded; when ‘INFER’, the format is detected during processing.

  • column_delim (str) – Column delimiter to be used if dataset is CSV. Otherwise, unused.

  • read_nested (bool) – Default True, when True, SegArray objects will be read from the file. When False, SegArray (or other nested Parquet columns) will be ignored. Parquet files only

Returns:

Dictionary of {datasetName: Union[pdarray, Strings, SegArray, Categorical]} with the previously saved pdarrays, Strings, SegArrays, or Categoricals

Return type:

Mapping[str, Union[pdarray, Strings, SegArray, Categorical]]

Raises:
  • TypeError – Raised if path_prefix is not a str

  • ValueError – Raised if file_format/extension is encountered that is not hdf5 or parquet or if all datasets are not present in all hdf5/parquet files or if the path_prefix does not correspond to files accessible to Arkouda

  • RuntimeError – Raised if the hdf5 files are present but there is an error in opening one or more of them

See also

to_parquet, to_hdf, load, read

Notes

This function has been updated to determine the file extension based on the file format variable

This function will be deprecated when glob flags are added to read_* methods

CSV files without the Arkouda Header are not supported.

arkouda.log(pda: arkouda.pdarrayclass.pdarray) arkouda.pdarrayclass.pdarray[source]

Return the element-wise natural log of the array.

Parameters:

pda (pdarray)

Returns:

A pdarray containing natural log values of the input array elements

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray

Notes

Logarithms with other bases can be computed as follows:

Examples

>>> import numpy as np
>>> A = ak.array([1, 10, 100])
# Natural log
>>> ak.log(A)
array([0, 2.3025850929940459, 4.6051701859880918])
# Log base 10
>>> ak.log(A) / np.log(10)
array([0, 1, 2])
# Log base 2
>>> ak.log(A) / np.log(2)
array([0, 3.3219280948873626, 6.6438561897747253])
arkouda.log10(x: arkouda.pdarrayclass.pdarray) arkouda.pdarrayclass.pdarray[source]

Return the element-wise base 10 log of the array.

Parameters:

x (pdarray) – array to compute on

Return type:

pdarray containing the base 10 log values of the input array elements

arkouda.log1p(x: arkouda.pdarrayclass.pdarray) arkouda.pdarrayclass.pdarray[source]

Return the element-wise natural log of one plus the array.

Parameters:

x (pdarray) – array to compute on

Return type:

pdarray containing the natural log of one plus the input array elements

arkouda.log2(x: arkouda.pdarrayclass.pdarray) arkouda.pdarrayclass.pdarray[source]

Return the element-wise base 2 log of the array.

Parameters:

x (pdarray) – array to compute on

Return type:

pdarray containing the base 2 log values of the input array elements
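
Examples

Minimal sketches for the base-specific logs (exact print formatting may vary):

>>> ak.log10(ak.array([1.0, 10.0, 100.0]))
array([0 1 2])
>>> ak.log2(ak.array([1.0, 2.0, 4.0, 8.0]))
array([0 1 2 3])
>>> ak.log1p(ak.array([0.0]))
array([0])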

arkouda.lookup(keys, values, arguments, fillvalue=-1)[source]

Apply the function defined by the mapping keys –> values to arguments.

Parameters:
  • keys ((sequence of) array-like) – The domain of the function. Entries must be unique (if a sequence of arrays is given, each row is treated as a tuple-valued entry).

  • values (pdarray) – The range of the function. Must be same length as keys.

  • arguments ((sequence of) array-like) – The arguments on which to evaluate the function. Must have same dtype (or tuple of dtypes, for a sequence) as keys.

  • fillvalue (scalar) – The default value to return for arguments not in keys.

Returns:

evaluated – The result of evaluating the function over arguments.

Return type:

pdarray

Notes

While the values cannot be Strings (or other complex objects), the same result can be achieved by passing an arange as the values, then using the return as indices into the desired object.

Examples

# Lookup numbers by two-word name
>>> keys1 = ak.array(['twenty' for _ in range(5)])
>>> keys2 = ak.array(['one', 'two', 'three', 'four', 'five'])
>>> values = ak.array([21, 22, 23, 24, 25])
>>> args1 = ak.array(['twenty', 'thirty', 'twenty'])
>>> args2 = ak.array(['four', 'two', 'two'])
>>> aku.lookup([keys1, keys2], values, [args1, args2])
array([24, -1, 22])

# Other direction requires an intermediate index
>>> revkeys = values
>>> revindices = ak.arange(values.size)
>>> revargs = ak.array([24, 21, 22])
>>> idx = aku.lookup(revkeys, revindices, revargs)
>>> keys1[idx], keys2[idx]
(array(['twenty', 'twenty', 'twenty']), array(['four', 'one', 'two']))

arkouda.ls(filename: str, col_delim: str = ',', read_nested: bool = True) List[str][source]

This function calls the h5ls utility on a HDF5 file visible to the arkouda server or calls a function that imitates the result of h5ls on a Parquet file.

Parameters:
  • filename (str) – The name of the file to pass to the server

  • col_delim (str) – The delimiter used to separate columns if the file is a csv

  • read_nested (bool) – Default True, when True, SegArray objects will be read from the file. When False, SegArray (or other nested Parquet columns) will be ignored. Only used for Parquet files.

Returns:

The string output of the datasets from the server

Return type:

str

Raises:
  • TypeError – Raised if filename is not a str

  • ValueError – Raised if filename is empty or contains only whitespace

  • RuntimeError – Raised if error occurs in executing ls on an HDF5 file

Notes

  • This will need to be updated because Parquet will not technically support this when we update. Similar functionality will be added for Parquet in the future.

  • For CSV files without headers, please use ls_csv.

See also

ls_csv

arkouda.ls_csv(filename: str, col_delim: str = ',') List[str][source]

Used for identifying the datasets within a file when a CSV does not have a header.

Parameters:
  • filename (str) – The name of the file to pass to the server

  • col_delim (str) – The delimiter used to separate columns if the file is a csv

Returns:

The string output of the datasets from the server

Return type:

str

See also

ls

arkouda.max(pda: pdarray) arkouda.dtypes.numpy_scalars[source]

Return the maximum value of the array.

Parameters:

pda (pdarray) – Values for which to calculate the max

Returns:

The max calculated from the pda

Return type:

numpy_scalars

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown
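
Examples

A minimal sketch:

>>> A = ak.array([10, 5, 1, 3, 7, 2, 9, 0])
>>> ak.max(A)
10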

arkouda.maxk(pda: pdarray, k: arkouda.dtypes.int_scalars) pdarray[source]

Find the k maximum values of an array.

Returns the largest k values of an array, sorted

Parameters:
  • pda (pdarray) – Input array.

  • k (int_scalars) – The desired count of maximum values to be returned by the output.

Returns:

The maximum k values from pda, sorted

Return type:

pdarray, int

Raises:
  • TypeError – Raised if pda is not a pdarray or k is not an integer

  • ValueError – Raised if the pda is empty or k < 1

Notes

This call is equivalent in value to:

a[ak.argsort(a)[-k:]]

and generally outperforms this operation.

This reduction will see a significant drop in performance as k grows beyond a certain value. This value is system dependent, but generally about a k of 5 million is where performance degradation has been observed.

Examples

>>> A = ak.array([10,5,1,3,7,2,9,0])
>>> ak.maxk(A, 3)
array([7, 9, 10])
>>> ak.maxk(A, 4)
array([5, 7, 9, 10])
arkouda.mean(pda: pdarray) numpy.float64[source]

Return the mean of the array.

Parameters:

pda (pdarray) – Values for which to calculate the mean

Returns:

The mean calculated from the pda sum and size

Return type:

np.float64

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown
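
Examples

A minimal sketch (the mean of 0 through 9 is 4.5):

>>> ak.mean(ak.arange(10))
4.5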

arkouda.merge(left: DataFrame, right: DataFrame, on: str | List[str] | None = None, how: str = 'inner', left_suffix: str = '_x', right_suffix: str = '_y', convert_ints: bool = True, sort: bool = True) DataFrame[source]

Merge Arkouda DataFrames with a database-style join. The resulting dataframe contains rows from both DataFrames as specified by the merge condition (based on the “how” and “on” parameters).

Based on pandas merge functionality. https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html

Parameters:
  • left (DataFrame) – The Left DataFrame to be joined.

  • right (DataFrame) – The Right DataFrame to be joined.

  • on (Optional[Union[str, List[str]]] = None) – The name or list of names of the DataFrame column(s) to join on. If on is None, this defaults to the intersection of the columns in both DataFrames.

  • how (str, default = "inner") – The merge condition. Must be one of “inner”, “left”, “right”, or “outer”.

  • left_suffix (str, default = "_x") – A string indicating the suffix to add to columns from the left dataframe for overlapping column names in both left and right. Defaults to “_x”. Only used when how is “inner”.

  • right_suffix (str, default = "_y") – A string indicating the suffix to add to columns from the right dataframe for overlapping column names in both left and right. Defaults to “_y”. Only used when how is “inner”.

  • convert_ints (bool = True) – If True, convert columns with missing int values (due to the join) to float64. This is to match pandas. If False, do not convert the column dtypes. This has no effect when how = “inner”.

  • sort (bool = True) – If True, DataFrame is returned sorted by “on”. Otherwise, the DataFrame is not sorted.

Returns:

Joined Arkouda DataFrame.

Return type:

arkouda.dataframe.DataFrame

Note

Multiple column joins are only supported for integer columns.

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> from arkouda import merge
>>> left_df = ak.DataFrame({'col1': ak.arange(5), 'col2': -1 * ak.arange(5)})
>>> display(left_df)

   col1  col2
0     0     0
1     1    -1
2     2    -2
3     3    -3
4     4    -4

>>> right_df = ak.DataFrame({'col1': 2 * ak.arange(5), 'col2': 2 * ak.arange(5)})
>>> display(right_df)

   col1  col2
0     0     0
1     2     2
2     4     4
3     6     6
4     8     8

>>> merge(left_df, right_df, on = "col1")

   col1  col2_x  col2_y
0     0       0       0
1     2      -2       2
2     4      -4       4

>>> merge(left_df, right_df, on = "col1", how = "left")

   col1  col2_y  col2_x
0     0       0       0
1     1     nan      -1
2     2       2      -2
3     3     nan      -3
4     4       4      -4

>>> merge(left_df, right_df, on = "col1", how = "right")

   col1  col2_x  col2_y
0     0       0       0
1     2      -2       2
2     4      -4       4
3     6     nan       6
4     8     nan       8

>>> merge(left_df, right_df, on = "col1", how = "outer")

   col1  col2_y  col2_x
0     0       0       0
1     1     nan      -1
2     2       2      -2
3     3     nan      -3
4     4       4      -4
5     6       6     nan
6     8       8     nan

arkouda.min(pda: pdarray) arkouda.dtypes.numpy_scalars[source]

Return the minimum value of the array.

Parameters:

pda (pdarray) – Values for which to calculate the min

Returns:

The min calculated from the pda

Return type:

numpy_scalars

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown
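
Examples

A minimal sketch:

>>> A = ak.array([10, 5, 1, 3, 7, 2, 9, 0])
>>> ak.min(A)
0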

arkouda.mink(pda: pdarray, k: arkouda.dtypes.int_scalars) pdarray[source]

Find the k minimum values of an array.

Returns the smallest k values of an array, sorted

Parameters:
  • pda (pdarray) – Input array.

  • k (int_scalars) – The desired count of minimum values to be returned by the output.

Returns:

The minimum k values from pda, sorted

Return type:

pdarray

Raises:
  • TypeError – Raised if pda is not a pdarray

  • ValueError – Raised if the pda is empty or k < 1

Notes

This call is equivalent in value to:

a[ak.argsort(a)[:k]]

and generally outperforms this operation.

This reduction will see a significant drop in performance as k grows beyond a certain value. This value is system dependent, but generally about a k of 5 million is where performance degradation has been observed.

Examples

>>> A = ak.array([10,5,1,3,7,2,9,0])
>>> ak.mink(A, 3)
array([0, 1, 2])
>>> ak.mink(A, 4)
array([0, 1, 2, 3])
arkouda.mod(dividend, divisor) pdarray[source]

Returns the element-wise remainder of division.

Computes the remainder complementary to the floor_divide function. It is equivalent to np.mod, the remainder has the same sign as the divisor.

Parameters:
  • dividend – The array being acted on by the bases for the modular division.

  • divisor – The array that will be the bases for the modular division.

Returns:

Returns an array that contains the element-wise remainder of division.

Return type:

pdarray
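
Examples

A minimal sketch; note the remainder takes the sign of the divisor, matching np.mod (exact print formatting may vary):

>>> ak.mod(ak.array([5, -5]), 3)
array([2 1])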

arkouda.numeric_scalars
arkouda.numpy_scalars
arkouda.ones(size: arkouda.dtypes.int_scalars | str, dtype: numpy.dtype | type | str | arkouda.dtypes.BigInt = float64, max_bits: int | None = None) arkouda.pdarrayclass.pdarray[source]

Create a pdarray filled with ones.

Parameters:
  • size (int_scalars) – Size of the array (only rank-1 arrays supported)

  • dtype (Union[float64, int64, bool]) – Resulting array type, default float64

  • max_bits (int) – Specifies the maximum number of bits; only used for bigint pdarrays

Returns:

Ones of the requested size and dtype

Return type:

pdarray

Raises:

TypeError – Raised if the supplied dtype is not supported or if the size parameter is neither an int nor a str that is parseable to an int.

See also

zeros, ones_like

Examples

>>> ak.ones(5, dtype=ak.int64)
array([1, 1, 1, 1, 1])
>>> ak.ones(5, dtype=ak.float64)
array([1, 1, 1, 1, 1])
>>> ak.ones(5, dtype=ak.bool)
array([True, True, True, True, True])
arkouda.ones_like(pda: arkouda.pdarrayclass.pdarray) arkouda.pdarrayclass.pdarray[source]

Create a one-filled pdarray of the same size and dtype as an existing pdarray.

Parameters:

pda (pdarray) – Array to use for size and dtype

Returns:

Equivalent to ak.ones(pda.size, pda.dtype)

Return type:

pdarray

Raises:

TypeError – Raised if the pda parameter is not a pdarray.

See also

ones, zeros_like

Notes

Logic for generating the pdarray is delegated to the ak.ones method. Accordingly, the supported dtypes are those defined by the ak.ones method.

Examples

>>> ones = ak.ones(5, dtype=ak.int64)
>>> ak.ones_like(ones)
array([1, 1, 1, 1, 1])
>>> ones = ak.ones(5, dtype=ak.float64)
>>> ak.ones_like(ones)
array([1, 1, 1, 1, 1])
>>> ones = ak.ones(5, dtype=ak.bool)
>>> ak.ones_like(ones)
array([True, True, True, True, True])
arkouda.parity(pda: pdarray) pdarray[source]

Find the bit parity (XOR of all bits) for each integer in an array.

Parameters:

pda (pdarray, int64, uint64, bigint) – Input array (must be integral).

Returns:

parity – The parity of each element: 0 if even number of bits set, 1 if odd.

Return type:

pdarray

Raises:

TypeError – If input array is not int64, uint64, or bigint

Examples

>>> A = ak.arange(10)
>>> ak.parity(A)
array([0, 1, 1, 0, 1, 0, 0, 1, 1, 0])
class arkouda.pdarray(name: str, mydtype: numpy.dtype | str, size: arkouda.dtypes.int_scalars, ndim: arkouda.dtypes.int_scalars, shape: Sequence[int], itemsize: arkouda.dtypes.int_scalars, max_bits: int | None = None)[source]

The basic arkouda array class. This class contains only the attributes of the array; the data resides on the arkouda server. When a server operation results in a new array, arkouda will create a pdarray instance that points to the array data on the server. As such, the user should not initialize pdarray instances directly.

name

The server-side identifier for the array

Type:

str

dtype

The element type of the array

Type:

dtype

size

The number of elements in the array

Type:

int_scalars

ndim

The rank of the array (currently only rank 1 arrays supported)

Type:

int_scalars

shape

A list or tuple containing the sizes of each dimension of the array

Type:

Sequence[int]

itemsize

The size in bytes of each element

Type:

int_scalars

property max_bits
property nbytes

The size of the pdarray in bytes.

Returns:

The size of the pdarray in bytes.

Return type:

int

BinOps
OpEqOps
objType = 'pdarray'
all() numpy.bool_[source]

Return True iff all elements of the array evaluate to True.

any() numpy.bool_[source]

Return True iff any element of the array evaluates to True.

argmax() numpy.int64 | numpy.uint64[source]

Return the index of the first occurrence of the array max value.

argmaxk(k: arkouda.dtypes.int_scalars) pdarray[source]

Finds the indices corresponding to the maximum “k” values.

Parameters:

k (int_scalars) – The desired count of maximum values to be returned by the output.

Returns:

Indices corresponding to the maximum k values, sorted

Return type:

pdarray, int

Raises:

TypeError – Raised if pda is not a pdarray

argmin() numpy.int64 | numpy.uint64[source]

Return the index of the first occurrence of the array min value

argmink(k: arkouda.dtypes.int_scalars) pdarray[source]

Finds the indices corresponding to the minimum “k” values.

Parameters:

k (int_scalars) – The desired count of minimum values to be returned by the output.

Returns:

Indices corresponding to the minimum k values from pda, sorted

Return type:

pdarray, int

Raises:

TypeError – Raised if pda is not a pdarray

astype(dtype) pdarray[source]

Cast values of pdarray to provided dtype

Parameters:

dtype (np.dtype or str) – Dtype to cast to

Returns:

An arkouda pdarray with values converted to the specified data type

Return type:

ak.pdarray

Notes

This is essentially shorthand for ak.cast(x, ‘<dtype>’) where x is a pdarray.
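
Examples

A minimal sketch:

>>> a = ak.array([1, 2, 3])
>>> b = a.astype(ak.float64)  # values become 1.0, 2.0, 3.0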

static attach(user_defined_name: str) pdarray[source]

class method to return a pdarray attached to the registered name in the arkouda server which was registered using register()

Parameters:

user_defined_name (str) – user defined name which array was registered under

Returns:

pdarray which is bound to the corresponding server side component which was registered with user_defined_name

Return type:

pdarray

Raises:

TypeError – Raised if user_defined_name is not a str

Notes

Registered names/pdarrays in the server are immune to deletion until they are unregistered.

Examples

>>> a = zeros(100)
>>> a.register("my_zeros")
>>> # potentially disconnect from server and reconnect to server
>>> b = ak.pdarray.attach("my_zeros")
>>> # ...other work...
>>> b.unregister()
bigint_to_uint_arrays() List[pdarray][source]

Creates a list of uint pdarrays from a bigint pdarray. The first item in return will be the highest 64 bits of the bigint pdarray and the last item will be the lowest 64 bits.

Returns:

A list of uint pdarrays where: The first item in return will be the highest 64 bits of the bigint pdarray and the last item will be the lowest 64 bits.

Return type:

List[pdarrays]

Raises:

RuntimeError – Raised if there is a server-side error thrown

Examples

>>> a = ak.arange(2**64, 2**64 + 5)
>>> a
array(["18446744073709551616" "18446744073709551617" "18446744073709551618"
"18446744073709551619" "18446744073709551620"])
>>> a.bigint_to_uint_arrays()
[array([1 1 1 1 1]), array([0 1 2 3 4])]
clz() pdarray[source]

Count the number of leading zeros in each element. See ak.clz.

corr(y: pdarray) numpy.float64[source]

Compute the correlation between self and y using pearson correlation coefficient.

Parameters:

y (pdarray) – Other pdarray used to calculate correlation

Returns:

The scalar correlation of the two arrays

Return type:

np.float64

Raises:
  • TypeError – Raised if y is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown

cov(y: pdarray) numpy.float64[source]

Compute the covariance between self and y.

Parameters:

y (pdarray) – Other pdarray used to calculate covariance

Returns:

The scalar covariance of the two arrays

Return type:

np.float64

Raises:
  • TypeError – Raised if y is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown

ctz() pdarray[source]

Count the number of trailing zeros in each element. See ak.ctz.

fill(value: arkouda.dtypes.numeric_scalars) None[source]

Fill the array (in place) with a constant value.

Parameters:

value (numeric_scalars)

Raises:

TypeError – Raised if value is not an int, int64, float, or float64
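
Examples

A minimal sketch (exact print formatting may vary):

>>> a = ak.zeros(4, dtype=ak.int64)
>>> a.fill(7)
>>> a
array([7 7 7 7])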

format_other(other) str[source]

Attempt to cast scalar other to the element dtype of this pdarray, and print the resulting value to a string (e.g. for sending to a server command). The user should not call this function directly.

Parameters:

other (object) – The scalar to be cast to the pdarray.dtype

Return type:

string representation of np.dtype corresponding to the other parameter

Raises:

TypeError – Raised if the other parameter cannot be converted to Numpy dtype

info() str[source]

Returns a JSON formatted string containing information about all components of self

Parameters:

None

Returns:

JSON string containing information about all components of self

Return type:

str

is_registered() numpy.bool_[source]

Return True iff the object is contained in the registry

Parameters:

None

Returns:

Indicates if the object is contained in the registry

Return type:

bool

Raises:

RuntimeError – Raised if there’s a server-side error thrown

Note

This will return True if the object is registered itself or as a component of another object

is_sorted() numpy.bool_[source]

Return True iff the array is monotonically non-decreasing.

Parameters:

None

Returns:

Indicates if the array is monotonically non-decreasing

Return type:

bool

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown

max() arkouda.dtypes.numpy_scalars[source]

Return the maximum value of the array.

maxk(k: arkouda.dtypes.int_scalars) pdarray[source]

Compute the maximum “k” values.

Parameters:

k (int_scalars) – The desired count of maximum values to be returned by the output.

Returns:

The maximum k values from pda

Return type:

pdarray, int

Raises:

TypeError – Raised if pda is not a pdarray

mean() numpy.float64[source]

Return the mean of the array.

min() arkouda.dtypes.numpy_scalars[source]

Return the minimum value of the array.

mink(k: arkouda.dtypes.int_scalars) pdarray[source]

Compute the minimum “k” values.

Parameters:

k (int_scalars) – The desired count of minimum values to be returned by the output.

Returns:

The minimum k values from pda

Return type:

pdarray, int

Raises:

TypeError – Raised if pda is not a pdarray

opeq(other, op)[source]
parity() pdarray[source]

Find the parity (XOR of all bits) in each element. See ak.parity.

popcount() pdarray[source]

Find the population (number of bits set) in each element. See ak.popcount.
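
Examples

A quick sketch; each output element is the number of set bits in the corresponding input element:

>>> a = ak.arange(10)
>>> a.popcount()
array([0 1 1 2 1 2 2 3 1 2])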

pretty_print_info() None[source]

Prints information about all components of self in a human readable format

Parameters:

None

Return type:

None

prod() numpy.float64[source]

Return the product of all elements in the array. Return value is always a np.float64 or np.int64.
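
Examples

A one-line sketch; with float input the result is an np.float64:

>>> ak.array([1.0, 2.0, 3.0, 4.0]).prod()
24.0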

register(user_defined_name: str) pdarray[source]

Register this pdarray with a user defined name in the arkouda server so it can be attached to later using pdarray.attach(). This is an in-place operation; registering a pdarray more than once will update the name in the registry and remove the previously registered name. A name can only be registered to one pdarray at a time.

Parameters:

user_defined_name (str) – user defined name array is to be registered under

Returns:

The same pdarray which is now registered with the arkouda server and has an updated name. This is an in-place modification, the original is returned to support a fluid programming style. Please note you cannot register two different pdarrays with the same name.

Return type:

pdarray

Raises:
  • TypeError – Raised if user_defined_name is not a str

  • RegistrationError – Raised if the server was unable to register the pdarray with the user_defined_name. If the user is attempting to register more than one pdarray with the same name, the former should be unregistered first to free up the registration name.

Notes

Registered names/pdarrays in the server are immune to deletion until they are unregistered.

Examples

>>> a = ak.zeros(100)
>>> a.register("my_zeros")
>>> # potentially disconnect from server and reconnect to server
>>> b = ak.pdarray.attach("my_zeros")
>>> # ...other work...
>>> b.unregister()
reshape(*shape, order='row_major')[source]

Gives a new shape to an array without changing its data.

Parameters:
  • shape (int, tuple of ints, or pdarray) – The new shape should be compatible with the original shape.

  • order (str {'row_major' | 'C' | 'column_major' | 'F'}) – Read the elements of the pdarray in this index order. By default, read the elements in row_major or C-like order, where the last index changes the fastest. If ‘column_major’ or ‘F’, read the elements in column_major or Fortran-like order, where the first index changes the fastest.

Returns:

An ArrayView object with the data from the array but with the new shape

Return type:

ArrayView
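
Examples

A minimal sketch (output omitted, since the ArrayView display format is not reproduced here):

>>> a = ak.arange(6)
>>> v = a.reshape(2, 3)                          # row-major (C-like) order
>>> w = a.reshape((2, 3), order='column_major')  # Fortran-like order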

rotl(other) pdarray[source]

Rotate bits left by <other>.

rotr(other) pdarray[source]

Rotate bits right by <other>.
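
Examples

A sketch covering both rotations on hypothetical small values; for non-negative integers whose high bits are clear, rotating left by 1 doubles the value, and rotr undoes it:

>>> a = ak.arange(1, 4)
>>> b = a.rotl(1)        # elementwise rotate left: 1, 2, 3 -> 2, 4, 6
>>> b.rotr(1).to_list()
[1, 2, 3]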

save(prefix_path: str, dataset: str = 'array', mode: str = 'truncate', compression: str | None = None, file_format: str = 'HDF5', file_type: str = 'distribute') str[source]

DEPRECATED Save the pdarray to HDF5 or Parquet. The result is a collection of files, one file per locale of the arkouda server, where each filename starts with prefix_path. HDF5 supports single files, in which case the file name will be only that provided. Each locale saves its chunk of the array to its corresponding file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files (must not already exist)

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, attempt to create new dataset in existing files.

  • compression (str (Optional)) – (None | “snappy” | “gzip” | “brotli” | “zstd” | “lz4”) Sets the compression type used with Parquet files

  • file_format (str {'HDF5', 'Parquet'}) – By default, saved files will be written to the HDF5 file format. If ‘Parquet’, the files will be written to the Parquet file format. This is case insensitive.

  • file_type (str ("single" | "distribute")) – Default: “distribute”. When set to single, dataset is written to a single file. When distribute, dataset is written to a file per locale. This is only supported by HDF5 files and has no impact on Parquet files.

Return type:

string message indicating result of save operation

Raises:
  • RuntimeError – Raised if a server-side error is thrown saving the pdarray

  • ValueError – Raised if there is an error in parsing the prefix path pointing to file write location or if the mode parameter is neither truncate nor append

  • TypeError – Raised if any one of the prefix_path, dataset, or mode parameters is not a string

Notes

The prefix_path must be visible to the arkouda server and the user must have write permission. Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales. If any of the output files already exist and the mode is ‘truncate’, they will be overwritten. If the mode is ‘append’ and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result. Previously, all files saved in Parquet format were given a .parquet file extension; if an older Parquet file is not being found, try loading it as if it had been saved with the extension. Any file extension can be used. The file I/O does not rely on the extension to determine the file format.

Examples

>>> a = ak.arange(25)
>>> # Saving without an extension
>>> a.save('path/prefix', dataset='array')
Saves the array to numLocales HDF5 files with the name ``cwd/path/name_prefix_LOCALE####``
>>> # Saving with an extension (HDF5)
>>> a.save('path/prefix.h5', dataset='array')
Saves the array to numLocales HDF5 files with the name
``cwd/path/name_prefix_LOCALE####.h5`` where #### is replaced by each locale number
>>> # Saving with an extension (Parquet)
>>> a.save('path/prefix.parquet', dataset='array', file_format='Parquet')
Saves the array in numLocales Parquet files with the name
``cwd/path/name_prefix_LOCALE####.parquet`` where #### is replaced by each locale number
slice_bits(low, high) pdarray[source]

Returns a pdarray containing only bits from low to high of self.

This is zero indexed and inclusive on both ends, so slicing the bottom 64 bits is pda.slice_bits(0, 63)

Parameters:
  • low (int) – The lowest bit included in the slice (inclusive) zero indexed, so the first bit is 0

  • high (int) – The highest bit included in the slice (inclusive)

Returns:

A new pdarray containing the bits of self from low to high

Return type:

pdarray

Raises:

RuntimeError – Raised if there is a server-side error thrown

Examples

>>> p = ak.array([2**65 + (2**64 - 1)])
>>> bin(p[0])
'0b101111111111111111111111111111111111111111111111111111111111111111'
>>> bin(p.slice_bits(64, 65)[0])
'0b10'
std(ddof: arkouda.dtypes.int_scalars = 0) numpy.float64[source]

Compute the standard deviation. See arkouda.std for details.

Parameters:

ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating std

Returns:

The scalar standard deviation of the array

Return type:

np.float64

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown
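
Examples

A worked check on hypothetical data: the mean is 5 and the squared deviations sum to 32 over 8 elements, so with the default ddof=0 the variance is 4 and the standard deviation is 2:

>>> a = ak.array([2, 4, 4, 4, 5, 5, 7, 9])
>>> a.std()
2.0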

sum() arkouda.dtypes.numeric_and_bool_scalars[source]

Return the sum of all elements in the array.

to_csv(prefix_path: str, dataset: str = 'array', col_delim: str = ',', overwrite: bool = False)[source]

Write pdarray to CSV file(s). File will contain a single column with the pdarray data. All CSV Files written by Arkouda include a header denoting data types of the columns.

Parameters:
  • prefix_path (str) – The filename prefix to be used for saving files. Files will have _LOCALE#### appended when they are written to disk.

  • dataset (str) – Column name to save the pdarray under. Defaults to “array”.

  • col_delim (str) – Defaults to “,”. Value to be used to separate columns within the file. Please be sure that the value used DOES NOT appear in your dataset.

  • overwrite (bool) – Defaults to False. If True, any existing files matching your provided prefix_path will be overwritten. If False, an error will be returned if existing files are found.

Returns:

str response message

Raises:
  • ValueError – Raised if all datasets are not present in all parquet files or if one or more of the specified files do not exist

  • RuntimeError – Raised if one or more of the specified files cannot be opened. If allow_errors is true this may be raised if no values are returned from the server.

  • TypeError – Raised if we receive an unknown arkouda_type returned from the server

Notes
  • CSV format is not currently supported by load/load_all operations

  • The column delimiter is expected to be the same for column names and data

  • Be sure that column delimiters are not found within your data.

  • All CSV files must delimit rows using newline (\n) at this time.
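
Examples

A sketch with a hypothetical, server-writable path:

>>> a = ak.arange(5)
>>> a.to_csv('path/csv_prefix')
Writes one file per locale named ``path/csv_prefix_LOCALE####``, each holding a header and the local chunk of the data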

to_cuda()[source]

Convert the array to a Numba DeviceNDArray, transferring array data from the arkouda server to Python via an ndarray. If the array exceeds a built-in size limit, a RuntimeError is raised.

Returns:

A Numba ndarray with the same attributes and data as the pdarray; on GPU

Return type:

numba.DeviceNDArray

Raises:
  • ImportError – Raised if CUDA is not available

  • ModuleNotFoundError – Raised if Numba is either not installed or not enabled

  • RuntimeError – Raised if there is a server-side error thrown in the course of retrieving the pdarray.

Notes

The number of bytes in the array cannot exceed client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting client.maxTransferBytes to a larger value, but proceed with caution.

See also

array

Examples

>>> a = ak.arange(0, 5, 1)
>>> a.to_cuda()
array([0, 1, 2, 3, 4])
>>> type(a.to_cuda())
numpy.devicendarray
to_hdf(prefix_path: str, dataset: str = 'array', mode: str = 'truncate', file_type: str = 'distribute') str[source]

Save the pdarray to HDF5. The object can be saved to a collection of files or a single file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files (must not already exist)

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, attempt to create new dataset in existing files.

  • file_type (str ("single" | "distribute")) – Default: “distribute”. When set to single, dataset is written to a single file. When distribute, dataset is written to a file per locale. This is only supported by HDF5 files and has no impact on Parquet files.

Return type:

string message indicating result of save operation

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray

Notes

  • The prefix_path must be visible to the arkouda server and the user must have write permission.

  • Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales for file_type=’distribute’. Otherwise, the file name will be prefix_path.

  • If any of the output files already exist and the mode is ‘truncate’, they will be overwritten. If the mode is ‘append’ and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result.

  • Any file extension can be used. The file I/O does not rely on the extension to determine the file format.

Examples

>>> a = ak.arange(25)
>>> # Saving without an extension
>>> a.to_hdf('path/prefix', dataset='array')
Saves the array to numLocales HDF5 files with the name ``cwd/path/name_prefix_LOCALE####``
>>> # Saving with an extension (HDF5)
>>> a.to_hdf('path/prefix.h5', dataset='array')
Saves the array to numLocales HDF5 files with the name
``cwd/path/name_prefix_LOCALE####.h5`` where #### is replaced by each locale number
>>> # Saving to a single file
>>> a.to_hdf('path/prefix.hdf5', dataset='array', file_type='single')
Saves the array to a single HDF5 file on the root node.
``cwd/path/name_prefix.hdf5``
to_list() List[source]

Convert the array to a list, transferring array data from the Arkouda server to client-side Python. Note: if the pdarray size exceeds client.maxTransferBytes, a RuntimeError is raised.

Returns:

A list with the same data as the pdarray

Return type:

list

Raises:

RuntimeError – Raised if there is a server-side error thrown, if the pdarray size exceeds the built-in client.maxTransferBytes size limit, or if the bytes received do not match the expected number of bytes

Notes

The number of bytes in the array cannot exceed client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting client.maxTransferBytes to a larger value, but proceed with caution.

See also

to_ndarray

Examples

>>> a = ak.arange(0, 5, 1)
>>> a.to_list()
[0, 1, 2, 3, 4]
>>> type(a.to_list())
list
to_ndarray() numpy.ndarray[source]

Convert the array to a np.ndarray, transferring array data from the Arkouda server to client-side Python. Note: if the pdarray size exceeds client.maxTransferBytes, a RuntimeError is raised.

Returns:

A numpy ndarray with the same attributes and data as the pdarray

Return type:

np.ndarray

Raises:

RuntimeError – Raised if there is a server-side error thrown, if the pdarray size exceeds the built-in client.maxTransferBytes size limit, or if the bytes received do not match the expected number of bytes

Notes

The number of bytes in the array cannot exceed client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting client.maxTransferBytes to a larger value, but proceed with caution.

See also

array, to_list

Examples

>>> a = ak.arange(0, 5, 1)
>>> a.to_ndarray()
array([0, 1, 2, 3, 4])
>>> type(a.to_ndarray())
numpy.ndarray
to_parquet(prefix_path: str, dataset: str = 'array', mode: str = 'truncate', compression: str | None = None) str[source]

Save the pdarray to Parquet. The result is a collection of files, one file per locale of the arkouda server, where each filename starts with prefix_path. Each locale saves its chunk of the array to its corresponding file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files (must not already exist)

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, attempt to create new dataset in existing files.

  • compression (str (Optional)) – (None | “snappy” | “gzip” | “brotli” | “zstd” | “lz4”) Sets the compression type used with Parquet files

Return type:

string message indicating result of save operation

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray

Notes

  • The prefix_path must be visible to the arkouda server and the user must have write permission.

  • Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales.

  • ‘append’ write mode is supported, but is not efficient.

  • If any of the output files already exist and the mode is ‘truncate’, they will be overwritten. If the mode is ‘append’ and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result.

  • Any file extension can be used. The file I/O does not rely on the extension to determine the file format.

Examples

>>> a = ak.arange(25)
>>> # Saving without an extension
>>> a.to_parquet('path/prefix', dataset='array')
Saves the array to numLocales Parquet files with the name ``cwd/path/name_prefix_LOCALE####``
>>> # Saving with an extension (Parquet)
>>> a.to_parquet('path/prefix.parquet', dataset='array')
Saves the array to numLocales Parquet files with the name
``cwd/path/name_prefix_LOCALE####.parquet`` where #### is replaced by each locale number
transfer(hostname: str, port: arkouda.dtypes.int_scalars)[source]

Sends a pdarray to a different Arkouda server

Parameters:
  • hostname (str) – The hostname where the Arkouda server intended to receive the pdarray is running.

  • port (int_scalars) – The port to send the array over. This needs to be an open port (i.e., not one that the Arkouda server is running on) and will open up numLocales ports in succession, so ports in the range {port..(port+numLocales)} will be used (e.g., if an Arkouda server is running on 4 nodes and port 1234 is passed, Arkouda will use ports 1234, 1235, 1236, and 1237 to send the array data). This port must match the port passed to the call to ak.receive_array().

Return type:

A message indicating a complete transfer

Raises:
  • ValueError – Raised if the op is not within the pdarray.BinOps set

  • TypeError – Raised if other is not a pdarray or the pdarray.dtype is not a supported dtype
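
Examples

A sketch only; the hostname and port are hypothetical, and a matching ak.receive_array() call must be made on the destination server:

>>> a = ak.arange(100)
>>> a.transfer('dest-server-hostname', 1234)
Sends the array data over ports 1234..(1234+numLocales) to the receiving Arkouda server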

unregister() None[source]

Unregister a pdarray in the arkouda server which was previously registered using register() and/or attached to using attach().

Return type:

None

Raises:

RuntimeError – Raised if the server could not find the internal name/symbol to remove

Notes

Registered names/pdarrays in the server are immune to deletion until they are unregistered.

Examples

>>> a = ak.zeros(100)
>>> a.register("my_zeros")
>>> # potentially disconnect from server and reconnect to server
>>> b = ak.pdarray.attach("my_zeros")
>>> # ...other work...
>>> b.unregister()
update_hdf(prefix_path: str, dataset: str = 'array', repack: bool = True)[source]

Overwrite the dataset with the name provided with this pdarray. If the dataset does not exist, it is added.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files

  • repack (bool) – Default: True HDF5 does not release memory on delete. When True, the inaccessible data (that was overwritten) is removed. When False, the data remains, but is inaccessible. Setting to false will yield better performance, but will cause file sizes to expand.

Return type:

str - success message if successful

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray

Notes

  • If the file does not contain a File_Format attribute to indicate how it was saved, the file name is checked for _LOCALE#### to determine if it is distributed.

  • If the dataset provided does not exist, it will be added
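
Examples

A sketch with a hypothetical path: write a dataset, then overwrite it in place with new values:

>>> a = ak.arange(25)
>>> a.to_hdf('path/prefix', dataset='array')
>>> (a * 2).update_hdf('path/prefix', dataset='array')
Replaces the ``array`` dataset in the existing files with the doubled values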

value_counts()[source]

Count the occurrences of the unique values of self.

Returns:

  • unique_values (pdarray) – The unique values, sorted in ascending order

  • counts (pdarray, int64) – The number of times the corresponding unique value occurs

Examples

>>> ak.array([2, 0, 2, 4, 0, 0]).value_counts()
(array([0, 2, 4]), array([3, 2, 1]))
var(ddof: arkouda.dtypes.int_scalars = 0) numpy.float64[source]

Compute the variance. See arkouda.var for details.

Parameters:

ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating var

Returns:

The scalar variance of the array

Return type:

np.float64

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • ValueError – Raised if the ddof >= pdarray size

  • RuntimeError – Raised if there’s a server-side error thrown
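
Examples

Same hypothetical data as the std example above; with the default ddof=0 the variance is the square of the standard deviation:

>>> a = ak.array([2, 4, 4, 4, 5, 5, 7, 9])
>>> a.var()
4.0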

class arkouda.pdarray(name: str, mydtype: numpy.dtype | str, size: arkouda.dtypes.int_scalars, ndim: arkouda.dtypes.int_scalars, shape: Sequence[int], itemsize: arkouda.dtypes.int_scalars, max_bits: int | None = None)[source]

The basic arkouda array class. This class contains only the attributies of the array; the data resides on the arkouda server. When a server operation results in a new array, arkouda will create a pdarray instance that points to the array data on the server. As such, the user should not initialize pdarray instances directly.

name

The server-side identifier for the array

Type:

str

dtype

The element type of the array

Type:

dtype

size

The number of elements in the array

Type:

int_scalars

ndim

The rank of the array (currently only rank 1 arrays supported)

Type:

int_scalars

shape

A list or tuple containing the sizes of each dimension of the array

Type:

Sequence[int]

itemsize

The size in bytes of each element

Type:

int_scalars

property max_bits
property nbytes

The size of the pdarray in bytes.

Returns:

The size of the pdarray in bytes.

Return type:

int

BinOps
OpEqOps
objType = 'pdarray'
all() numpy.bool_[source]

Return True iff all elements of the array evaluate to True.

any() numpy.bool_[source]

Return True iff any element of the array evaluates to True.

argmax() numpy.int64 | numpy.uint64[source]

Return the index of the first occurrence of the array max value.

argmaxk(k: arkouda.dtypes.int_scalars) pdarray[source]

Finds the indices corresponding to the maximum “k” values.

Parameters:

k (int_scalars) – The desired count of maximum values to be returned by the output.

Returns:

Indices corresponding to the maximum k values, sorted

Return type:

pdarray, int

Raises:

TypeError – Raised if pda is not a pdarray

argmin() numpy.int64 | numpy.uint64[source]

Return the index of the first occurrence of the array min value

argmink(k: arkouda.dtypes.int_scalars) pdarray[source]

Compute the minimum “k” values.

Parameters:

k (int_scalars) – The desired count of maximum values to be returned by the output.

Returns:

Indices corresponding to the maximum k values from pda

Return type:

pdarray, int

Raises:

TypeError – Raised if pda is not a pdarray

astype(dtype) pdarray[source]

Cast values of pdarray to provided dtype

Parameters:

dtype (np.dtype or str) – Dtype to cast to

Returns:

An arkouda pdarray with values converted to the specified data type

Return type:

ak.pdarray

Notes

This is essentially shorthand for ak.cast(x, ‘<dtype>’) where x is a pdarray.

static attach(user_defined_name: str) pdarray[source]

class method to return a pdarray attached to the registered name in the arkouda server which was registered using register()

Parameters:

user_defined_name (str) – user defined name which array was registered under

Returns:

pdarray which is bound to the corresponding server side component which was registered with user_defined_name

Return type:

pdarray

Raises:

TypeError – Raised if user_defined_name is not a str

Notes

Registered names/pdarrays in the server are immune to deletion until they are unregistered.

Examples

>>> a = zeros(100)
>>> a.register("my_zeros")
>>> # potentially disconnect from server and reconnect to server
>>> b = ak.pdarray.attach("my_zeros")
>>> # ...other work...
>>> b.unregister()
bigint_to_uint_arrays() List[pdarray][source]

Creates a list of uint pdarrays from a bigint pdarray. The first item in return will be the highest 64 bits of the bigint pdarray and the last item will be the lowest 64 bits.

Returns:

A list of uint pdarrays where: The first item in return will be the highest 64 bits of the bigint pdarray and the last item will be the lowest 64 bits.

Return type:

List[pdarrays]

Raises:

RuntimeError – Raised if there is a server-side error thrown

Examples

>>> a = ak.arange(2**64, 2**64 + 5)
>>> a
array(["18446744073709551616" "18446744073709551617" "18446744073709551618"
"18446744073709551619" "18446744073709551620"])
>>> a.bigint_to_uint_arrays()
[array([1 1 1 1 1]), array([0 1 2 3 4])]
clz() pdarray[source]

Count the number of leading zeros in each element. See ak.clz.

corr(y: pdarray) numpy.float64[source]

Compute the correlation between self and y using pearson correlation coefficient.

Parameters:

y (pdarray) – Other pdarray used to calculate correlation

Returns:

The scalar correlation of the two arrays

Return type:

np.float64

Raises:
  • TypeError – Raised if y is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown

cov(y: pdarray) numpy.float64[source]

Compute the covariance between self and y.

Parameters:

y (pdarray) – Other pdarray used to calculate covariance

Returns:

The scalar covariance of the two arrays

Return type:

np.float64

Raises:
  • TypeError – Raised if y is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown

ctz() pdarray[source]

Count the number of trailing zeros in each element. See ak.ctz.

fill(value: arkouda.dtypes.numeric_scalars) None[source]

Fill the array (in place) with a constant value.

Parameters:

value (numeric_scalars)

Raises:

TypeError – Raised if value is not an int, int64, float, or float64

format_other(other) str[source]

Attempt to cast scalar other to the element dtype of this pdarray, and print the resulting value to a string (e.g. for sending to a server command). The user should not call this function directly.

Parameters:

other (object) – The scalar to be cast to the pdarray.dtype

Return type:

string representation of np.dtype corresponding to the other parameter

Raises:

TypeError – Raised if the other parameter cannot be converted to Numpy dtype

info() str[source]

Returns a JSON formatted string containing information about all components of self

Parameters:

None

Returns:

JSON string containing information about all components of self

Return type:

str

is_registered() numpy.bool_[source]

Return True iff the object is contained in the registry

Parameters:

None

Returns:

Indicates if the object is contained in the registry

Return type:

bool

Raises:

RuntimeError – Raised if there’s a server-side error thrown

Note

This will return True if the object is registered itself or as a component of another object

is_sorted() numpy.bool_[source]

Return True iff the array is monotonically non-decreasing.

Parameters:

None

Returns:

Indicates if the array is monotonically non-decreasing

Return type:

bool

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown

max() arkouda.dtypes.numpy_scalars[source]

Return the maximum value of the array.

maxk(k: arkouda.dtypes.int_scalars) pdarray[source]

Compute the maximum “k” values.

Parameters:

k (int_scalars) – The desired count of maximum values to be returned by the output.

Returns:

The maximum k values from pda

Return type:

pdarray, int

Raises:

TypeError – Raised if pda is not a pdarray

mean() numpy.float64[source]

Return the mean of the array.

min() arkouda.dtypes.numpy_scalars[source]

Return the minimum value of the array.

mink(k: arkouda.dtypes.int_scalars) pdarray[source]

Compute the minimum “k” values.

Parameters:

k (int_scalars) – The desired count of maximum values to be returned by the output.

Returns:

The maximum k values from pda

Return type:

pdarray, int

Raises:

TypeError – Raised if pda is not a pdarray

opeq(other, op)[source]
parity() pdarray[source]

Find the parity (XOR of all bits) in each element. See ak.parity.

popcount() pdarray[source]

Find the population (number of bits set) in each element. See ak.popcount.

pretty_print_info() None[source]

Prints information about all components of self in a human readable format

Parameters:

None

Return type:

None

prod() numpy.float64[source]

Return the product of all elements in the array. Return value is always a np.float64 or np.int64.

register(user_defined_name: str) pdarray[source]

Register this pdarray with a user defined name in the arkouda server so it can be attached to later using pdarray.attach() This is an in-place operation, registering a pdarray more than once will update the name in the registry and remove the previously registered name. A name can only be registered to one pdarray at a time.

Parameters:

user_defined_name (str) – user defined name array is to be registered under

Returns:

The same pdarray which is now registered with the arkouda server and has an updated name. This is an in-place modification, the original is returned to support a fluid programming style. Please note you cannot register two different pdarrays with the same name.

Return type:

pdarray

Raises:
  • TypeError – Raised if user_defined_name is not a str

  • RegistrationError – If the server was unable to register the pdarray with the user_defined_name If the user is attempting to register more than one pdarray with the same name, the former should be unregistered first to free up the registration name.

Notes

Registered names/pdarrays in the server are immune to deletion until they are unregistered.

Examples

>>> a = zeros(100)
>>> a.register("my_zeros")
>>> # potentially disconnect from server and reconnect to server
>>> b = ak.pdarray.attach("my_zeros")
>>> # ...other work...
>>> b.unregister()
reshape(*shape, order='row_major')[source]

Gives a new shape to an array without changing its data.

Parameters:
  • shape (int, tuple of ints, or pdarray) – The new shape should be compatible with the original shape.

  • order (str {'row_major' | 'C' | 'column_major' | 'F'}) – Read the elements of the pdarray in this index order By default, read the elements in row_major or C-like order where the last index changes the fastest If ‘column_major’ or ‘F’, read the elements in column_major or Fortran-like order where the first index changes the fastest

Returns:

An arrayview object with the data from the array but with the new shape

Return type:

ArrayView

rotl(other) pdarray[source]

Rotate bits left by <other>.

rotr(other) pdarray[source]

Rotate bits right by <other>.

save(prefix_path: str, dataset: str = 'array', mode: str = 'truncate', compression: str | None = None, file_format: str = 'HDF5', file_type: str = 'distribute') str[source]

DEPRECATED Save the pdarray to HDF5 or Parquet. The result is a collection of files, one file per locale of the arkouda server, where each filename starts with prefix_path. HDF5 support single files, in which case the file name will only be that provided. Each locale saves its chunk of the array to its corresponding file. :param prefix_path: Directory and filename prefix that all output files share :type prefix_path: str :param dataset: Name of the dataset to create in files (must not already exist) :type dataset: str :param mode: By default, truncate (overwrite) output files, if they exist.

If ‘append’, attempt to create new dataset in existing files.

Parameters:
  • compression (str (Optional)) – (None | “snappy” | “gzip” | “brotli” | “zstd” | “lz4”) Sets the compression type used with Parquet files

  • file_format (str {'HDF5', 'Parquet'}) – By default, saved files will be written to the HDF5 file format. If ‘Parquet’, the files will be written to the Parquet file format. This is case insensitive.

  • file_type (str ("single" | "distribute")) – Default: “distribute” When set to single, dataset is written to a single file. When distribute, dataset is written on a file per locale. This is only supported by HDF5 files and will have no impact of Parquet Files.

Return type:

string message indicating result of save operation

Raises:
  • RuntimeError – Raised if a server-side error is thrown saving the pdarray

  • ValueError – Raised if there is an error in parsing the prefix path pointing to file write location or if the mode parameter is neither truncate nor append

  • TypeError – Raised if any one of the prefix_path, dataset, or mode parameters is not a string

Notes

The prefix_path must be visible to the arkouda server and the user must have write permission. Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales. If any of the output files already exist and the mode is ‘truncate’, they will be overwritten. If the mode is ‘append’ and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result. Previously all files saved in Parquet format were saved with a .parquet file extension. This will require you to use load as if you saved the file with the extension. Try this if an older file is not being found. Any file extension can be used.The file I/O does not rely on the extension to determine the file format.

Examples

>>> a = ak.arange(25)
>>> # Saving without an extension
>>> a.save('path/prefix', dataset='array')
Saves the array to numLocales HDF5 files with the name ``cwd/path/name_prefix_LOCALE####``
>>> # Saving with an extension (HDF5)
>>> a.save('path/prefix.h5', dataset='array')
Saves the array to numLocales HDF5 files with the name
``cwd/path/name_prefix_LOCALE####.h5`` where #### is replaced by each locale number
>>> # Saving with an extension (Parquet)
>>> a.save('path/prefix.parquet', dataset='array', file_format='Parquet')
Saves the array in numLocales Parquet files with the name
``cwd/path/name_prefix_LOCALE####.parquet`` where #### is replaced by each locale number
slice_bits(low, high) pdarray[source]

Returns a pdarray containing only bits from low to high of self.

This is zero indexed and inclusive on both ends, so slicing the bottom 64 bits is pda.slice_bits(0, 63)

Parameters:
  • low (int) – The lowest bit included in the slice (inclusive) zero indexed, so the first bit is 0

  • high (int) – The highest bit included in the slice (inclusive)

Returns:

A new pdarray containing the bits of self from low to high

Return type:

pdarray

Raises:

RuntimeError – Raised if there is a server-side error thrown

Examples

>>> p = ak.array([2**65 + (2**64 - 1)])
>>> bin(p[0])
'0b101111111111111111111111111111111111111111111111111111111111111111'
>>> bin(p.slice_bits(64, 65)[0])
'0b10'
std(ddof: arkouda.dtypes.int_scalars = 0) numpy.float64[source]

Compute the standard deviation. See arkouda.std for details.

Parameters:

ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating std

Returns:

The scalar standard deviation of the array

Return type:

np.float64

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown

sum() arkouda.dtypes.numeric_and_bool_scalars[source]

Return the sum of all elements in the array.

to_csv(prefix_path: str, dataset: str = 'array', col_delim: str = ',', overwrite: bool = False)[source]

Write pdarray to CSV file(s). File will contain a single column with the pdarray data. All CSV Files written by Arkouda include a header denoting data types of the columns.

prefix_path: str

The filename prefix to be used for saving files. Files will have _LOCALE#### appended when they are written to disk.

dataset: str

Column name to save the pdarray under. Defaults to “array”.

col_delim: str

Defaults to “,”. Value to be used to separate columns within the file. Please be sure that the value used DOES NOT appear in your dataset.

overwrite: bool

Defaults to False. If True, any existing files matching your provided prefix_path will be overwritten. If False, an error will be returned if existing files are found.

str reponse message

ValueError

Raised if all datasets are not present in all parquet files or if one or more of the specified files do not exist

RuntimeError

Raised if one or more of the specified files cannot be opened. If allow_errors is true this may be raised if no values are returned from the server.

TypeError

Raised if we receive an unknown arkouda_type returned from the server

  • CSV format is not currently supported by load/load_all operations

  • The column delimiter is expected to be the same for column names and data

  • Be sure that column delimiters are not found within your data.

  • All CSV files must delimit rows using newline (`

`) at this time.

to_cuda()[source]

Convert the array to a Numba DeviceND array, transferring array data from the arkouda server to Python via ndarray. If the array exceeds a builtin size limit, a RuntimeError is raised.

Returns:

A Numba ndarray with the same attributes and data as the pdarray; on GPU

Return type:

numba.DeviceNDArray

Raises:
  • ImportError – Raised if CUDA is not available

  • ModuleNotFoundError – Raised if Numba is either not installed or not enabled

  • RuntimeError – Raised if there is a server-side error thrown in the course of retrieving the pdarray.

Notes

The number of bytes in the array cannot exceed client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting client.maxTransferBytes to a larger value, but proceed with caution.

See also

array

Examples

>>> a = ak.arange(0, 5, 1)
>>> a.to_cuda()
array([0, 1, 2, 3, 4])
>>> type(a.to_cuda())
numpy.devicendarray
to_hdf(prefix_path: str, dataset: str = 'array', mode: str = 'truncate', file_type: str = 'distribute') str[source]

Save the pdarray to HDF5. The object can be saved to a collection of files or single file. :param prefix_path: Directory and filename prefix that all output files share :type prefix_path: str :param dataset: Name of the dataset to create in files (must not already exist) :type dataset: str :param mode: By default, truncate (overwrite) output files, if they exist.

If ‘append’, attempt to create new dataset in existing files.

Parameters:

file_type (str ("single" | "distribute")) – Default: “distribute” When set to single, dataset is written to a single file. When distribute, dataset is written on a file per locale. This is only supported by HDF5 files and will have no impact of Parquet Files.

Return type:

string message indicating result of save operation

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray

Notes

  • The prefix_path must be visible to the arkouda server and the user must

have write permission. - Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales for file_type=’distribute’. Otherwise, the file name will be prefix_path. - If any of the output files already exist and the mode is ‘truncate’, they will be overwritten. If the mode is ‘append’ and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result. - Any file extension can be used.The file I/O does not rely on the extension to determine the file format.

Examples

>>> a = ak.arange(25)
>>> # Saving without an extension
>>> a.to_hdf('path/prefix', dataset='array')
Saves the array to numLocales HDF5 files with the name ``cwd/path/name_prefix_LOCALE####``
>>> # Saving with an extension (HDF5)
>>> a.to_hdf('path/prefix.h5', dataset='array')
Saves the array to numLocales HDF5 files with the name
``cwd/path/name_prefix_LOCALE####.h5`` where #### is replaced by each locale number
>>> # Saving to a single file
>>> a.to_hdf('path/prefix.hdf5', dataset='array', file_type='single')
Saves the array in to single hdf5 file on the root node.
``cwd/path/name_prefix.hdf5``
to_list() List[source]

Convert the array to a list, transferring array data from the Arkouda server to client-side Python. Note: if the pdarray size exceeds client.maxTransferBytes, a RuntimeError is raised.

Returns:

A list with the same data as the pdarray

Return type:

list

Raises:

RuntimeError – Raised if there is a server-side error thrown, if the pdarray size exceeds the built-in client.maxTransferBytes size limit, or if the bytes received does not match expected number of bytes

Notes

The number of bytes in the array cannot exceed client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting client.maxTransferBytes to a larger value, but proceed with caution.

See also

to_ndarray

Examples

>>> a = ak.arange(0, 5, 1)
>>> a.to_list()
[0, 1, 2, 3, 4]
>>> type(a.to_list())
list
to_ndarray() numpy.ndarray[source]

Convert the array to a np.ndarray, transferring array data from the Arkouda server to client-side Python. Note: if the pdarray size exceeds client.maxTransferBytes, a RuntimeError is raised.

Returns:

A numpy ndarray with the same attributes and data as the pdarray

Return type:

np.ndarray

Raises:

RuntimeError – Raised if there is a server-side error thrown, if the pdarray size exceeds the built-in client.maxTransferBytes size limit, or if the bytes received does not match expected number of bytes

Notes

The number of bytes in the array cannot exceed client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting client.maxTransferBytes to a larger value, but proceed with caution.

See also

array, to_list

Examples

>>> a = ak.arange(0, 5, 1)
>>> a.to_ndarray()
array([0, 1, 2, 3, 4])
>>> type(a.to_ndarray())
numpy.ndarray
to_parquet(prefix_path: str, dataset: str = 'array', mode: str = 'truncate', compression: str | None = None) str[source]

Save the pdarray to Parquet. The result is a collection of files, one file per locale of the arkouda server, where each filename starts with prefix_path. Each locale saves its chunk of the array to its corresponding file. :param prefix_path: Directory and filename prefix that all output files share :type prefix_path: str :param dataset: Name of the dataset to create in files (must not already exist) :type dataset: str :param mode: By default, truncate (overwrite) output files, if they exist.

If ‘append’, attempt to create new dataset in existing files.

Parameters:

compression (str (Optional)) – (None | “snappy” | “gzip” | “brotli” | “zstd” | “lz4”) Sets the compression type used with Parquet files

Return type:

string message indicating result of save operation

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray

Notes

  • The prefix_path must be visible to the arkouda server and the user must

have write permission. - Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales for file_type=’distribute’. - ‘append’ write mode is supported, but is not efficient. - If any of the output files already exist and the mode is ‘truncate’, they will be overwritten. If the mode is ‘append’ and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result. - Any file extension can be used.The file I/O does not rely on the extension to determine the file format.

Examples

>>> a = ak.arange(25)
>>> # Saving without an extension
>>> a.to_parquet('path/prefix', dataset='array')
Saves the array to numLocales HDF5 files with the name ``cwd/path/name_prefix_LOCALE####``
>>> # Saving with an extension (HDF5)
>>> a.to_parqet('path/prefix.parquet', dataset='array')
Saves the array to numLocales HDF5 files with the name
``cwd/path/name_prefix_LOCALE####.parquet`` where #### is replaced by each locale number
transfer(hostname: str, port: arkouda.dtypes.int_scalars)[source]

Sends a pdarray to a different Arkouda server

Parameters:
  • hostname (str) – The hostname where the Arkouda server intended to receive the pdarray is running.

  • port (int_scalars) – The port to send the array over. This needs to be an open port (i.e., not one that the Arkouda server is running on). This will open up numLocales ports, each of which in succession, so will use ports of the range {port..(port+numLocales)} (e.g., running an Arkouda server of 4 nodes, port 1234 is passed as port, Arkouda will use ports 1234, 1235, 1236, and 1237 to send the array data). This port much match the port passed to the call to ak.receive_array().

Return type:

A message indicating a complete transfer

Raises:
  • ValueError – Raised if the op is not within the pdarray.BinOps set

  • TypeError – Raised if other is not a pdarray or the pdarray.dtype is not a supported dtype

unregister() None[source]

Unregister a pdarray in the arkouda server which was previously registered using register() and/or attahced to using attach()

Return type:

None

Raises:

RuntimeError – Raised if the server could not find the internal name/symbol to remove

Notes

Registered names/pdarrays in the server are immune to deletion until they are unregistered.

Examples

>>> a = zeros(100)
>>> a.register("my_zeros")
>>> # potentially disconnect from server and reconnect to server
>>> b = ak.pdarray.attach("my_zeros")
>>> # ...other work...
>>> b.unregister()
update_hdf(prefix_path: str, dataset: str = 'array', repack: bool = True)[source]

Overwrite the dataset with the name provided with this pdarray. If the dataset does not exist it is added

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files

  • repack (bool) – Default: True HDF5 does not release memory on delete. When True, the inaccessible data (that was overwritten) is removed. When False, the data remains, but is inaccessible. Setting to false will yield better performance, but will cause file sizes to expand.

Return type:

str - success message if successful

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray

Notes

  • If file does not contain File_Format attribute to indicate how it was saved, the file name is checked for _LOCALE#### to determine if it is distributed.

  • If the dataset provided does not exist, it will be added

value_counts()[source]

Count the occurrences of the unique values of self.

Returns:

  • unique_values (pdarray) – The unique values, sorted in ascending order

  • counts (pdarray, int64) – The number of times the corresponding unique value occurs

Examples

>>> ak.array([2, 0, 2, 4, 0, 0]).value_counts()
(array([0, 2, 4]), array([3, 2, 1]))
var(ddof: arkouda.dtypes.int_scalars = 0) numpy.float64[source]

Compute the variance. See arkouda.var for details.

Parameters:

ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating var

Returns:

The scalar variance of the array

Return type:

np.float64

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • ValueError – Raised if the ddof >= pdarray size

  • RuntimeError – Raised if there’s a server-side error thrown

class arkouda.pdarray(name: str, mydtype: numpy.dtype | str, size: arkouda.dtypes.int_scalars, ndim: arkouda.dtypes.int_scalars, shape: Sequence[int], itemsize: arkouda.dtypes.int_scalars, max_bits: int | None = None)[source]

The basic arkouda array class. This class contains only the attributies of the array; the data resides on the arkouda server. When a server operation results in a new array, arkouda will create a pdarray instance that points to the array data on the server. As such, the user should not initialize pdarray instances directly.

name

The server-side identifier for the array

Type:

str

dtype

The element type of the array

Type:

dtype

size

The number of elements in the array

Type:

int_scalars

ndim

The rank of the array (currently only rank 1 arrays supported)

Type:

int_scalars

shape

A list or tuple containing the sizes of each dimension of the array

Type:

Sequence[int]

itemsize

The size in bytes of each element

Type:

int_scalars

property max_bits
property nbytes

The size of the pdarray in bytes.

Returns:

The size of the pdarray in bytes.

Return type:

int

BinOps
OpEqOps
objType = 'pdarray'
all() numpy.bool_[source]

Return True iff all elements of the array evaluate to True.

any() numpy.bool_[source]

Return True iff any element of the array evaluates to True.

argmax() numpy.int64 | numpy.uint64[source]

Return the index of the first occurrence of the array max value.

argmaxk(k: arkouda.dtypes.int_scalars) pdarray[source]

Finds the indices corresponding to the maximum “k” values.

Parameters:

k (int_scalars) – The desired count of maximum values to be returned by the output.

Returns:

Indices corresponding to the maximum k values, sorted

Return type:

pdarray, int

Raises:

TypeError – Raised if pda is not a pdarray

argmin() numpy.int64 | numpy.uint64[source]

Return the index of the first occurrence of the array min value

argmink(k: arkouda.dtypes.int_scalars) pdarray[source]

Compute the minimum “k” values.

Parameters:

k (int_scalars) – The desired count of maximum values to be returned by the output.

Returns:

Indices corresponding to the maximum k values from pda

Return type:

pdarray, int

Raises:

TypeError – Raised if pda is not a pdarray

astype(dtype) pdarray[source]

Cast values of pdarray to provided dtype

Parameters:

dtype (np.dtype or str) – Dtype to cast to

Returns:

An arkouda pdarray with values converted to the specified data type

Return type:

ak.pdarray

Notes

This is essentially shorthand for ak.cast(x, ‘<dtype>’) where x is a pdarray.

static attach(user_defined_name: str) pdarray[source]

class method to return a pdarray attached to the registered name in the arkouda server which was registered using register()

Parameters:

user_defined_name (str) – user defined name which array was registered under

Returns:

pdarray which is bound to the corresponding server side component which was registered with user_defined_name

Return type:

pdarray

Raises:

TypeError – Raised if user_defined_name is not a str

Notes

Registered names/pdarrays in the server are immune to deletion until they are unregistered.

Examples

>>> a = zeros(100)
>>> a.register("my_zeros")
>>> # potentially disconnect from server and reconnect to server
>>> b = ak.pdarray.attach("my_zeros")
>>> # ...other work...
>>> b.unregister()
bigint_to_uint_arrays() List[pdarray][source]

Creates a list of uint pdarrays from a bigint pdarray. The first item in return will be the highest 64 bits of the bigint pdarray and the last item will be the lowest 64 bits.

Returns:

A list of uint pdarrays where: The first item in return will be the highest 64 bits of the bigint pdarray and the last item will be the lowest 64 bits.

Return type:

List[pdarrays]

Raises:

RuntimeError – Raised if there is a server-side error thrown

Examples

>>> a = ak.arange(2**64, 2**64 + 5)
>>> a
array(["18446744073709551616" "18446744073709551617" "18446744073709551618"
"18446744073709551619" "18446744073709551620"])
>>> a.bigint_to_uint_arrays()
[array([1 1 1 1 1]), array([0 1 2 3 4])]
clz() pdarray[source]

Count the number of leading zeros in each element. See ak.clz.

corr(y: pdarray) numpy.float64[source]

Compute the correlation between self and y using pearson correlation coefficient.

Parameters:

y (pdarray) – Other pdarray used to calculate correlation

Returns:

The scalar correlation of the two arrays

Return type:

np.float64

Raises:
  • TypeError – Raised if y is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown

cov(y: pdarray) numpy.float64[source]

Compute the covariance between self and y.

Parameters:

y (pdarray) – Other pdarray used to calculate covariance

Returns:

The scalar covariance of the two arrays

Return type:

np.float64

Raises:
  • TypeError – Raised if y is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown

ctz() pdarray[source]

Count the number of trailing zeros in each element. See ak.ctz.

fill(value: arkouda.dtypes.numeric_scalars) None[source]

Fill the array (in place) with a constant value.

Parameters:

value (numeric_scalars)

Raises:

TypeError – Raised if value is not an int, int64, float, or float64

format_other(other) str[source]

Attempt to cast scalar other to the element dtype of this pdarray, and print the resulting value to a string (e.g. for sending to a server command). The user should not call this function directly.

Parameters:

other (object) – The scalar to be cast to the pdarray.dtype

Return type:

string representation of np.dtype corresponding to the other parameter

Raises:

TypeError – Raised if the other parameter cannot be converted to Numpy dtype

info() str[source]

Returns a JSON formatted string containing information about all components of self

Parameters:

None

Returns:

JSON string containing information about all components of self

Return type:

str

is_registered() numpy.bool_[source]

Return True iff the object is contained in the registry

Parameters:

None

Returns:

Indicates if the object is contained in the registry

Return type:

bool

Raises:

RuntimeError – Raised if there’s a server-side error thrown

Note

This will return True if the object is registered itself or as a component of another object

is_sorted() numpy.bool_[source]

Return True iff the array is monotonically non-decreasing.

Parameters:

None

Returns:

Indicates if the array is monotonically non-decreasing

Return type:

bool

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown

max() arkouda.dtypes.numpy_scalars[source]

Return the maximum value of the array.

maxk(k: arkouda.dtypes.int_scalars) pdarray[source]

Compute the maximum “k” values.

Parameters:

k (int_scalars) – The desired count of maximum values to be returned by the output.

Returns:

The maximum k values from pda

Return type:

pdarray, int

Raises:

TypeError – Raised if pda is not a pdarray

mean() numpy.float64[source]

Return the mean of the array.

min() arkouda.dtypes.numpy_scalars[source]

Return the minimum value of the array.

mink(k: arkouda.dtypes.int_scalars) pdarray[source]

Compute the minimum “k” values.

Parameters:

k (int_scalars) – The desired count of maximum values to be returned by the output.

Returns:

The maximum k values from pda

Return type:

pdarray, int

Raises:

TypeError – Raised if pda is not a pdarray

opeq(other, op)[source]
parity() pdarray[source]

Find the parity (XOR of all bits) in each element. See ak.parity.

popcount() pdarray[source]

Find the population (number of bits set) in each element. See ak.popcount.

pretty_print_info() None[source]

Prints information about all components of self in a human readable format

Parameters:

None

Return type:

None

prod() numpy.float64[source]

Return the product of all elements in the array. Return value is always a np.float64 or np.int64.

register(user_defined_name: str) pdarray[source]

Register this pdarray with a user defined name in the arkouda server so it can be attached to later using pdarray.attach() This is an in-place operation, registering a pdarray more than once will update the name in the registry and remove the previously registered name. A name can only be registered to one pdarray at a time.

Parameters:

user_defined_name (str) – user defined name array is to be registered under

Returns:

The same pdarray which is now registered with the arkouda server and has an updated name. This is an in-place modification, the original is returned to support a fluid programming style. Please note you cannot register two different pdarrays with the same name.

Return type:

pdarray

Raises:
  • TypeError – Raised if user_defined_name is not a str

  • RegistrationError – If the server was unable to register the pdarray with the user_defined_name If the user is attempting to register more than one pdarray with the same name, the former should be unregistered first to free up the registration name.

Notes

Registered names/pdarrays in the server are immune to deletion until they are unregistered.

Examples

>>> a = zeros(100)
>>> a.register("my_zeros")
>>> # potentially disconnect from server and reconnect to server
>>> b = ak.pdarray.attach("my_zeros")
>>> # ...other work...
>>> b.unregister()
reshape(*shape, order='row_major')[source]

Gives a new shape to an array without changing its data.

Parameters:
  • shape (int, tuple of ints, or pdarray) – The new shape should be compatible with the original shape.

  • order (str {'row_major' | 'C' | 'column_major' | 'F'}) – Read the elements of the pdarray in this index order By default, read the elements in row_major or C-like order where the last index changes the fastest If ‘column_major’ or ‘F’, read the elements in column_major or Fortran-like order where the first index changes the fastest

Returns:

An arrayview object with the data from the array but with the new shape

Return type:

ArrayView
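
Examples

A minimal sketch; the exact repr of the resulting ArrayView is version-dependent, so only the calls are shown (assumes a connected client as ak):

>>> a = ak.arange(6)
>>> v = a.reshape(2, 3)              # row-major: rows [0 1 2] and [3 4 5]
>>> w = a.reshape(2, 3, order='F')   # column-major read order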

rotl(other) pdarray[source]

Rotate bits left by <other>.

rotr(other) pdarray[source]

Rotate bits right by <other>.
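
Examples

An illustrative sketch; the values are chosen small enough that no bits wrap around the word boundary, so rotl behaves like a left shift here (array repr is illustrative):

>>> a = ak.arange(1, 4)
>>> a.rotl(1)
array([2 4 6])
>>> a.rotl(1).rotr(1)
array([1 2 3])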

save(prefix_path: str, dataset: str = 'array', mode: str = 'truncate', compression: str | None = None, file_format: str = 'HDF5', file_type: str = 'distribute') str[source]

DEPRECATED: Save the pdarray to HDF5 or Parquet. The result is a collection of files, one file per locale of the arkouda server, where each filename starts with prefix_path. HDF5 supports single files, in which case the file name will be only the one provided. Each locale saves its chunk of the array to its corresponding file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files (must not already exist)

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, attempt to create a new dataset in existing files.

  • compression (str (Optional)) – (None | “snappy” | “gzip” | “brotli” | “zstd” | “lz4”) Sets the compression type used with Parquet files

  • file_format (str {'HDF5', 'Parquet'}) – By default, saved files will be written to the HDF5 file format. If ‘Parquet’, the files will be written to the Parquet file format. This is case insensitive.

  • file_type (str ("single" | "distribute")) – Default: “distribute”. When set to single, the dataset is written to a single file. When distribute, the dataset is written to a file per locale. This is only supported by HDF5 files and will have no impact on Parquet files.

Return type:

string message indicating result of save operation

Raises:
  • RuntimeError – Raised if a server-side error is thrown saving the pdarray

  • ValueError – Raised if there is an error in parsing the prefix path pointing to file write location or if the mode parameter is neither truncate nor append

  • TypeError – Raised if any one of the prefix_path, dataset, or mode parameters is not a string

Notes

The prefix_path must be visible to the arkouda server and the user must have write permission. Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales. If any of the output files already exist and the mode is ‘truncate’, they will be overwritten. If the mode is ‘append’ and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result. Previously, all files saved in Parquet format were saved with a .parquet file extension; if an older file is not being found, try loading it as if you had saved it with the extension. Any file extension can be used; the file I/O does not rely on the extension to determine the file format.

Examples

>>> a = ak.arange(25)
>>> # Saving without an extension
>>> a.save('path/prefix', dataset='array')
Saves the array to numLocales HDF5 files with the name ``cwd/path/name_prefix_LOCALE####``
>>> # Saving with an extension (HDF5)
>>> a.save('path/prefix.h5', dataset='array')
Saves the array to numLocales HDF5 files with the name
``cwd/path/name_prefix_LOCALE####.h5`` where #### is replaced by each locale number
>>> # Saving with an extension (Parquet)
>>> a.save('path/prefix.parquet', dataset='array', file_format='Parquet')
Saves the array in numLocales Parquet files with the name
``cwd/path/name_prefix_LOCALE####.parquet`` where #### is replaced by each locale number
slice_bits(low, high) pdarray[source]

Returns a pdarray containing only bits from low to high of self.

This is zero indexed and inclusive on both ends, so slicing the bottom 64 bits is pda.slice_bits(0, 63)

Parameters:
  • low (int) – The lowest bit included in the slice (inclusive) zero indexed, so the first bit is 0

  • high (int) – The highest bit included in the slice (inclusive)

Returns:

A new pdarray containing the bits of self from low to high

Return type:

pdarray

Raises:

RuntimeError – Raised if there is a server-side error thrown

Examples

>>> p = ak.array([2**65 + (2**64 - 1)])
>>> bin(p[0])
'0b101111111111111111111111111111111111111111111111111111111111111111'
>>> bin(p.slice_bits(64, 65)[0])
'0b10'
std(ddof: arkouda.dtypes.int_scalars = 0) numpy.float64[source]

Compute the standard deviation. See arkouda.std for details.

Parameters:

ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating std

Returns:

The scalar standard deviation of the array

Return type:

np.float64

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown
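
Examples

A small sketch showing the effect of ddof; for the values 0..4 the population variance is 2, so the population standard deviation is sqrt(2) (assumes a connected client as ak):

>>> a = ak.arange(5)
>>> a.std()
1.4142135623730951
>>> a.std(ddof=1)
1.5811388300841898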

sum() arkouda.dtypes.numeric_and_bool_scalars[source]

Return the sum of all elements in the array.

to_csv(prefix_path: str, dataset: str = 'array', col_delim: str = ',', overwrite: bool = False)[source]

Write pdarray to CSV file(s). The file will contain a single column with the pdarray data. All CSV files written by Arkouda include a header denoting the data types of the columns.

Parameters:
  • prefix_path (str) – The filename prefix to be used for saving files. Files will have _LOCALE#### appended when they are written to disk.

  • dataset (str) – Column name to save the pdarray under. Defaults to “array”.

  • col_delim (str) – Defaults to “,”. Value to be used to separate columns within the file. Please be sure that the value used DOES NOT appear in your dataset.

  • overwrite (bool) – Defaults to False. If True, any existing files matching your provided prefix_path will be overwritten. If False, an error will be returned if existing files are found.

Return type:

str response message

Raises:
  • ValueError – Raised if all datasets are not present in all CSV files or if one or more of the specified files do not exist

  • RuntimeError – Raised if one or more of the specified files cannot be opened. If allow_errors is true, this may be raised if no values are returned from the server.

  • TypeError – Raised if we receive an unknown arkouda_type returned from the server

Notes

  • CSV format is not currently supported by load/load_all operations

  • The column delimiter is expected to be the same for column names and data

  • Be sure that column delimiters are not found within your data.

  • All CSV files must delimit rows using newline (`\n`) at this time.
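
Examples

A hypothetical usage sketch; ‘path/prefix’ stands in for a location writable by the arkouda server:

>>> a = ak.arange(10)
>>> a.to_csv('path/prefix', dataset='array')
Writes one CSV file per locale, named ``path/prefix_LOCALE####``, each holding a single ``array`` column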

to_cuda()[source]

Convert the array to a Numba DeviceND array, transferring array data from the arkouda server to Python via ndarray. If the array exceeds a builtin size limit, a RuntimeError is raised.

Returns:

A Numba ndarray with the same attributes and data as the pdarray; on GPU

Return type:

numba.DeviceNDArray

Raises:
  • ImportError – Raised if CUDA is not available

  • ModuleNotFoundError – Raised if Numba is either not installed or not enabled

  • RuntimeError – Raised if there is a server-side error thrown in the course of retrieving the pdarray.

Notes

The number of bytes in the array cannot exceed client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting client.maxTransferBytes to a larger value, but proceed with caution.

See also

array

Examples

>>> a = ak.arange(0, 5, 1)
>>> a.to_cuda()
array([0, 1, 2, 3, 4])
>>> type(a.to_cuda())
numpy.devicendarray
to_hdf(prefix_path: str, dataset: str = 'array', mode: str = 'truncate', file_type: str = 'distribute') str[source]

Save the pdarray to HDF5. The object can be saved to a collection of files or a single file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files (must not already exist)

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, attempt to create a new dataset in existing files.

  • file_type (str ("single" | "distribute")) – Default: “distribute”. When set to single, the dataset is written to a single file. When distribute, the dataset is written to a file per locale. This is only supported by HDF5 files and will have no impact on Parquet files.

Return type:

string message indicating result of save operation

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray

Notes

  • The prefix_path must be visible to the arkouda server and the user must have write permission.

  • Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales for file_type=’distribute’. Otherwise, the file name will be prefix_path.

  • If any of the output files already exist and the mode is ‘truncate’, they will be overwritten. If the mode is ‘append’ and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result.

  • Any file extension can be used. The file I/O does not rely on the extension to determine the file format.

Examples

>>> a = ak.arange(25)
>>> # Saving without an extension
>>> a.to_hdf('path/prefix', dataset='array')
Saves the array to numLocales HDF5 files with the name ``cwd/path/name_prefix_LOCALE####``
>>> # Saving with an extension (HDF5)
>>> a.to_hdf('path/prefix.h5', dataset='array')
Saves the array to numLocales HDF5 files with the name
``cwd/path/name_prefix_LOCALE####.h5`` where #### is replaced by each locale number
>>> # Saving to a single file
>>> a.to_hdf('path/prefix.hdf5', dataset='array', file_type='single')
Saves the array to a single HDF5 file on the root node.
``cwd/path/name_prefix.hdf5``
to_list() List[source]

Convert the array to a list, transferring array data from the Arkouda server to client-side Python. Note: if the pdarray size exceeds client.maxTransferBytes, a RuntimeError is raised.

Returns:

A list with the same data as the pdarray

Return type:

list

Raises:

RuntimeError – Raised if there is a server-side error thrown, if the pdarray size exceeds the built-in client.maxTransferBytes size limit, or if the bytes received does not match expected number of bytes

Notes

The number of bytes in the array cannot exceed client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting client.maxTransferBytes to a larger value, but proceed with caution.

See also

to_ndarray

Examples

>>> a = ak.arange(0, 5, 1)
>>> a.to_list()
[0, 1, 2, 3, 4]
>>> type(a.to_list())
list
to_ndarray() numpy.ndarray[source]

Convert the array to a np.ndarray, transferring array data from the Arkouda server to client-side Python. Note: if the pdarray size exceeds client.maxTransferBytes, a RuntimeError is raised.

Returns:

A numpy ndarray with the same attributes and data as the pdarray

Return type:

np.ndarray

Raises:

RuntimeError – Raised if there is a server-side error thrown, if the pdarray size exceeds the built-in client.maxTransferBytes size limit, or if the bytes received does not match expected number of bytes

Notes

The number of bytes in the array cannot exceed client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting client.maxTransferBytes to a larger value, but proceed with caution.

See also

array, to_list

Examples

>>> a = ak.arange(0, 5, 1)
>>> a.to_ndarray()
array([0, 1, 2, 3, 4])
>>> type(a.to_ndarray())
numpy.ndarray
to_parquet(prefix_path: str, dataset: str = 'array', mode: str = 'truncate', compression: str | None = None) str[source]

Save the pdarray to Parquet. The result is a collection of files, one file per locale of the arkouda server, where each filename starts with prefix_path. Each locale saves its chunk of the array to its corresponding file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files (must not already exist)

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, attempt to create a new dataset in existing files.

  • compression (str (Optional)) – (None | “snappy” | “gzip” | “brotli” | “zstd” | “lz4”) Sets the compression type used with Parquet files

Return type:

string message indicating result of save operation

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray

Notes

  • The prefix_path must be visible to the arkouda server and the user must have write permission.

  • Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales.

  • ‘append’ write mode is supported, but is not efficient.

  • If any of the output files already exist and the mode is ‘truncate’, they will be overwritten. If the mode is ‘append’ and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result.

  • Any file extension can be used. The file I/O does not rely on the extension to determine the file format.

Examples

>>> a = ak.arange(25)
>>> # Saving without an extension
>>> a.to_parquet('path/prefix', dataset='array')
Saves the array to numLocales Parquet files with the name ``cwd/path/name_prefix_LOCALE####``
>>> # Saving with an extension (Parquet)
>>> a.to_parquet('path/prefix.parquet', dataset='array')
Saves the array to numLocales Parquet files with the name
``cwd/path/name_prefix_LOCALE####.parquet`` where #### is replaced by each locale number
transfer(hostname: str, port: arkouda.dtypes.int_scalars)[source]

Sends a pdarray to a different Arkouda server

Parameters:
  • hostname (str) – The hostname where the Arkouda server intended to receive the pdarray is running.

  • port (int_scalars) – The port to send the array over. This needs to be an open port (i.e., not one that the Arkouda server is running on). The transfer will use numLocales ports in succession, i.e. ports in the range {port..(port+numLocales-1)} (e.g., when an Arkouda server with 4 locales is passed port 1234, Arkouda will use ports 1234, 1235, 1236, and 1237 to send the array data). This port must match the port passed to the call to ak.receive_array().

Return type:

A message indicating a complete transfer

Raises:
  • ValueError – Raised if the op is not within the pdarray.BinOps set

  • TypeError – Raised if other is not a pdarray or the pdarray.dtype is not a supported dtype
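
Examples

A hypothetical sketch; ‘other-host’ and port 1234 are placeholders, and a matching ak.receive_array() call must be issued on the destination server as described above:

>>> a = ak.arange(100)
>>> a.transfer('other-host', 1234)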

unregister() None[source]

Unregister a pdarray in the arkouda server which was previously registered using register() and/or attached to using attach().

Return type:

None

Raises:

RuntimeError – Raised if the server could not find the internal name/symbol to remove

Notes

Registered names/pdarrays in the server are immune to deletion until they are unregistered.

Examples

>>> a = zeros(100)
>>> a.register("my_zeros")
>>> # potentially disconnect from server and reconnect to server
>>> b = ak.pdarray.attach("my_zeros")
>>> # ...other work...
>>> b.unregister()
update_hdf(prefix_path: str, dataset: str = 'array', repack: bool = True)[source]

Overwrite the dataset with the name provided with this pdarray. If the dataset does not exist, it is added.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files

  • repack (bool) – Default: True. HDF5 does not release memory on delete. When True, the inaccessible data (that was overwritten) is removed. When False, the data remains, but is inaccessible. Setting to False will yield better performance, but will cause file sizes to expand.

Return type:

str - success message if successful

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray

Notes

  • If the file does not contain a File_Format attribute to indicate how it was saved, the file name is checked for _LOCALE#### to determine if it is distributed.

  • If the dataset provided does not exist, it will be added
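
Examples

A hypothetical sketch; ‘path/prefix’ stands in for files previously written with to_hdf:

>>> a = ak.arange(25)
>>> a.to_hdf('path/prefix', dataset='array')
>>> (a + 1).update_hdf('path/prefix', dataset='array')
Overwrites the existing ``array`` dataset with the incremented values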

value_counts()[source]

Count the occurrences of the unique values of self.

Returns:

  • unique_values (pdarray) – The unique values, sorted in ascending order

  • counts (pdarray, int64) – The number of times the corresponding unique value occurs

Examples

>>> ak.array([2, 0, 2, 4, 0, 0]).value_counts()
(array([0, 2, 4]), array([3, 2, 1]))
var(ddof: arkouda.dtypes.int_scalars = 0) numpy.float64[source]

Compute the variance. See arkouda.var for details.

Parameters:

ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating var

Returns:

The scalar variance of the array

Return type:

np.float64

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • ValueError – Raised if the ddof >= pdarray size

  • RuntimeError – Raised if there’s a server-side error thrown
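
Examples

A small sketch mirroring std above; the population variance of 0..4 is exactly 2 (assumes a connected client as ak):

>>> a = ak.arange(5)
>>> a.var()
2.0
>>> a.var(ddof=1)
2.5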

class arkouda.pdarray(name: str, mydtype: numpy.dtype | str, size: arkouda.dtypes.int_scalars, ndim: arkouda.dtypes.int_scalars, shape: Sequence[int], itemsize: arkouda.dtypes.int_scalars, max_bits: int | None = None)[source]

The basic arkouda array class. This class contains only the attributes of the array; the data resides on the arkouda server. When a server operation results in a new array, arkouda will create a pdarray instance that points to the array data on the server. As such, the user should not initialize pdarray instances directly.

name

The server-side identifier for the array

Type:

str

dtype

The element type of the array

Type:

dtype

size

The number of elements in the array

Type:

int_scalars

ndim

The rank of the array (currently only rank 1 arrays supported)

Type:

int_scalars

shape

A list or tuple containing the sizes of each dimension of the array

Type:

Sequence[int]

itemsize

The size in bytes of each element

Type:

int_scalars

property max_bits
property nbytes

The size of the pdarray in bytes.

Returns:

The size of the pdarray in bytes.

Return type:

int

BinOps
OpEqOps
objType = 'pdarray'
all() numpy.bool_[source]

Return True iff all elements of the array evaluate to True.

any() numpy.bool_[source]

Return True iff any element of the array evaluates to True.

argmax() numpy.int64 | numpy.uint64[source]

Return the index of the first occurrence of the array max value.

argmaxk(k: arkouda.dtypes.int_scalars) pdarray[source]

Finds the indices corresponding to the maximum “k” values.

Parameters:

k (int_scalars) – The desired count of maximum values to be returned by the output.

Returns:

Indices corresponding to the maximum k values, sorted

Return type:

pdarray, int

Raises:

TypeError – Raised if pda is not a pdarray

argmin() numpy.int64 | numpy.uint64[source]

Return the index of the first occurrence of the array min value

argmink(k: arkouda.dtypes.int_scalars) pdarray[source]

Finds the indices corresponding to the minimum “k” values.

Parameters:

k (int_scalars) – The desired count of minimum values to be returned by the output.

Returns:

Indices corresponding to the minimum k values from pda

Return type:

pdarray, int

Raises:

TypeError – Raised if pda is not a pdarray
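
Examples

An illustrative sketch; the returned indices correspond to the values 0, 1, and 2 in the input (array repr is illustrative):

>>> a = ak.array([10, 5, 1, 3, 7, 2, 9, 0])
>>> a.argmink(3)
array([7 2 5])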

astype(dtype) pdarray[source]

Cast values of pdarray to provided dtype

Parameters:

dtype (np.dtype or str) – Dtype to cast to

Returns:

An arkouda pdarray with values converted to the specified data type

Return type:

ak.pdarray

Notes

This is essentially shorthand for ak.cast(x, ‘<dtype>’) where x is a pdarray.
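
Examples

A minimal sketch checking the resulting dtype rather than the printed values, since float formatting varies by version:

>>> a = ak.arange(3)
>>> a.astype(ak.float64).dtype
dtype('float64')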

static attach(user_defined_name: str) pdarray[source]

Class method to return a pdarray attached to the registered name in the arkouda server which was registered using register().

Parameters:

user_defined_name (str) – user defined name which array was registered under

Returns:

pdarray which is bound to the corresponding server side component which was registered with user_defined_name

Return type:

pdarray

Raises:

TypeError – Raised if user_defined_name is not a str

Notes

Registered names/pdarrays in the server are immune to deletion until they are unregistered.

Examples

>>> a = zeros(100)
>>> a.register("my_zeros")
>>> # potentially disconnect from server and reconnect to server
>>> b = ak.pdarray.attach("my_zeros")
>>> # ...other work...
>>> b.unregister()
bigint_to_uint_arrays() List[pdarray][source]

Creates a list of uint pdarrays from a bigint pdarray. The first item in return will be the highest 64 bits of the bigint pdarray and the last item will be the lowest 64 bits.

Returns:

A list of uint pdarrays where: The first item in return will be the highest 64 bits of the bigint pdarray and the last item will be the lowest 64 bits.

Return type:

List[pdarrays]

Raises:

RuntimeError – Raised if there is a server-side error thrown

Examples

>>> a = ak.arange(2**64, 2**64 + 5)
>>> a
array(["18446744073709551616" "18446744073709551617" "18446744073709551618"
"18446744073709551619" "18446744073709551620"])
>>> a.bigint_to_uint_arrays()
[array([1 1 1 1 1]), array([0 1 2 3 4])]
clz() pdarray[source]

Count the number of leading zeros in each element. See ak.clz.

corr(y: pdarray) numpy.float64[source]

Compute the correlation between self and y using the Pearson correlation coefficient.

Parameters:

y (pdarray) – Other pdarray used to calculate correlation

Returns:

The scalar correlation of the two arrays

Return type:

np.float64

Raises:
  • TypeError – Raised if y is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown

cov(y: pdarray) numpy.float64[source]

Compute the covariance between self and y.

Parameters:

y (pdarray) – Other pdarray used to calculate covariance

Returns:

The scalar covariance of the two arrays

Return type:

np.float64

Raises:
  • TypeError – Raised if y is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown
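
Examples

A quick sketch for corr (cov is called the same way); since y is an exact linear function of x, the correlation coefficient is 1:

>>> x = ak.arange(10)
>>> y = 2 * x
>>> x.corr(y)
1.0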

ctz() pdarray[source]

Count the number of trailing zeros in each element. See ak.ctz.
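
Examples

An illustrative sketch on 64-bit integers, so leading-zero counts are relative to a 64-bit word (array repr is illustrative):

>>> a = ak.array([1, 2, 8])
>>> a.clz()
array([63 62 60])
>>> a.ctz()
array([0 1 3])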

fill(value: arkouda.dtypes.numeric_scalars) None[source]

Fill the array (in place) with a constant value.

Parameters:

value (numeric_scalars)

Raises:

TypeError – Raised if value is not an int, int64, float, or float64
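
Examples

A minimal sketch using an int64 array so the repr stays simple (assumes a connected client as ak):

>>> a = ak.zeros(5, dtype=ak.int64)
>>> a.fill(7)
>>> a
array([7 7 7 7 7])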

format_other(other) str[source]

Attempt to cast scalar other to the element dtype of this pdarray, and print the resulting value to a string (e.g. for sending to a server command). The user should not call this function directly.

Parameters:

other (object) – The scalar to be cast to the pdarray.dtype

Return type:

string representation of np.dtype corresponding to the other parameter

Raises:

TypeError – Raised if the other parameter cannot be converted to Numpy dtype

info() str[source]

Returns a JSON formatted string containing information about all components of self

Parameters:

None

Returns:

JSON string containing information about all components of self

Return type:

str

is_registered() numpy.bool_[source]

Return True iff the object is contained in the registry

Parameters:

None

Returns:

Indicates if the object is contained in the registry

Return type:

bool

Raises:

RuntimeError – Raised if there’s a server-side error thrown

Note

This will return True if the object is registered itself or as a component of another object

is_sorted() numpy.bool_[source]

Return True iff the array is monotonically non-decreasing.

Parameters:

None

Returns:

Indicates if the array is monotonically non-decreasing

Return type:

bool

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown

max() arkouda.dtypes.numpy_scalars[source]

Return the maximum value of the array.

maxk(k: arkouda.dtypes.int_scalars) pdarray[source]

Compute the maximum “k” values.

Parameters:

k (int_scalars) – The desired count of maximum values to be returned by the output.

Returns:

The maximum k values from pda

Return type:

pdarray, int

Raises:

TypeError – Raised if pda is not a pdarray

mean() numpy.float64[source]

Return the mean of the array.

min() arkouda.dtypes.numpy_scalars[source]

Return the minimum value of the array.

mink(k: arkouda.dtypes.int_scalars) pdarray[source]

Compute the minimum “k” values.

Parameters:

k (int_scalars) – The desired count of minimum values to be returned by the output.

Returns:

The minimum k values from pda

Return type:

pdarray, int

Raises:

TypeError – Raised if pda is not a pdarray

opeq(other, op)[source]
parity() pdarray[source]

Find the parity (XOR of all bits) in each element. See ak.parity.

popcount() pdarray[source]

Find the population (number of bits set) in each element. See ak.popcount.

pretty_print_info() None[source]

Prints information about all components of self in a human readable format

Parameters:

None

Return type:

None

prod() numpy.float64[source]

Return the product of all elements in the array. Return value is always a np.float64 or np.int64.

register(user_defined_name: str) pdarray[source]

Register this pdarray with a user defined name in the arkouda server so it can be attached to later using pdarray.attach(). This is an in-place operation; registering a pdarray more than once will update the name in the registry and remove the previously registered name. A name can only be registered to one pdarray at a time.

Parameters:

user_defined_name (str) – user defined name array is to be registered under

Returns:

The same pdarray which is now registered with the arkouda server and has an updated name. This is an in-place modification, the original is returned to support a fluid programming style. Please note you cannot register two different pdarrays with the same name.

Return type:

pdarray

Raises:
  • TypeError – Raised if user_defined_name is not a str

  • RegistrationError – If the server was unable to register the pdarray with the user_defined_name If the user is attempting to register more than one pdarray with the same name, the former should be unregistered first to free up the registration name.

Notes

Registered names/pdarrays in the server are immune to deletion until they are unregistered.

Examples

>>> a = zeros(100)
>>> a.register("my_zeros")
>>> # potentially disconnect from server and reconnect to server
>>> b = ak.pdarray.attach("my_zeros")
>>> # ...other work...
>>> b.unregister()
reshape(*shape, order='row_major')[source]

Gives a new shape to an array without changing its data.

Parameters:
  • shape (int, tuple of ints, or pdarray) – The new shape should be compatible with the original shape.

  • order (str {'row_major' | 'C' | 'column_major' | 'F'}) – Read the elements of the pdarray in this index order. By default, read the elements in row_major or C-like order, where the last index changes the fastest. If ‘column_major’ or ‘F’, read the elements in column_major or Fortran-like order, where the first index changes the fastest.

Returns:

An arrayview object with the data from the array but with the new shape

Return type:

ArrayView

rotl(other) pdarray[source]

Rotate bits left by <other>.

rotr(other) pdarray[source]

Rotate bits right by <other>.

save(prefix_path: str, dataset: str = 'array', mode: str = 'truncate', compression: str | None = None, file_format: str = 'HDF5', file_type: str = 'distribute') str[source]

DEPRECATED: Save the pdarray to HDF5 or Parquet. The result is a collection of files, one file per locale of the arkouda server, where each filename starts with prefix_path. HDF5 supports single files, in which case the file name will be only the one provided. Each locale saves its chunk of the array to its corresponding file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files (must not already exist)

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, attempt to create a new dataset in existing files.

  • compression (str (Optional)) – (None | “snappy” | “gzip” | “brotli” | “zstd” | “lz4”) Sets the compression type used with Parquet files

  • file_format (str {'HDF5', 'Parquet'}) – By default, saved files will be written to the HDF5 file format. If ‘Parquet’, the files will be written to the Parquet file format. This is case insensitive.

  • file_type (str ("single" | "distribute")) – Default: “distribute”. When set to single, the dataset is written to a single file. When distribute, the dataset is written to a file per locale. This is only supported by HDF5 files and will have no impact on Parquet files.

Return type:

string message indicating result of save operation

Raises:
  • RuntimeError – Raised if a server-side error is thrown saving the pdarray

  • ValueError – Raised if there is an error in parsing the prefix path pointing to file write location or if the mode parameter is neither truncate nor append

  • TypeError – Raised if any one of the prefix_path, dataset, or mode parameters is not a string

Notes

The prefix_path must be visible to the arkouda server and the user must have write permission. Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales. If any of the output files already exist and the mode is ‘truncate’, they will be overwritten. If the mode is ‘append’ and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result. Previously, all files saved in Parquet format were saved with a .parquet file extension; if an older file is not being found, try loading it as if you had saved it with the extension. Any file extension can be used; the file I/O does not rely on the extension to determine the file format.

Examples

>>> a = ak.arange(25)
>>> # Saving without an extension
>>> a.save('path/prefix', dataset='array')
Saves the array to numLocales HDF5 files with the name ``cwd/path/name_prefix_LOCALE####``
>>> # Saving with an extension (HDF5)
>>> a.save('path/prefix.h5', dataset='array')
Saves the array to numLocales HDF5 files with the name
``cwd/path/name_prefix_LOCALE####.h5`` where #### is replaced by each locale number
>>> # Saving with an extension (Parquet)
>>> a.save('path/prefix.parquet', dataset='array', file_format='Parquet')
Saves the array in numLocales Parquet files with the name
``cwd/path/name_prefix_LOCALE####.parquet`` where #### is replaced by each locale number
slice_bits(low, high) pdarray[source]

Returns a pdarray containing only bits from low to high of self.

This is zero indexed and inclusive on both ends, so slicing the bottom 64 bits is pda.slice_bits(0, 63).

Parameters:
  • low (int) – The lowest bit included in the slice (inclusive) zero indexed, so the first bit is 0

  • high (int) – The highest bit included in the slice (inclusive)

Returns:

A new pdarray containing the bits of self from low to high

Return type:

pdarray

Raises:

RuntimeError – Raised if there is a server-side error thrown

Examples

>>> p = ak.array([2**65 + (2**64 - 1)])
>>> bin(p[0])
'0b101111111111111111111111111111111111111111111111111111111111111111'
>>> bin(p.slice_bits(64, 65)[0])
'0b10'
std(ddof: arkouda.dtypes.int_scalars = 0) numpy.float64[source]

Compute the standard deviation. See arkouda.std for details.

Parameters:

ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating std

Returns:

The scalar standard deviation of the array

Return type:

np.float64

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown
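
Examples

A small hedged check (with ddof=0, the population standard deviation of 0..4 is sqrt(2)):

>>> a = ak.arange(5)
>>> a.std()
1.4142135623730951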

sum() arkouda.dtypes.numeric_and_bool_scalars[source]

Return the sum of all elements in the array.

to_csv(prefix_path: str, dataset: str = 'array', col_delim: str = ',', overwrite: bool = False)[source]

Write pdarray to CSV file(s). File will contain a single column with the pdarray data. All CSV Files written by Arkouda include a header denoting data types of the columns.

Parameters:
  • prefix_path (str) – The filename prefix to be used for saving files. Files will have _LOCALE#### appended when they are written to disk.

  • dataset (str) – Column name to save the pdarray under. Defaults to “array”.

  • col_delim (str) – Defaults to “,”. Value to be used to separate columns within the file. Please be sure that the value used DOES NOT appear in your dataset.

  • overwrite (bool) – Defaults to False. If True, any existing files matching your provided prefix_path will be overwritten. If False, an error will be returned if existing files are found.

Return type:

str response message

Raises:
  • ValueError – Raised if all datasets are not present in all parquet files or if one or more of the specified files do not exist

  • RuntimeError – Raised if one or more of the specified files cannot be opened. If allow_errors is true this may be raised if no values are returned from the server.

  • TypeError – Raised if we receive an unknown arkouda_type returned from the server

Notes

  • CSV format is not currently supported by load/load_all operations

  • The column delimiter is expected to be the same for column names and data

  • Be sure that column delimiters are not found within your data.

  • All CSV files must delimit rows using newline (\n) at this time.
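
Examples

A brief usage sketch (the prefix path is hypothetical and must be writable by the arkouda server):

>>> a = ak.arange(5)
>>> # writes one CSV file per locale: path/prefix_LOCALE####
>>> a.to_csv('path/prefix', dataset='array')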

to_cuda()[source]

Convert the array to a Numba DeviceNDArray, transferring array data from the arkouda server to Python via ndarray. If the array exceeds a builtin size limit, a RuntimeError is raised.

Returns:

A Numba ndarray with the same attributes and data as the pdarray; on GPU

Return type:

numba.DeviceNDArray

Raises:
  • ImportError – Raised if CUDA is not available

  • ModuleNotFoundError – Raised if Numba is either not installed or not enabled

  • RuntimeError – Raised if there is a server-side error thrown in the course of retrieving the pdarray.

Notes

The number of bytes in the array cannot exceed client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting client.maxTransferBytes to a larger value, but proceed with caution.

See also

array

Examples

>>> a = ak.arange(0, 5, 1)
>>> a.to_cuda()
array([0, 1, 2, 3, 4])
>>> type(a.to_cuda())
numpy.devicendarray
to_hdf(prefix_path: str, dataset: str = 'array', mode: str = 'truncate', file_type: str = 'distribute') str[source]

Save the pdarray to HDF5. The object can be saved to a collection of files or a single file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files (must not already exist)

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, attempt to create new dataset in existing files.

  • file_type (str ("single" | "distribute")) – Default: “distribute”. When set to single, dataset is written to a single file. When distribute, dataset is written on a file per locale. This is only supported by HDF5 files and will have no impact on Parquet files.

Return type:

string message indicating result of save operation

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray

Notes

  • The prefix_path must be visible to the arkouda server and the user must have write permission.

  • Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales for file_type=’distribute’. Otherwise, the file name will be prefix_path.

  • If any of the output files already exist and the mode is ‘truncate’, they will be overwritten. If the mode is ‘append’ and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result.

  • Any file extension can be used. The file I/O does not rely on the extension to determine the file format.

Examples

>>> a = ak.arange(25)
>>> # Saving without an extension
>>> a.to_hdf('path/prefix', dataset='array')
Saves the array to numLocales HDF5 files with the name ``cwd/path/name_prefix_LOCALE####``
>>> # Saving with an extension (HDF5)
>>> a.to_hdf('path/prefix.h5', dataset='array')
Saves the array to numLocales HDF5 files with the name
``cwd/path/name_prefix_LOCALE####.h5`` where #### is replaced by each locale number
>>> # Saving to a single file
>>> a.to_hdf('path/prefix.hdf5', dataset='array', file_type='single')
Saves the array to a single HDF5 file on the root node.
``cwd/path/name_prefix.hdf5``
to_list() List[source]

Convert the array to a list, transferring array data from the Arkouda server to client-side Python. Note: if the pdarray size exceeds client.maxTransferBytes, a RuntimeError is raised.

Returns:

A list with the same data as the pdarray

Return type:

list

Raises:

RuntimeError – Raised if there is a server-side error thrown, if the pdarray size exceeds the built-in client.maxTransferBytes size limit, or if the bytes received do not match the expected number of bytes

Notes

The number of bytes in the array cannot exceed client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting client.maxTransferBytes to a larger value, but proceed with caution.

See also

to_ndarray

Examples

>>> a = ak.arange(0, 5, 1)
>>> a.to_list()
[0, 1, 2, 3, 4]
>>> type(a.to_list())
list
to_ndarray() numpy.ndarray[source]

Convert the array to a np.ndarray, transferring array data from the Arkouda server to client-side Python. Note: if the pdarray size exceeds client.maxTransferBytes, a RuntimeError is raised.

Returns:

A numpy ndarray with the same attributes and data as the pdarray

Return type:

np.ndarray

Raises:

RuntimeError – Raised if there is a server-side error thrown, if the pdarray size exceeds the built-in client.maxTransferBytes size limit, or if the bytes received do not match the expected number of bytes

Notes

The number of bytes in the array cannot exceed client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting client.maxTransferBytes to a larger value, but proceed with caution.

See also

array, to_list

Examples

>>> a = ak.arange(0, 5, 1)
>>> a.to_ndarray()
array([0, 1, 2, 3, 4])
>>> type(a.to_ndarray())
numpy.ndarray
to_parquet(prefix_path: str, dataset: str = 'array', mode: str = 'truncate', compression: str | None = None) str[source]

Save the pdarray to Parquet. The result is a collection of files, one file per locale of the arkouda server, where each filename starts with prefix_path. Each locale saves its chunk of the array to its corresponding file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files (must not already exist)

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, attempt to create new dataset in existing files.

  • compression (str (Optional)) – (None | “snappy” | “gzip” | “brotli” | “zstd” | “lz4”) Sets the compression type used with Parquet files

Return type:

string message indicating result of save operation

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray

Notes

  • The prefix_path must be visible to the arkouda server and the user must have write permission.

  • Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales.

  • ‘append’ write mode is supported, but is not efficient.

  • If any of the output files already exist and the mode is ‘truncate’, they will be overwritten. If the mode is ‘append’ and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result.

  • Any file extension can be used. The file I/O does not rely on the extension to determine the file format.

Examples

>>> a = ak.arange(25)
>>> # Saving without an extension
>>> a.to_parquet('path/prefix', dataset='array')
Saves the array to numLocales Parquet files with the name ``cwd/path/name_prefix_LOCALE####``
>>> # Saving with an extension (Parquet)
>>> a.to_parquet('path/prefix.parquet', dataset='array')
Saves the array to numLocales Parquet files with the name
``cwd/path/name_prefix_LOCALE####.parquet`` where #### is replaced by each locale number
transfer(hostname: str, port: arkouda.dtypes.int_scalars)[source]

Sends a pdarray to a different Arkouda server

Parameters:
  • hostname (str) – The hostname where the Arkouda server intended to receive the pdarray is running.

  • port (int_scalars) – The port to send the array over. This needs to be an open port (i.e., not one that the Arkouda server is running on). This will open numLocales ports in succession, so it will use ports in the range {port..(port+numLocales)} (e.g., if an Arkouda server of 4 nodes is passed port 1234, Arkouda will use ports 1234, 1235, 1236, and 1237 to send the array data). This port must match the port passed to the call to ak.receive_array().

Return type:

A message indicating a complete transfer

Raises:
  • ValueError – Raised if the op is not within the pdarray.BinOps set

  • TypeError – Raised if other is not a pdarray or the pdarray.dtype is not a supported dtype
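
Examples

A hedged sketch (hostname and port are hypothetical; a second Arkouda server must be ready to call ak.receive_array with the matching port):

>>> a = ak.arange(100)
>>> # opens ports {1234..(1234+numLocales)} to ship the data
>>> a.transfer('other-server-host', 1234)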

unregister() None[source]

Unregister a pdarray in the arkouda server which was previously registered using register() and/or attached to using attach().

Return type:

None

Raises:

RuntimeError – Raised if the server could not find the internal name/symbol to remove

Notes

Registered names/pdarrays in the server are immune to deletion until they are unregistered.

Examples

>>> a = zeros(100)
>>> a.register("my_zeros")
>>> # potentially disconnect from server and reconnect to server
>>> b = ak.pdarray.attach("my_zeros")
>>> # ...other work...
>>> b.unregister()
update_hdf(prefix_path: str, dataset: str = 'array', repack: bool = True)[source]

Overwrite the dataset with the name provided with this pdarray. If the dataset does not exist, it is added.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files

  • repack (bool) – Default: True HDF5 does not release memory on delete. When True, the inaccessible data (that was overwritten) is removed. When False, the data remains, but is inaccessible. Setting to false will yield better performance, but will cause file sizes to expand.

Return type:

str - success message if successful

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray

Notes

  • If the file does not contain a File_Format attribute to indicate how it was saved, the file name is checked for _LOCALE#### to determine if it is distributed.

  • If the dataset provided does not exist, it will be added
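
Examples

A minimal sketch (the prefix path is hypothetical; the file is assumed to have been created by a prior to_hdf call):

>>> a = ak.arange(25)
>>> a.to_hdf('path/prefix', dataset='array')
>>> # replace the stored values with new data under the same dataset name
>>> (a * 2).update_hdf('path/prefix', dataset='array')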

value_counts()[source]

Count the occurrences of the unique values of self.

Returns:

  • unique_values (pdarray) – The unique values, sorted in ascending order

  • counts (pdarray, int64) – The number of times the corresponding unique value occurs

Examples

>>> ak.array([2, 0, 2, 4, 0, 0]).value_counts()
(array([0, 2, 4]), array([3, 2, 1]))
var(ddof: arkouda.dtypes.int_scalars = 0) numpy.float64[source]

Compute the variance. See arkouda.var for details.

Parameters:

ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating var

Returns:

The scalar variance of the array

Return type:

np.float64

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • ValueError – Raised if the ddof >= pdarray size

  • RuntimeError – Raised if there’s a server-side error thrown
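
Examples

A small hedged check (with ddof=0, the population variance of 0..4 is 2):

>>> a = ak.arange(5)
>>> a.var()
2.0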

class arkouda.pdarray(name: str, mydtype: numpy.dtype | str, size: arkouda.dtypes.int_scalars, ndim: arkouda.dtypes.int_scalars, shape: Sequence[int], itemsize: arkouda.dtypes.int_scalars, max_bits: int | None = None)[source]

The basic arkouda array class. This class contains only the attributes of the array; the data resides on the arkouda server. When a server operation results in a new array, arkouda will create a pdarray instance that points to the array data on the server. As such, the user should not initialize pdarray instances directly.

name

The server-side identifier for the array

Type:

str

dtype

The element type of the array

Type:

dtype

size

The number of elements in the array

Type:

int_scalars

ndim

The rank of the array (currently only rank 1 arrays supported)

Type:

int_scalars

shape

A list or tuple containing the sizes of each dimension of the array

Type:

Sequence[int]

itemsize

The size in bytes of each element

Type:

int_scalars

property max_bits
property nbytes

The size of the pdarray in bytes.

Returns:

The size of the pdarray in bytes.

Return type:

int

BinOps
OpEqOps
objType = 'pdarray'
all() numpy.bool_[source]

Return True iff all elements of the array evaluate to True.

any() numpy.bool_[source]

Return True iff any element of the array evaluates to True.

argmax() numpy.int64 | numpy.uint64[source]

Return the index of the first occurrence of the array max value.

argmaxk(k: arkouda.dtypes.int_scalars) pdarray[source]

Finds the indices corresponding to the maximum “k” values.

Parameters:

k (int_scalars) – The desired count of maximum values to be returned by the output.

Returns:

Indices corresponding to the maximum k values, sorted

Return type:

pdarray, int

Raises:

TypeError – Raised if pda is not a pdarray
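
Examples

A short sketch (indices of the k largest values, ordered by ascending value):

>>> a = ak.array([10, 5, 1, 3, 7, 2, 9, 0])
>>> # the three largest values are 7, 9, 10 at indices 4, 6, 0
>>> a.argmaxk(3)
array([4, 6, 0])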

argmin() numpy.int64 | numpy.uint64[source]

Return the index of the first occurrence of the array min value.

argmink(k: arkouda.dtypes.int_scalars) pdarray[source]

Finds the indices corresponding to the minimum “k” values.

Parameters:

k (int_scalars) – The desired count of minimum values to be returned by the output.

Returns:

Indices corresponding to the minimum k values from pda

Return type:

pdarray, int

Raises:

TypeError – Raised if pda is not a pdarray

astype(dtype) pdarray[source]

Cast values of pdarray to provided dtype

Parameters:

dtype (np.dtype or str) – Dtype to cast to

Returns:

An arkouda pdarray with values converted to the specified data type

Return type:

ak.pdarray

Notes

This is essentially shorthand for ak.cast(x, ‘<dtype>’) where x is a pdarray.
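
Examples

A minimal sketch of the cast (equivalent to calling ak.cast):

>>> a = ak.arange(3)
>>> # convert int64 values to float64
>>> b = a.astype(ak.float64)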

static attach(user_defined_name: str) pdarray[source]

Class method to return a pdarray attached to the registered name in the arkouda server which was registered using register().

Parameters:

user_defined_name (str) – user defined name which array was registered under

Returns:

pdarray which is bound to the corresponding server side component which was registered with user_defined_name

Return type:

pdarray

Raises:

TypeError – Raised if user_defined_name is not a str

Notes

Registered names/pdarrays in the server are immune to deletion until they are unregistered.

Examples

>>> a = zeros(100)
>>> a.register("my_zeros")
>>> # potentially disconnect from server and reconnect to server
>>> b = ak.pdarray.attach("my_zeros")
>>> # ...other work...
>>> b.unregister()
bigint_to_uint_arrays() List[pdarray][source]

Creates a list of uint pdarrays from a bigint pdarray. The first item in return will be the highest 64 bits of the bigint pdarray and the last item will be the lowest 64 bits.

Returns:

A list of uint pdarrays where: The first item in return will be the highest 64 bits of the bigint pdarray and the last item will be the lowest 64 bits.

Return type:

List[pdarrays]

Raises:

RuntimeError – Raised if there is a server-side error thrown

Examples

>>> a = ak.arange(2**64, 2**64 + 5)
>>> a
array(["18446744073709551616" "18446744073709551617" "18446744073709551618"
"18446744073709551619" "18446744073709551620"])
>>> a.bigint_to_uint_arrays()
[array([1 1 1 1 1]), array([0 1 2 3 4])]
clz() pdarray[source]

Count the number of leading zeros in each element. See ak.clz.

corr(y: pdarray) numpy.float64[source]

Compute the correlation between self and y using the Pearson correlation coefficient.

Parameters:

y (pdarray) – Other pdarray used to calculate correlation

Returns:

The scalar correlation of the two arrays

Return type:

np.float64

Raises:
  • TypeError – Raised if y is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown
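
Examples

A hedged sanity check (an array is perfectly correlated with a scaled copy of itself, so the result is approximately 1.0):

>>> x = ak.arange(5)
>>> x.corr(2 * x)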

cov(y: pdarray) numpy.float64[source]

Compute the covariance between self and y.

Parameters:

y (pdarray) – Other pdarray used to calculate covariance

Returns:

The scalar covariance of the two arrays

Return type:

np.float64

Raises:
  • TypeError – Raised if y is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown

ctz() pdarray[source]

Count the number of trailing zeros in each element. See ak.ctz.

fill(value: arkouda.dtypes.numeric_scalars) None[source]

Fill the array (in place) with a constant value.

Parameters:

value (numeric_scalars)

Raises:

TypeError – Raised if value is not an int, int64, float, or float64
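
Examples

A minimal sketch of the in-place fill:

>>> a = ak.zeros(5)
>>> a.fill(7)
>>> # every element of a is now 7.0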

format_other(other) str[source]

Attempt to cast scalar other to the element dtype of this pdarray, and print the resulting value to a string (e.g. for sending to a server command). The user should not call this function directly.

Parameters:

other (object) – The scalar to be cast to the pdarray.dtype

Return type:

string representation of np.dtype corresponding to the other parameter

Raises:

TypeError – Raised if the other parameter cannot be converted to Numpy dtype

info() str[source]

Returns a JSON formatted string containing information about all components of self

Parameters:

None

Returns:

JSON string containing information about all components of self

Return type:

str

is_registered() numpy.bool_[source]

Return True iff the object is contained in the registry

Parameters:

None

Returns:

Indicates if the object is contained in the registry

Return type:

bool

Raises:

RuntimeError – Raised if there’s a server-side error thrown

Note

This will return True if the object is registered itself or as a component of another object

is_sorted() numpy.bool_[source]

Return True iff the array is monotonically non-decreasing.

Parameters:

None

Returns:

Indicates if the array is monotonically non-decreasing

Return type:

bool

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown

max() arkouda.dtypes.numpy_scalars[source]

Return the maximum value of the array.

maxk(k: arkouda.dtypes.int_scalars) pdarray[source]

Compute the maximum “k” values.

Parameters:

k (int_scalars) – The desired count of maximum values to be returned by the output.

Returns:

The maximum k values from pda

Return type:

pdarray, int

Raises:

TypeError – Raised if pda is not a pdarray
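
Examples

A short sketch (the k largest values, in ascending order):

>>> a = ak.array([10, 5, 1, 3, 7, 2, 9, 0])
>>> a.maxk(3)
array([7, 9, 10])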

mean() numpy.float64[source]

Return the mean of the array.

min() arkouda.dtypes.numpy_scalars[source]

Return the minimum value of the array.

mink(k: arkouda.dtypes.int_scalars) pdarray[source]

Compute the minimum “k” values.

Parameters:

k (int_scalars) – The desired count of minimum values to be returned by the output.

Returns:

The minimum k values from pda

Return type:

pdarray, int

Raises:

TypeError – Raised if pda is not a pdarray

opeq(other, op)[source]
parity() pdarray[source]

Find the parity (XOR of all bits) in each element. See ak.parity.

popcount() pdarray[source]

Find the population (number of bits set) in each element. See ak.popcount.

pretty_print_info() None[source]

Prints information about all components of self in a human readable format

Parameters:

None

Return type:

None

prod() numpy.float64[source]

Return the product of all elements in the array. Return value is always a np.float64 or np.int64.

register(user_defined_name: str) pdarray[source]

Register this pdarray with a user defined name in the arkouda server so it can be attached to later using pdarray.attach(). This is an in-place operation; registering a pdarray more than once will update the name in the registry and remove the previously registered name. A name can only be registered to one pdarray at a time.

Parameters:

user_defined_name (str) – user defined name array is to be registered under

Returns:

The same pdarray which is now registered with the arkouda server and has an updated name. This is an in-place modification, the original is returned to support a fluid programming style. Please note you cannot register two different pdarrays with the same name.

Return type:

pdarray

Raises:
  • TypeError – Raised if user_defined_name is not a str

  • RegistrationError – If the server was unable to register the pdarray with the user_defined_name If the user is attempting to register more than one pdarray with the same name, the former should be unregistered first to free up the registration name.

Notes

Registered names/pdarrays in the server are immune to deletion until they are unregistered.

Examples

>>> a = zeros(100)
>>> a.register("my_zeros")
>>> # potentially disconnect from server and reconnect to server
>>> b = ak.pdarray.attach("my_zeros")
>>> # ...other work...
>>> b.unregister()
reshape(*shape, order='row_major')[source]

Gives a new shape to an array without changing its data.

Parameters:
  • shape (int, tuple of ints, or pdarray) – The new shape should be compatible with the original shape.

  • order (str {'row_major' | 'C' | 'column_major' | 'F'}) – Read the elements of the pdarray in this index order. By default, read the elements in row_major or C-like order, where the last index changes the fastest. If ‘column_major’ or ‘F’, read the elements in column_major or Fortran-like order, where the first index changes the fastest.

Returns:

An ArrayView object with the data from the array but with the new shape

Return type:

ArrayView

rotl(other) pdarray[source]

Rotate bits left by <other>.

rotr(other) pdarray[source]

Rotate bits right by <other>.

save(prefix_path: str, dataset: str = 'array', mode: str = 'truncate', compression: str | None = None, file_format: str = 'HDF5', file_type: str = 'distribute') str[source]

DEPRECATED Save the pdarray to HDF5 or Parquet. The result is a collection of files, one file per locale of the arkouda server, where each filename starts with prefix_path. HDF5 supports single files, in which case the file name will only be that provided. Each locale saves its chunk of the array to its corresponding file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files (must not already exist)

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, attempt to create new dataset in existing files.

  • compression (str (Optional)) – (None | “snappy” | “gzip” | “brotli” | “zstd” | “lz4”) Sets the compression type used with Parquet files

  • file_format (str {'HDF5', 'Parquet'}) – By default, saved files will be written to the HDF5 file format. If ‘Parquet’, the files will be written to the Parquet file format. This is case insensitive.

  • file_type (str ("single" | "distribute")) – Default: “distribute”. When set to single, dataset is written to a single file. When distribute, dataset is written on a file per locale. This is only supported by HDF5 files and will have no impact on Parquet files.

Return type:

string message indicating result of save operation

Raises:
  • RuntimeError – Raised if a server-side error is thrown saving the pdarray

  • ValueError – Raised if there is an error in parsing the prefix path pointing to file write location or if the mode parameter is neither truncate nor append

  • TypeError – Raised if any one of the prefix_path, dataset, or mode parameters is not a string

Notes

The prefix_path must be visible to the arkouda server and the user must have write permission. Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales. If any of the output files already exist and the mode is ‘truncate’, they will be overwritten. If the mode is ‘append’ and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result. Previously all files saved in Parquet format were saved with a .parquet file extension. This will require you to use load as if you saved the file with the extension. Try this if an older file is not being found. Any file extension can be used. The file I/O does not rely on the extension to determine the file format.

Examples

>>> a = ak.arange(25)
>>> # Saving without an extension
>>> a.save('path/prefix', dataset='array')
Saves the array to numLocales HDF5 files with the name ``cwd/path/name_prefix_LOCALE####``
>>> # Saving with an extension (HDF5)
>>> a.save('path/prefix.h5', dataset='array')
Saves the array to numLocales HDF5 files with the name
``cwd/path/name_prefix_LOCALE####.h5`` where #### is replaced by each locale number
>>> # Saving with an extension (Parquet)
>>> a.save('path/prefix.parquet', dataset='array', file_format='Parquet')
Saves the array in numLocales Parquet files with the name
``cwd/path/name_prefix_LOCALE####.parquet`` where #### is replaced by each locale number
slice_bits(low, high) pdarray[source]

Returns a pdarray containing only bits from low to high of self.

This is zero indexed and inclusive on both ends, so slicing the bottom 64 bits is pda.slice_bits(0, 63).

Parameters:
  • low (int) – The lowest bit included in the slice (inclusive) zero indexed, so the first bit is 0

  • high (int) – The highest bit included in the slice (inclusive)

Returns:

A new pdarray containing the bits of self from low to high

Return type:

pdarray

Raises:

RuntimeError – Raised if there is a server-side error thrown

Examples

>>> p = ak.array([2**65 + (2**64 - 1)])
>>> bin(p[0])
'0b101111111111111111111111111111111111111111111111111111111111111111'
>>> bin(p.slice_bits(64, 65)[0])
'0b10'
std(ddof: arkouda.dtypes.int_scalars = 0) numpy.float64[source]

Compute the standard deviation. See arkouda.std for details.

Parameters:

ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating std

Returns:

The scalar standard deviation of the array

Return type:

np.float64

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown

sum() arkouda.dtypes.numeric_and_bool_scalars[source]

Return the sum of all elements in the array.

to_csv(prefix_path: str, dataset: str = 'array', col_delim: str = ',', overwrite: bool = False)[source]

Write pdarray to CSV file(s). File will contain a single column with the pdarray data. All CSV Files written by Arkouda include a header denoting data types of the columns.

Parameters:
  • prefix_path (str) – The filename prefix to be used for saving files. Files will have _LOCALE#### appended when they are written to disk.

  • dataset (str) – Column name to save the pdarray under. Defaults to “array”.

  • col_delim (str) – Defaults to “,”. Value to be used to separate columns within the file. Please be sure that the value used DOES NOT appear in your dataset.

  • overwrite (bool) – Defaults to False. If True, any existing files matching your provided prefix_path will be overwritten. If False, an error will be returned if existing files are found.

Return type:

str response message

Raises:
  • ValueError – Raised if all datasets are not present in all parquet files or if one or more of the specified files do not exist

  • RuntimeError – Raised if one or more of the specified files cannot be opened. If allow_errors is true this may be raised if no values are returned from the server.

  • TypeError – Raised if we receive an unknown arkouda_type returned from the server

Notes

  • CSV format is not currently supported by load/load_all operations

  • The column delimiter is expected to be the same for column names and data

  • Be sure that column delimiters are not found within your data.

  • All CSV files must delimit rows using newline (\n) at this time.

to_cuda()[source]

Convert the array to a Numba DeviceNDArray, transferring array data from the arkouda server to Python via ndarray. If the array exceeds a builtin size limit, a RuntimeError is raised.

Returns:

A Numba ndarray with the same attributes and data as the pdarray; on GPU

Return type:

numba.DeviceNDArray

Raises:
  • ImportError – Raised if CUDA is not available

  • ModuleNotFoundError – Raised if Numba is either not installed or not enabled

  • RuntimeError – Raised if there is a server-side error thrown in the course of retrieving the pdarray.

Notes

The number of bytes in the array cannot exceed client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting client.maxTransferBytes to a larger value, but proceed with caution.

See also

array

Examples

>>> a = ak.arange(0, 5, 1)
>>> a.to_cuda()
array([0, 1, 2, 3, 4])
>>> type(a.to_cuda())
numpy.devicendarray
to_hdf(prefix_path: str, dataset: str = 'array', mode: str = 'truncate', file_type: str = 'distribute') str[source]

Save the pdarray to HDF5. The object can be saved to a collection of files or a single file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files (must not already exist)

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, attempt to create new dataset in existing files.

  • file_type (str ("single" | "distribute")) – Default: “distribute”. When set to single, dataset is written to a single file. When distribute, dataset is written on a file per locale. This is only supported by HDF5 files and will have no impact on Parquet files.

Return type:

string message indicating result of save operation

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray

Notes

  • The prefix_path must be visible to the arkouda server and the user must have write permission.

  • Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales for file_type=’distribute’. Otherwise, the file name will be prefix_path.

  • If any of the output files already exist and the mode is ‘truncate’, they will be overwritten. If the mode is ‘append’ and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result.

  • Any file extension can be used. The file I/O does not rely on the extension to determine the file format.

Examples

>>> a = ak.arange(25)
>>> # Saving without an extension
>>> a.to_hdf('path/prefix', dataset='array')
Saves the array to numLocales HDF5 files with the name ``cwd/path/name_prefix_LOCALE####``
>>> # Saving with an extension (HDF5)
>>> a.to_hdf('path/prefix.h5', dataset='array')
Saves the array to numLocales HDF5 files with the name
``cwd/path/name_prefix_LOCALE####.h5`` where #### is replaced by each locale number
>>> # Saving to a single file
>>> a.to_hdf('path/prefix.hdf5', dataset='array', file_type='single')
Saves the array to a single HDF5 file on the root node.
``cwd/path/name_prefix.hdf5``
to_list() List[source]

Convert the array to a list, transferring array data from the Arkouda server to client-side Python. Note: if the pdarray size exceeds client.maxTransferBytes, a RuntimeError is raised.

Returns:

A list with the same data as the pdarray

Return type:

list

Raises:

RuntimeError – Raised if there is a server-side error thrown, if the pdarray size exceeds the built-in client.maxTransferBytes size limit, or if the bytes received do not match the expected number of bytes

Notes

The number of bytes in the array cannot exceed client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting client.maxTransferBytes to a larger value, but proceed with caution.

See also

to_ndarray

Examples

>>> a = ak.arange(0, 5, 1)
>>> a.to_list()
[0, 1, 2, 3, 4]
>>> type(a.to_list())
list
to_ndarray() numpy.ndarray[source]

Convert the array to a np.ndarray, transferring array data from the Arkouda server to client-side Python. Note: if the pdarray size exceeds client.maxTransferBytes, a RuntimeError is raised.

Returns:

A numpy ndarray with the same attributes and data as the pdarray

Return type:

np.ndarray

Raises:

RuntimeError – Raised if there is a server-side error thrown, if the pdarray size exceeds the built-in client.maxTransferBytes size limit, or if the bytes received do not match the expected number of bytes

Notes

The number of bytes in the array cannot exceed client.maxTransferBytes, otherwise a RuntimeError will be raised. This is to protect the user from overflowing the memory of the system on which the Python client is running, under the assumption that the server is running on a distributed system with much more memory than the client. The user may override this limit by setting client.maxTransferBytes to a larger value, but proceed with caution.

See also

array, to_list

Examples

>>> a = ak.arange(0, 5, 1)
>>> a.to_ndarray()
array([0, 1, 2, 3, 4])
>>> type(a.to_ndarray())
numpy.ndarray
to_parquet(prefix_path: str, dataset: str = 'array', mode: str = 'truncate', compression: str | None = None) str[source]

Save the pdarray to Parquet. The result is a collection of files, one file per locale of the arkouda server, where each filename starts with prefix_path. Each locale saves its chunk of the array to its corresponding file.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files (must not already exist)

  • mode (str {'truncate' | 'append'}) – By default, truncate (overwrite) output files, if they exist. If ‘append’, attempt to create new dataset in existing files.

  • compression (str (Optional)) – (None | “snappy” | “gzip” | “brotli” | “zstd” | “lz4”) Sets the compression type used with Parquet files

Return type:

string message indicating result of save operation

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray

Notes

  • The prefix_path must be visible to the arkouda server and the user must have write permission.

  • Output files have names of the form <prefix_path>_LOCALE<i>, where <i> ranges from 0 to numLocales.

  • ‘append’ write mode is supported, but is not efficient.

  • If any of the output files already exist and the mode is ‘truncate’, they will be overwritten. If the mode is ‘append’ and the number of output files is less than the number of locales or a dataset with the same name already exists, a RuntimeError will result.

  • Any file extension can be used. The file I/O does not rely on the extension to determine the file format.

Examples

>>> a = ak.arange(25)
>>> # Saving without an extension
>>> a.to_parquet('path/prefix', dataset='array')
Saves the array to numLocales Parquet files with the name ``cwd/path/name_prefix_LOCALE####``
>>> # Saving with an extension (Parquet)
>>> a.to_parquet('path/prefix.parquet', dataset='array')
Saves the array to numLocales Parquet files with the name
``cwd/path/name_prefix_LOCALE####.parquet`` where #### is replaced by each locale number
transfer(hostname: str, port: arkouda.dtypes.int_scalars)[source]

Sends a pdarray to a different Arkouda server

Parameters:
  • hostname (str) – The hostname where the Arkouda server intended to receive the pdarray is running.

  • port (int_scalars) – The port to send the array over. This needs to be an open port (i.e., not one that the Arkouda server is running on). This will open numLocales ports in succession, so it will use ports in the range {port..(port+numLocales)} (e.g., if an Arkouda server of 4 nodes is passed port 1234, Arkouda will use ports 1234, 1235, 1236, and 1237 to send the array data). This port must match the port passed to the call to ak.receive_array().

Return type:

A message indicating a complete transfer

Raises:
  • ValueError – Raised if the op is not within the pdarray.BinOps set

  • TypeError – Raised if other is not a pdarray or the pdarray.dtype is not a supported dtype

unregister() None[source]

Unregister a pdarray in the arkouda server which was previously registered using register() and/or attached to using attach().

Return type:

None

Raises:

RuntimeError – Raised if the server could not find the internal name/symbol to remove

Notes

Registered names/pdarrays in the server are immune to deletion until they are unregistered.

Examples

>>> a = zeros(100)
>>> a.register("my_zeros")
>>> # potentially disconnect from server and reconnect to server
>>> b = ak.pdarray.attach("my_zeros")
>>> # ...other work...
>>> b.unregister()
update_hdf(prefix_path: str, dataset: str = 'array', repack: bool = True)[source]

Overwrite the dataset with the name provided with this pdarray. If the dataset does not exist, it is added.

Parameters:
  • prefix_path (str) – Directory and filename prefix that all output files share

  • dataset (str) – Name of the dataset to create in files

  • repack (bool) – Default: True HDF5 does not release memory on delete. When True, the inaccessible data (that was overwritten) is removed. When False, the data remains, but is inaccessible. Setting to false will yield better performance, but will cause file sizes to expand.

Return type:

str - success message if successful

Raises:

RuntimeError – Raised if a server-side error is thrown saving the pdarray

Notes

  • If the file does not contain a File_Format attribute to indicate how it was saved, the file name is checked for _LOCALE#### to determine if it is distributed.

  • If the dataset provided does not exist, it will be added

value_counts()[source]

Count the occurrences of the unique values of self.

Returns:

  • unique_values (pdarray) – The unique values, sorted in ascending order

  • counts (pdarray, int64) – The number of times the corresponding unique value occurs

Examples

>>> ak.array([2, 0, 2, 4, 0, 0]).value_counts()
(array([0, 2, 4]), array([3, 2, 1]))
var(ddof: arkouda.dtypes.int_scalars = 0) numpy.float64[source]

Compute the variance. See arkouda.var for details.

Parameters:

ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating var

Returns:

The scalar variance of the array

Return type:

np.float64

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • ValueError – Raised if the ddof >= pdarray size

  • RuntimeError – Raised if there’s a server-side error thrown

arkouda.plot_dist(b, h, log=True, xlabel=None, newfig=True)[source]

Plot the distribution and cumulative distribution of histogram data

Parameters:
  • b (np.ndarray) – Bin edges

  • h (np.ndarray) – Histogram data

  • log (bool) – use log to scale y

  • xlabel (str) – Label for the x axis of the graph

  • newfig (bool) – Generate a new figure or not

Notes

This function does not return or display the plot. A user must have matplotlib imported in addition to arkouda to display plots. This could be updated to return the object or have a flag to show the resulting plots. See the examples below.

Examples

>>> import arkouda as ak
>>> from matplotlib import pyplot as plt
>>> b, h = ak.histogram(ak.arange(10), 3)
>>> ak.plot_dist(b, h.to_ndarray())
>>> # to show the plot
>>> plt.show()
arkouda.popcount(pda: pdarray) pdarray[source]

Find the population (number of bits set) for each integer in an array.

Parameters:

pda (pdarray, int64, uint64, bigint) – Input array (must be integral).

Returns:

population – The number of bits set (1) in each element

Return type:

pdarray

Raises:

TypeError – If input array is not int64, uint64, or bigint

Examples

>>> A = ak.arange(10)
>>> ak.popcount(A)
array([0, 1, 1, 2, 1, 2, 2, 3, 1, 2])
arkouda.power(pda: pdarray, pwr: int | float | pdarray, where: bool | pdarray = True) pdarray[source]

Raises an array to a power. If where is given, the operation will only take place in the positions where the where condition is True.

Note: Our implementation of the where argument deviates from numpy. The difference in behavior occurs at positions where the where argument contains a False. In numpy, these positions will have uninitialized memory (which can contain anything and will vary between runs). We have chosen to instead return the value of the original array in these positions.

Parameters:
  • pda (pdarray) – A pdarray of values that will be raised to a power (pwr)

  • pwr (integer, float, or pdarray) – The power(s) that pda is raised to

  • where (Boolean or pdarray) – This condition is broadcast over the input. At locations where the condition is True, the corresponding value will be raised to the respective power. Elsewhere, it will retain its original value. Default set to True.

Returns:

A pdarray of values raised to a power, under the boolean where condition.

Return type:

pdarray

Examples

>>> a = ak.arange(5)
>>> ak.power(a, 3)
array([0, 1, 8, 27, 64])
>>> ak.power(a, 3, a % 2 == 0)
array([0, 1, 8, 3, 64])
arkouda.power_divergence(f_obs, f_exp=None, ddof=0, lambda_=None)[source]

Computes the power divergence statistic and p-value.

Parameters:
  • f_obs (pdarray) – The observed frequency.

  • f_exp (pdarray, default = None) – The expected frequency.

  • ddof (int) – The delta degrees of freedom.

  • lambda_ (str, default = "pearson") –

    The power in the Cressie-Read power divergence statistic. Allowed values: “pearson”, “log-likelihood”, “freeman-tukey”, “mod-log-likelihood”, “neyman”, “cressie-read”

    Powers correspond as follows:

    ”pearson”: 1

    ”log-likelihood”: 0

    ”freeman-tukey”: -0.5

    ”mod-log-likelihood”: -1

    ”neyman”: -2

    ”cressie-read”: 2 / 3

Return type:

arkouda.akstats.Power_divergenceResult

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> from arkouda.akstats import power_divergence
>>> x = ak.array([10, 20, 30, 10])
>>> y = ak.array([10, 30, 20, 10])
>>> power_divergence(x, y, lambda_="pearson")
Power_divergenceResult(statistic=8.333333333333334, pvalue=0.03960235520756414)
>>> power_divergence(x, y, lambda_="log-likelihood")
Power_divergenceResult(statistic=8.109302162163285, pvalue=0.04380595350226197)

See also

scipy.stats.power_divergence, arkouda.akstats.chisquare

Notes

This is a modified version of scipy.stats.power_divergence [2] in order to scale using arkouda pdarrays.

References

[1] “scipy.stats.power_divergence”, https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.power_divergence.html

[2] Scipy contributors (2024) scipy (Version v1.12.0) [Source code]. https://github.com/scipy/scipy

arkouda.pretty_print_information(names: List[str] | str = RegisteredSymbols) None[source]

Prints verbose information for each object in names in a human readable format

Parameters:

names (Union[List[str], str]) – names is either the name of an object or a list of names of objects for which to retrieve info. If names is ak.AllSymbols, retrieves info for all symbols in the symbol table; if names is ak.RegisteredSymbols, retrieves info for all symbols in the registry.

Return type:

None

Raises:

RuntimeError – Raised if a server-side error is thrown in the process of retrieving information about the objects in names
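
Examples

A hedged sketch (assumes an array has been registered under the hypothetical name 'my_array'):

>>> a = ak.arange(10).register('my_array')
>>> # with no argument, prints info for all registered symbols
>>> ak.pretty_print_information()
>>> ak.pretty_print_information('my_array')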

arkouda.prod(pda: pdarray) numpy.float64[source]

Return the product of all elements in the array. Return value is always a np.float64 or np.int64

Parameters:

pda (pdarray) – Values for which to calculate the product

Returns:

The product calculated from the pda

Return type:

numpy_scalars

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown
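
Examples

A small hedged check:

>>> a = ak.arange(1, 5)
>>> # 1 * 2 * 3 * 4 -> 24
>>> ak.prod(a)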

arkouda.rad2deg(pda: arkouda.pdarrayclass.pdarray, where: bool | arkouda.pdarrayclass.pdarray = True) arkouda.pdarrayclass.pdarray[source]

Converts angles element-wise from radians to degrees.

Parameters:
  • pda (pdarray)

  • where (Boolean or pdarray) – This condition is broadcast over the input. At locations where the condition is True, the corresponding value will be converted from radians to degrees. Elsewhere, it will retain its original value. Default set to True.

Returns:

A pdarray containing an angle converted to degrees, from radians, for each element of the original pdarray

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray
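
Examples

A minimal sketch (pi/2 radians is 90 degrees):

>>> import numpy as np
>>> a = ak.array([0.0, np.pi / 2, np.pi])
>>> # -> approximately array([0, 90, 180])
>>> ak.rad2deg(a)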

arkouda.randint(low: arkouda.dtypes.numeric_scalars, high: arkouda.dtypes.numeric_scalars, size: arkouda.dtypes.int_scalars | Tuple[arkouda.dtypes.int_scalars, Ellipsis] = 1, dtype=akint64, seed: arkouda.dtypes.int_scalars | None = None) arkouda.pdarrayclass.pdarray[source]

Generate a pdarray of randomized int, float, or bool values in a specified range bounded by the low and high parameters.

Parameters:
  • low (numeric_scalars) – The low value (inclusive) of the range

  • high (numeric_scalars) – The high value (exclusive for int, inclusive for float) of the range

  • size (int_scalars) – The length of the returned array

  • dtype (Union[int64, float64, bool]) – The dtype of the array

  • seed (int_scalars, optional) – Seed to allow for reproducible random number generation

Returns:

Values drawn uniformly from the specified range having the desired dtype

Return type:

pdarray

Raises:
  • TypeError – Raised if dtype.name not in DTypes, size is not an int, low or high is not an int or float, or seed is not an int

  • ValueError – Raised if size < 0 or if high < low

Notes

Calling randint with dtype=float64 will result in uniform non-integral floating point values.

Ranges >= 2**64 in size are undefined behavior because they exceed the maximum value that can be stored on the server (uint64)

Examples

>>> ak.randint(0, 10, 5)
array([5, 7, 4, 8, 3])
>>> ak.randint(0, 1, 3, dtype=ak.float64)
array([0.92176432277231968, 0.083130710959903542, 0.68894208386667544])
>>> ak.randint(0, 1, 5, dtype=ak.bool)
array([True, False, True, True, True])
>>> ak.randint(1, 5, 10, seed=2)
array([4, 3, 1, 3, 4, 4, 2, 4, 3, 2])
>>> ak.randint(1, 5, 3, dtype=ak.float64, seed=2)
array([2.9160772326374946, 4.353429832157099, 4.5392023718621486])
>>> ak.randint(1, 5, 10, dtype=ak.bool, seed=2)
array([False, True, True, True, True, False, True, True, True, True])
arkouda.randint(low: arkouda.dtypes.numeric_scalars, high: arkouda.dtypes.numeric_scalars, size: arkouda.dtypes.int_scalars | Tuple[arkouda.dtypes.int_scalars, Ellipsis] = 1, dtype=akint64, seed: arkouda.dtypes.int_scalars | None = None) arkouda.pdarrayclass.pdarray[source]

Generate a pdarray of randomized int, float, or bool values in a specified range bounded by the low and high parameters.

Parameters:
  • low (numeric_scalars) – The low value (inclusive) of the range

  • high (numeric_scalars) – The high value (exclusive for int, inclusive for float) of the range

  • size (int_scalars) – The length of the returned array

  • dtype (Union[int64, float64, bool]) – The dtype of the array

  • seed (int_scalars, optional) – Seed to allow for reproducible random number generation

Returns:

Values drawn uniformly from the specified range having the desired dtype

Return type:

pdarray

Raises:
  • TypeError – Raised if dtype.name not in DTypes, size is not an int, low or high is not an int or float, or seed is not an int

  • ValueError – Raised if size < 0 or if high < low

Notes

Calling randint with dtype=float64 will result in uniform non-integral floating point values.

Ranges >= 2**64 in size are undefined behavior because they exceed the maximum value that can be stored on the server (uint64)

Examples

>>> ak.randint(0, 10, 5)
array([5, 7, 4, 8, 3])
>>> ak.randint(0, 1, 3, dtype=ak.float64)
array([0.92176432277231968, 0.083130710959903542, 0.68894208386667544])
>>> ak.randint(0, 1, 5, dtype=ak.bool)
array([True, False, True, True, True])
>>> ak.randint(1, 5, 10, seed=2)
array([4, 3, 1, 3, 4, 4, 2, 4, 3, 2])
>>> ak.randint(1, 5, 3, dtype=ak.float64, seed=2)
array([2.9160772326374946, 4.353429832157099, 4.5392023718621486])
>>> ak.randint(1, 5, 10, dtype=ak.bool, seed=2)
array([False, True, True, True, True, False, True, True, True, True])
arkouda.random_strings_lognormal(logmean: arkouda.dtypes.numeric_scalars, logstd: arkouda.dtypes.numeric_scalars, size: arkouda.dtypes.int_scalars, characters: str = 'uppercase', seed: arkouda.dtypes.int_scalars | None = None) arkouda.strings.Strings[source]

Generate random strings with log-normally distributed lengths and with characters drawn from a specified set.

Parameters:
  • logmean (numeric_scalars) – The log-mean of the length distribution

  • logstd (numeric_scalars) – The log-standard-deviation of the length distribution

  • size (int_scalars) – The number of strings to generate

  • characters ((uppercase, lowercase, numeric, printable, binary)) – The set of characters to draw from

  • seed (int_scalars, optional) – Value used to initialize the random number generator

Returns:

The Strings object encapsulating a pdarray of random strings

Return type:

Strings

Raises:
  • TypeError – Raised if logmean is neither a float nor a int, logstd is not a float, size is not an int, or if characters is not a str

  • ValueError – Raised if logstd <= 0 or size < 0

Notes

The lengths of the generated strings are distributed \(Lognormal(\mu, \sigma^2)\), with \(\mu = logmean\) and \(\sigma = logstd\). Thus, the strings will have an average length of \(\exp(\mu + 0.5\sigma^2)\), a minimum length of zero, and a heavy tail towards longer strings.

Examples

>>> ak.random_strings_lognormal(2, 0.25, 5, seed=1)
array(['TVKJTE', 'ABOCORHFM', 'LUDMMGTB', 'KWOQNPHZ', 'VSXRRL'])
>>> ak.random_strings_lognormal(2, 0.25, 5, seed=1, characters='printable')
array(['+5"fp-', ']3Q4kC~HF', '=F=`,IE!', 'DjkBa'9(', '5oZ1)='])
arkouda.random_strings_uniform(minlen: arkouda.dtypes.int_scalars, maxlen: arkouda.dtypes.int_scalars, size: arkouda.dtypes.int_scalars, characters: str = 'uppercase', seed: None | arkouda.dtypes.int_scalars = None) arkouda.strings.Strings[source]

Generate random strings with lengths uniformly distributed between minlen and maxlen, and with characters drawn from a specified set.

Parameters:
  • minlen (int_scalars) – The minimum allowed length of string

  • maxlen (int_scalars) – The maximum allowed length of string

  • size (int_scalars) – The number of strings to generate

  • characters ((uppercase, lowercase, numeric, printable, binary)) – The set of characters to draw from

  • seed (Union[None, int_scalars], optional) – Value used to initialize the random number generator

Returns:

The array of random strings

Return type:

Strings

Raises:

ValueError – Raised if minlen < 0, maxlen < minlen, or size < 0

Examples

>>> ak.random_strings_uniform(minlen=1, maxlen=5, seed=1, size=5)
array(['TVKJ', 'EWAB', 'CO', 'HFMD', 'U'])
>>> ak.random_strings_uniform(minlen=1, maxlen=5, seed=1, size=5,
... characters='printable')
array(['+5"f', '-P]3', '4k', '~HFF', 'F'])
arkouda.read(filenames: str | List[str], datasets: str | List[str] | None = None, iterative: bool = False, strictTypes: bool = True, allow_errors: bool = False, calc_string_offsets=False, column_delim: str = ',', read_nested: bool = True, has_non_float_nulls: bool = False) arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.segarray.SegArray | arkouda.array_view.ArrayView | arkouda.categorical.Categorical | arkouda.dataframe.DataFrame | arkouda.client_dtypes.IPv4 | arkouda.timeclass.Datetime | arkouda.timeclass.Timedelta | arkouda.index.Index | Mapping[str, arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.segarray.SegArray | arkouda.array_view.ArrayView | arkouda.categorical.Categorical | arkouda.dataframe.DataFrame | arkouda.client_dtypes.IPv4 | arkouda.timeclass.Datetime | arkouda.timeclass.Timedelta | arkouda.index.Index][source]

Read datasets from files. File Type is determined automatically.

Parameters:
  • filenames (list or str) – Either a list of filenames or shell expression

  • datasets (list or str or None) – (List of) name(s) of dataset(s) to read (default: all available)

  • iterative (bool) – Iterative (True) or Single (False) function call(s) to server

  • strictTypes (bool) – If True (default), require all dtypes of a given dataset to have the same precision and sign. If False, allow dtypes of different precision and sign across different files. For example, if one file contains a uint32 dataset and another contains an int64 dataset with the same name, the contents of both will be read into an int64 pdarray.

  • allow_errors (bool) – Default False, if True will allow files with read errors to be skipped instead of failing. A warning will be included in the return containing the total number of files skipped due to failure and up to 10 filenames.

  • calc_string_offsets (bool) – Default False, if True this will tell the server to calculate the offsets/segments array on the server versus loading them from HDF5 files. In the future this option may be set to True as the default.

  • column_delim (str) – Column delimiter to be used if dataset is CSV. Otherwise, unused.

  • read_nested (bool) – Default True, when True, SegArray objects will be read from the file. When False, SegArray (or other nested Parquet columns) will be ignored. Ignored if datasets is not None Parquet Files only.

  • has_non_float_nulls (bool) – Default False. This flag must be set to True to read non-float parquet columns that contain null values.

Returns:

For a single dataset, returns an Arkouda pdarray, Arkouda Strings, Arkouda SegArray, or Arkouda ArrayView. For multiple datasets, returns a dictionary of {datasetName: pdarray, Strings, SegArray, or ArrayView}.

Raises:

RuntimeError – If invalid filetype is detected

Notes

If filenames is a string, it is interpreted as a shell expression (a single filename is a valid expression, so it will work) and is expanded with glob to read all matching files.

If iterative == True each dataset name and file names are passed to the server as independent sequential strings while if iterative == False all dataset names and file names are passed to the server in a single string.

If datasets is None, infer the names of datasets from the first file and read all of them. Use get_datasets to show the names of datasets in HDF5/Parquet files.

CSV files without the Arkouda Header are not supported.

Examples

Read with file extension (processing determines the file type, not the extension)

>>> x = ak.read('path/name_prefix.h5') # load HDF5
>>> x = ak.read('path/name_prefix.parquet') # load Parquet

Read with glob expression

>>> x = ak.read('path/name_prefix*') # reads HDF5

arkouda.read_csv(filenames: str | List[str], datasets: str | List[str] | None = None, column_delim: str = ',', allow_errors: bool = False) arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.segarray.SegArray | arkouda.array_view.ArrayView | arkouda.categorical.Categorical | arkouda.dataframe.DataFrame | arkouda.client_dtypes.IPv4 | arkouda.timeclass.Datetime | arkouda.timeclass.Timedelta | arkouda.index.Index | Mapping[str, arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.segarray.SegArray | arkouda.array_view.ArrayView | arkouda.categorical.Categorical | arkouda.dataframe.DataFrame | arkouda.client_dtypes.IPv4 | arkouda.timeclass.Datetime | arkouda.timeclass.Timedelta | arkouda.index.Index][source]

Read CSV file(s) into Arkouda objects. If more than one dataset is found, the objects will be returned in a dictionary mapping the dataset name to the Arkouda object containing the data. If the file contains the appropriately formatted header, typed data will be returned. Otherwise, all data will be returned as a Strings object.

Parameters:
  • filenames (str or List[str]) – The filenames to read data from

  • datasets (str or List[str] (Optional)) – names of the datasets to read. When None, all datasets will be read.

  • column_delim (str) – The delimiter for column names and data. Defaults to “,”.

  • allow_errors (bool) – Default False, if True will allow files with read errors to be skipped instead of failing. A warning will be included in the return containing the total number of files skipped due to failure and up to 10 filenames.

Returns:

pdarray, Strings, or Mapping of {dset_name: obj} where obj is a pdarray or Strings

Raises:
  • ValueError – Raised if all datasets are not present in all CSV files or if one or more of the specified files do not exist

  • RuntimeError – Raised if one or more of the specified files cannot be opened. If allow_errors is true this may be raised if no values are returned from the server.

  • TypeError – Raised if we receive an unknown arkouda_type returned from the server

See also

to_csv

Notes

  • CSV format is not currently supported by load/load_all operations

  • The column delimiter is expected to be the same for column names and data

  • Be sure that column delimiters are not found within your data.

  • All CSV files must delimit rows using newline (\n) at this time.

  • Unlike other file formats, CSV files store Strings in their UTF-8 format instead of storing bytes as uint8.
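
Examples

A hedged sketch of reading CSV files previously written by Arkouda; the path and dataset name are illustrative:

>>> data = ak.read_csv('path/name_prefix*')                    # read all datasets from matching files
>>> col = ak.read_csv('path/name_prefix*', datasets='col_A')   # read a single dataset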

arkouda.read_hdf(filenames: str | List[str], datasets: str | List[str] | None = None, iterative: bool = False, strict_types: bool = True, allow_errors: bool = False, calc_string_offsets: bool = False, tag_data=False) arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.segarray.SegArray | arkouda.array_view.ArrayView | arkouda.categorical.Categorical | arkouda.dataframe.DataFrame | arkouda.client_dtypes.IPv4 | arkouda.timeclass.Datetime | arkouda.timeclass.Timedelta | arkouda.index.Index | Mapping[str, arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.segarray.SegArray | arkouda.array_view.ArrayView | arkouda.categorical.Categorical | arkouda.dataframe.DataFrame | arkouda.client_dtypes.IPv4 | arkouda.timeclass.Datetime | arkouda.timeclass.Timedelta | arkouda.index.Index][source]

Read Arkouda objects from HDF5 file/s

Parameters:
  • filenames (str, List[str]) – Filename/s to read objects from

  • datasets (Optional str, List[str]) – datasets to read from the provided files

  • iterative (bool) – Iterative (True) or Single (False) function call(s) to server

  • strict_types (bool) – If True (default), require all dtypes of a given dataset to have the same precision and sign. If False, allow dtypes of different precision and sign across different files. For example, if one file contains a uint32 dataset and another contains an int64 dataset with the same name, the contents of both will be read into an int64 pdarray.

  • allow_errors (bool) – Default False, if True will allow files with read errors to be skipped instead of failing. A warning will be included in the return containing the total number of files skipped due to failure and up to 10 filenames.

  • calc_string_offsets (bool) – Default False, if True this will tell the server to calculate the offsets/segments array on the server versus loading them from HDF5 files. In the future this option may be set to True as the default.

  • tag_data (bool) – Default False, if True tag the data with the code associated with the filename that the data was pulled from.

Returns:

For a single dataset, returns an Arkouda pdarray, Strings, SegArray, or ArrayView. For multiple datasets, returns a dictionary of {datasetName: pdarray, Strings, SegArray, or ArrayView}.

Raises:
  • ValueError – Raised if all datasets are not present in all hdf5 files or if one or more of the specified files do not exist

  • RuntimeError – Raised if one or more of the specified files cannot be opened. If allow_errors is true this may be raised if no values are returned from the server.

  • TypeError – Raised if we receive an unknown arkouda_type returned from the server

Notes

If filenames is a string, it is interpreted as a shell expression (a single filename is a valid expression, so it will work) and is expanded with glob to read all matching files.

If iterative == True each dataset name and file names are passed to the server as independent sequential strings while if iterative == False all dataset names and file names are passed to the server in a single string.

If datasets is None, infer the names of datasets from the first file and read all of them. Use get_datasets to show the names of datasets in HDF5 files.

See also

read_tagged_data

Examples

Read with file Extension

>>> x = ak.read_hdf('path/name_prefix.h5') # load HDF5

Read Glob Expression

>>> x = ak.read_hdf('path/name_prefix*') # Reads HDF5
arkouda.read_parquet(filenames: str | List[str], datasets: str | List[str] | None = None, iterative: bool = False, strict_types: bool = True, allow_errors: bool = False, tag_data: bool = False, read_nested: bool = True, has_non_float_nulls: bool = False) arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.segarray.SegArray | arkouda.array_view.ArrayView | arkouda.categorical.Categorical | arkouda.dataframe.DataFrame | arkouda.client_dtypes.IPv4 | arkouda.timeclass.Datetime | arkouda.timeclass.Timedelta | arkouda.index.Index | Mapping[str, arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.segarray.SegArray | arkouda.array_view.ArrayView | arkouda.categorical.Categorical | arkouda.dataframe.DataFrame | arkouda.client_dtypes.IPv4 | arkouda.timeclass.Datetime | arkouda.timeclass.Timedelta | arkouda.index.Index][source]

Read Arkouda objects from Parquet file/s

Parameters:
  • filenames (str, List[str]) – Filename/s to read objects from

  • datasets (Optional str, List[str]) – datasets to read from the provided files

  • iterative (bool) – Iterative (True) or Single (False) function call(s) to server

  • strict_types (bool) – If True (default), require all dtypes of a given dataset to have the same precision and sign. If False, allow dtypes of different precision and sign across different files. For example, if one file contains a uint32 dataset and another contains an int64 dataset with the same name, the contents of both will be read into an int64 pdarray.

  • allow_errors (bool) – Default False, if True will allow files with read errors to be skipped instead of failing. A warning will be included in the return containing the total number of files skipped due to failure and up to 10 filenames.

  • tag_data (bool) – Default False, if True tag the data with the code associated with the filename that the data was pulled from.

  • read_nested (bool) – Default True, when True, SegArray objects will be read from the file. When False, SegArray (or other nested Parquet columns) will be ignored. If datasets is not None, this will be ignored.

  • has_non_float_nulls (bool) – Default False. This flag must be set to True to read non-float parquet columns that contain null values.

Returns:

For a single dataset, returns an Arkouda pdarray, Strings, or ArrayView object. For multiple datasets, returns a dictionary of {datasetName: pdarray, Strings, or ArrayView}.

Raises:
  • ValueError – Raised if all datasets are not present in all parquet files or if one or more of the specified files do not exist

  • RuntimeError – Raised if one or more of the specified files cannot be opened. If allow_errors is true this may be raised if no values are returned from the server.

  • TypeError – Raised if we receive an unknown arkouda_type returned from the server

Notes

If filenames is a string, it is interpreted as a shell expression (a single filename is a valid expression, so it will work) and is expanded with glob to read all matching files.

If iterative == True each dataset name and file names are passed to the server as independent sequential strings while if iterative == False all dataset names and file names are passed to the server in a single string.

If datasets is None, infer the names of datasets from the first file and read all of them. Use get_datasets to show the names of datasets in Parquet files.

Parquet always recomputes offsets at this time. This will need to be updated once the Parquet workflow is updated.

See also

read_tagged_data

Examples

Read with file extension

>>> x = ak.read_parquet('path/name_prefix.parquet') # load Parquet

Read with glob expression

>>> x = ak.read_parquet('path/name_prefix*') # reads Parquet

arkouda.read_tagged_data(filenames: str | List[str], datasets: str | List[str] | None = None, strictTypes: bool = True, allow_errors: bool = False, calc_string_offsets=False, read_nested: bool = True, has_non_float_nulls: bool = False)[source]

Read datasets from files and tag each record to the file it was read from. File Type is determined automatically.

Parameters:
  • filenames (list or str) – Either a list of filenames or shell expression

  • datasets (list or str or None) – (List of) name(s) of dataset(s) to read (default: all available)

  • strictTypes (bool) – If True (default), require all dtypes of a given dataset to have the same precision and sign. If False, allow dtypes of different precision and sign across different files. For example, if one file contains a uint32 dataset and another contains an int64 dataset with the same name, the contents of both will be read into an int64 pdarray.

  • allow_errors (bool) – Default False, if True will allow files with read errors to be skipped instead of failing. A warning will be included in the return containing the total number of files skipped due to failure and up to 10 filenames.

  • calc_string_offsets (bool) – Default False, if True this will tell the server to calculate the offsets/segments array on the server versus loading them from HDF5 files. In the future this option may be set to True as the default.

  • read_nested (bool) – Default True, when True, SegArray objects will be read from the file. When False, SegArray (or other nested Parquet columns) will be ignored. Ignored if datasets is not None. Parquet files only.

  • has_non_float_nulls (bool) – Default False. This flag must be set to True to read non-float parquet columns that contain null values.

Notes

Not currently supported for Categorical or GroupBy datasets

Examples

Read files and return data tagged with the file each record came from. cat.codes of the returned Categorical links the codes in data to the filenames; data will contain the codes under the key Filename_Codes.

>>> data, cat = ak.read_tagged_data('path/name')
>>> data
{'Filename_Codes': array([0 3 6 9 12]), 'col_name': array([0 0 0 1])}

arkouda.receive(hostname: str, port)[source]

Receive a pdarray sent by pdarray.transfer().

Parameters:
  • hostname (str) – The hostname of the pdarray that sent the array

  • port (int_scalars) – The port over which the array is sent. This needs to be an open port (i.e., not one that the Arkouda server is running on). Arkouda will use numLocales ports in succession starting at port, i.e., the range {port..(port+numLocales-1)} (e.g., running an Arkouda server of 4 nodes with port 1234 passed as port, Arkouda will use ports 1234, 1235, 1236, and 1237 to send the array data). This port must match the port passed to the call to pdarray.transfer().

Returns:

The pdarray sent from the sending server to the current receiving server.

Return type:

pdarray

Raises:
  • ValueError – Raised if the op is not within the pdarray.BinOps set

  • TypeError – Raised if other is not a pdarray or the pdarray.dtype is not a supported dtype
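
Examples

A hedged sketch of the transfer/receive pairing; the hostnames and port are illustrative, and each call is assumed to be issued from a client connected to the respective server:

>>> a = ak.arange(100)
>>> a.transfer('dest-host', 1234)        # from the client of the sending server
>>> b = ak.receive('source-host', 1234)  # from the client of the receiving server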

arkouda.receive_dataframe(hostname: str, port)[source]

Receive a pdarray sent by dataframe.transfer().

Parameters:
  • hostname (str) – The hostname of the dataframe that sent the array

  • port (int_scalars) – The port over which the dataframe is sent. This needs to be an open port (i.e., not one that the Arkouda server is running on). Arkouda will use numLocales ports in succession starting at port, i.e., the range {port..(port+numLocales-1)} (e.g., running an Arkouda server of 4 nodes with port 1234 passed as port, Arkouda will use ports 1234, 1235, 1236, and 1237 to send the data). This port must match the port passed to the call to DataFrame.transfer().

Returns:

The dataframe sent from the sending server to the current receiving server.

Return type:

DataFrame

Raises:
  • ValueError – Raised if the op is not within the pdarray.BinOps set

  • TypeError – Raised if other is not a pdarray or the pdarray.dtype is not a supported dtype

arkouda.register_all(data: dict)[source]

Register all objects in the provided dictionary

Parameters:

data (dict) – Maps the name to register each object under to the object itself. For example, {"MyArray": ak.array([0, 1, 2])}.

Return type:

None
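
Examples

A minimal sketch; the names and arrays are illustrative:

>>> ak.register_all({"MyArray": ak.array([0, 1, 2]), "MyStrings": ak.array(["a", "b"])})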

arkouda.resolve_scalar_dtype(val: object) str[source]

Try to infer what dtype arkouda_server should treat val as.
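
Examples

A hedged sketch of the kinds of strings returned; the exact names assume the current dtype naming:

>>> ak.resolve_scalar_dtype(1)
'int64'
>>> ak.resolve_scalar_dtype(2.0)
'float64'
>>> ak.resolve_scalar_dtype(True)
'bool'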

arkouda.restore(filename)[source]

Return data saved using ak.snapshot

Parameters:
filename (str) – Name used to create the snapshot to be read

Return type:

Dict

Notes

Unlike other save/load methods, snapshot and restore save DataFrames alongside other objects in HDF5; thus, DataFrames are returned within the dictionary as DataFrames.
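
Examples

A hedged sketch pairing snapshot and restore; the filename is illustrative:

>>> a = ak.arange(10)
>>> ak.snapshot('my_snapshot')        # saves all accessible Arkouda objects
>>> data = ak.restore('my_snapshot')  # dict mapping variable names to objects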

arkouda.right_align(left, right)[source]

Map two arrays of sparse values to the 0-up index set implied by the right array, discarding values from left that do not appear in right.

Parameters:
  • left (pdarray or a sequence of pdarrays) – Left-hand identifiers

  • right (pdarray or a sequence of pdarrays) – Right-hand identifiers that define the index

Returns:

  • keep (pdarray, bool) – Logical index of left-hand values that survived

  • aligned ((pdarray, pdarray)) – Left and right arrays with values replaced by 0-up indices
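
Examples

A hedged sketch of the call shape, assuming the (keep, (left_aligned, right_aligned)) return structure described above; the values are illustrative:

>>> left = ak.array([10, 20, 30])
>>> right = ak.array([20, 30, 40])
>>> keep, (left_idx, right_idx) = ak.right_align(left, right)
>>> # keep flags which left values appear in right; left_idx/right_idx are 0-up indices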

arkouda.rotl(x, rot) pdarray[source]

Rotate bits of <x> to the left by <rot>.

Parameters:
  • x (pdarray(int64/uint64) or integer) – Value(s) to rotate left.

  • rot (pdarray(int64/uint64) or integer) – Amount(s) to rotate by.

Returns:

rotated – The rotated elements of x.

Return type:

pdarray(int64/uint64)

Raises:

TypeError – If input array is not int64 or uint64

Examples

>>> A = ak.arange(10)
>>> ak.rotl(A, A)
array([0, 2, 8, 24, 64, 160, 384, 896, 2048, 4608])
arkouda.rotr(x, rot) pdarray[source]

Rotate bits of <x> to the right by <rot>.

Parameters:
  • x (pdarray(int64/uint64) or integer) – Value(s) to rotate right.

  • rot (pdarray(int64/uint64) or integer) – Amount(s) to rotate by.

Returns:

rotated – The rotated elements of x.

Return type:

pdarray(int64/uint64)

Raises:

TypeError – If input array is not int64 or uint64

Examples

>>> A = ak.arange(10)
>>> ak.rotr(1024 * A, A)
array([0, 512, 512, 384, 256, 160, 96, 56, 32, 18])
arkouda.round(pda: arkouda.pdarrayclass.pdarray) arkouda.pdarrayclass.pdarray[source]

Return the element-wise rounding of the array.

Parameters:

pda (pdarray)

Returns:

A pdarray containing input array elements rounded to the nearest integer

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray

Examples

>>> ak.round(ak.array([1.1, 2.5, 3.14159]))
array([1, 3, 3])
arkouda.save_all(columns: Mapping[str, arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.segarray.SegArray | arkouda.array_view.ArrayView] | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.segarray.SegArray | arkouda.array_view.ArrayView], prefix_path: str, names: List[str] | None = None, file_format='HDF5', mode: str = 'truncate', file_type: str = 'distribute', compression: str | None = None) None[source]

DEPRECATED Save multiple named pdarrays to HDF5/Parquet files.

Parameters:
  • columns (dict or list of pdarrays) – Collection of arrays to save

  • prefix_path (str) – Directory and filename prefix for output files

  • names (list of str) – Dataset names for the pdarrays

  • file_format (str) – ‘HDF5’ or ‘Parquet’. Defaults to ‘HDF5’

  • mode ({'truncate' | 'append'}) – By default, truncate (overwrite) the output files if they exist. If ‘append’, attempt to create new dataset in existing files.

  • file_type (str ("single" | "distribute")) – Default: distribute. Single writes the dataset to a single file; distribute writes the dataset to a file per locale. Only used with HDF5.

  • compression (str (None | "snappy" | "gzip" | "brotli" | "zstd" | "lz4")) – Optional. Select the compression to use with Parquet files. Only used with Parquet.

Return type:

None

Raises:

ValueError – Raised if (1) the lengths of columns and values differ or (2) the mode is not ‘truncate’ or ‘append’

See also

save, load_all, to_parquet, to_hdf

Notes

Creates one file per locale containing that locale’s chunk of each pdarray. If columns is a dictionary, the keys are used as the HDF5 dataset names. Otherwise, if no names are supplied, 0-up integers are used. By default, any existing files at path_prefix will be overwritten, unless the user specifies the ‘append’ mode, in which case arkouda will attempt to add <columns> as new datasets to existing files. If the wrong number of files is present or dataset names already exist, a RuntimeError is raised.

Examples

>>> a = ak.arange(25)
>>> b = ak.arange(25)
>>> # Save with mapping defining dataset names
>>> ak.save_all({'a': a, 'b': b}, 'path/name_prefix', file_format='Parquet')
>>> # Save using names instead of mapping
>>> ak.save_all([a, b], 'path/name_prefix', names=['a', 'b'], file_format='Parquet')
arkouda.search_intervals(vals, intervals, tiebreak=None, hierarchical=True)[source]

Given an array of query vals and non-overlapping, closed intervals, return the index of the best (see tiebreak) interval containing each query value, or -1 if not present in any interval.

Parameters:
  • vals ((sequence of) pdarray(int, uint, float)) – Values to search for in intervals. If multiple arrays, each “row” is an item.

  • intervals (2-tuple of (sequences of) pdarrays) – Non-overlapping, closed intervals, as a tuple of (lower_bounds_inclusive, upper_bounds_inclusive). Must have same dtype(s) as vals.

  • tiebreak ((optional) pdarray, numeric) – When a value is present in more than one interval, the interval with the lowest tiebreak value will be chosen. If no tiebreak is given, the first containing interval will be chosen.

  • hierarchical (boolean) – When True, sequences of pdarrays will be treated as components specifying a single dimension (i.e. hierarchical) When False, sequences of pdarrays will be specifying multi-dimensional intervals

Returns:

idx – Index of interval containing each query value, or -1 if not found

Return type:

pdarray(int64)

Notes

The return idx satisfies the following condition:

present = idx > -1
((intervals[0][idx[present]] <= vals[present]) & (intervals[1][idx[present]] >= vals[present])).all()

Examples

>>> starts = (ak.array([0, 5]), ak.array([0, 11]))
>>> ends = (ak.array([5, 9]), ak.array([10, 20]))
>>> vals = (ak.array([0, 0, 2, 5, 5, 6, 6, 9]), ak.array([0, 20, 1, 5, 15, 0, 12, 30]))
>>> ak.search_intervals(vals, (starts, ends), hierarchical=False)
array([0 -1 0 0 1 -1 1 -1])
>>> ak.search_intervals(vals, (starts, ends))
array([0 0 0 0 1 1 1 -1])
>>> bi_starts = ak.bigint_from_uint_arrays([ak.cast(a, ak.uint64) for a in starts])
>>> bi_ends = ak.bigint_from_uint_arrays([ak.cast(a, ak.uint64) for a in ends])
>>> bi_vals = ak.bigint_from_uint_arrays([ak.cast(a, ak.uint64) for a in vals])
>>> bi_starts, bi_ends, bi_vals
(array(["0" "92233720368547758091"]),
array(["92233720368547758090" "166020696663385964564"]),
array(["0" "20" "36893488147419103233" "92233720368547758085" "92233720368547758095"
"110680464442257309696" "110680464442257309708" "166020696663385964574"]))
>>> ak.search_intervals(bi_vals, (bi_starts, bi_ends))
array([0 0 0 0 1 1 1 -1])
arkouda.segarray(segments: arkouda.pdarrayclass.pdarray, values: arkouda.pdarrayclass.pdarray, lengths=None, grouping=None)[source]

DEPRECATED Alias for the from_parts function; prevents the user from needing to call the ak.SegArray constructor directly.

arkouda.setdiff1d(pda1: arkouda.groupbyclass.groupable, pda2: arkouda.groupbyclass.groupable, assume_unique: bool = False) arkouda.pdarrayclass.pdarray | arkouda.groupbyclass.groupable[source]

Find the set difference of two arrays.

Return the sorted, unique values in pda1 that are not in pda2.

Parameters:
  • pda1 (pdarray/Sequence[pdarray, Strings, Categorical]) – Input array/Sequence of groupable objects

  • pda2 (pdarray/List) – Input array/sequence of groupable objects

  • assume_unique (bool) – If True, the input arrays are both assumed to be unique, which can speed up the calculation. Default is False.

Returns:

Sorted 1D array/List of sorted pdarrays of values in pda1 that are not in pda2.

Return type:

pdarray/groupable

Raises:
  • TypeError – Raised if either pda1 or pda2 is not a pdarray

  • RuntimeError – Raised if the dtype of either pdarray is not supported

Notes

ak.setdiff1d is not supported for bool or float64 pdarrays

Examples

>>> a = ak.array([1, 2, 3, 2, 4, 1])
>>> b = ak.array([3, 4, 5, 6])
>>> ak.setdiff1d(a, b)
array([1, 2])

Multi-Array Example

>>> a = ak.arange(1, 6)
>>> b = ak.array([1, 5, 3, 4, 2])
>>> c = ak.array([1, 4, 3, 2, 5])
>>> d = ak.array([1, 2, 3, 5, 4])
>>> multia = [a, a, a]
>>> multib = [b, c, d]
>>> ak.setdiff1d(multia, multib)
[array([2, 4, 5]), array([2, 4, 5]), array([2, 4, 5])]
arkouda.setxor1d(pda1: arkouda.groupbyclass.groupable, pda2: arkouda.groupbyclass.groupable, assume_unique: bool = False) arkouda.pdarrayclass.pdarray | arkouda.groupbyclass.groupable[source]

Find the set exclusive-or (symmetric difference) of two arrays.

Return the sorted, unique values that are in only one (not both) of the input arrays.

Parameters:
  • pda1 (pdarray/Sequence[pdarray, Strings, Categorical]) – Input array/Sequence of groupable objects

  • pda2 (pdarray/List) – Input array/sequence of groupable objects

  • assume_unique (bool) – If True, the input arrays are both assumed to be unique, which can speed up the calculation. Default is False.

Returns:

Sorted 1D array/List of sorted pdarrays of unique values that are in only one of the input arrays.

Return type:

pdarray/groupable

Raises:
  • TypeError – Raised if either pda1 or pda2 is not a pdarray

  • RuntimeError – Raised if the dtype of either pdarray is not supported

Notes

ak.setxor1d is not supported for bool or float64 pdarrays

Examples

>>> a = ak.array([1, 2, 3, 2, 4])
>>> b = ak.array([2, 3, 5, 7, 5])
>>> ak.setxor1d(a,b)
array([1, 4, 5, 7])

Multi-Array Example

>>> a = ak.arange(1, 6)
>>> b = ak.array([1, 5, 3, 4, 2])
>>> c = ak.array([1, 4, 3, 2, 5])
>>> d = ak.array([1, 2, 3, 5, 4])
>>> multia = [a, a, a]
>>> multib = [b, c, d]
>>> ak.setxor1d(multia, multib)
[array([2, 2, 4, 4, 5, 5]), array([2, 5, 2, 4, 4, 5]), array([2, 4, 5, 4, 2, 5])]
arkouda.sign(pda: arkouda.pdarrayclass.pdarray) arkouda.pdarrayclass.pdarray[source]

Return the element-wise sign of the array.

Parameters:

pda (pdarray)

Returns:

A pdarray containing sign values of the input array elements

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray

Examples

>>> ak.sign(ak.array([-10, -5, 0, 5, 10]))
array([-1, -1, 0, 1, 1])
arkouda.sin(pda: arkouda.pdarrayclass.pdarray, where: bool | arkouda.pdarrayclass.pdarray = True) arkouda.pdarrayclass.pdarray[source]

Return the element-wise sine of the array.

Parameters:
  • pda (pdarray)

  • where (Boolean or pdarray) – This condition is broadcast over the input. At locations where the condition is True, the sine will be applied to the corresponding value. Elsewhere, it will retain its original value. Default set to True.

Returns:

A pdarray containing sin for each element of the original pdarray

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray
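
Examples

A hedged usage sketch of the where parameter (outputs omitted, since display formatting varies by version):

>>> import numpy as np
>>> a = ak.linspace(0, np.pi, 5)
>>> ak.sin(a)                   # sine of every element
>>> ak.sin(a, where=(a > 1.0))  # apply sine only where the condition is True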

arkouda.sinh(pda: arkouda.pdarrayclass.pdarray, where: bool | arkouda.pdarrayclass.pdarray = True) arkouda.pdarrayclass.pdarray[source]

Return the element-wise hyperbolic sine of the array.

Parameters:
  • pda (pdarray)

  • where (Boolean or pdarray) – This condition is broadcast over the input. At locations where the condition is True, the hyperbolic sine will be applied to the corresponding value. Elsewhere, it will retain its original value. Default set to True.

Returns:

A pdarray containing hyperbolic sine for each element of the original pdarray

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray

arkouda.skew(pda: pdarray, bias: bool = True) numpy.float64[source]

Computes the sample skewness of an array. Skewness > 0 means there’s greater weight in the right tail of the distribution. Skewness < 0 means there’s greater weight in the left tail of the distribution. Skewness == 0 means the data is normally distributed. Based on the scipy.stats.skew function.

Parameters:
  • pda (pdarray) – A pdarray of values that will be calculated to find the skew

  • bias (bool, optional) – If False, then the calculations are corrected for statistical bias.

Returns:

The skew of all elements in the array

Return type:

np.float64

Examples

>>> a = ak.array([1, 1, 1, 5, 10])
>>> ak.skew(a)
0.9442193396379163

arkouda.snapshot(filename)[source]

Create a snapshot of the current Arkouda namespace. All currently accessible variables containing Arkouda objects will be written to an HDF5 file.

Unlike other save/load functions, this maintains the integrity of dataframes.

Current variable names are used as the dataset names when saving.

Parameters:
filename (str) – Name to use when storing the file

Return type:

None

See also

ak.restore

arkouda.sort(pda: arkouda.pdarrayclass.pdarray, algorithm: SortingAlgorithm = SortingAlgorithm.RadixSortLSD) arkouda.pdarrayclass.pdarray[source]

Return a sorted copy of the array. Only sorts numeric arrays; for Strings, use argsort.

Parameters:

pda (pdarray or Categorical) – The array to sort (int64, uint64, or float64)

Returns:

The sorted copy of pda

Return type:

pdarray, int64, uint64, or float64

Raises:
  • TypeError – Raised if the parameter is not a pdarray

  • ValueError – Raised if sort attempted on a pdarray with an unsupported dtype such as bool

See also

argsort

Notes

Uses a least-significant-digit radix sort, which is stable and resilient to non-uniformity in data but communication intensive.

Examples

>>> a = ak.randint(0, 10, 10)
>>> sorted = ak.sort(a)
>>> sorted
array([0, 1, 1, 3, 4, 5, 7, 8, 8, 9])
arkouda.sqrt(pda: pdarray, where: bool | pdarray = True) pdarray[source]

Takes the square root of array. If where is given, the operation will only take place in the positions where the where condition is True.

Parameters:
  • pda (pdarray) – A pdarray of values that will be square rooted

  • where (Boolean or pdarray) – This condition is broadcast over the input. At locations where the condition is True, the corresponding value will be square rooted. Elsewhere, it will retain its original value. Default set to True.

Returns:

A pdarray of square rooted values, under the boolean where condition

Return type:

pdarray

Examples

>>> a = ak.arange(5)
>>> ak.sqrt(a)
array([0 1 1.4142135623730951 1.7320508075688772 2])
>>> ak.sqrt(a, ak.array([True, True, False, False, True]))
array([0, 1, 2, 3, 2])

arkouda.square(pda: arkouda.pdarrayclass.pdarray) arkouda.pdarrayclass.pdarray[source]

Return the element-wise square of the array.

Parameters:

pda (pdarray)

Returns:

A pdarray containing square values of the input array elements

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray

Examples

>>> ak.square(ak.arange(1,5))
array([1, 4, 9, 16])
arkouda.standard_normal(size: arkouda.dtypes.int_scalars, seed: None | arkouda.dtypes.int_scalars = None) arkouda.pdarrayclass.pdarray[source]

Draw real numbers from the standard normal distribution.

Parameters:
  • size (int_scalars) – The number of samples to draw (size of the returned array)

  • seed (int_scalars) – Value used to initialize the random number generator

Returns:

The array of random numbers

Return type:

pdarray, float64

Raises:
  • TypeError – Raised if size is not an int

  • ValueError – Raised if size < 0

See also

randint

Notes

For random samples from \(N(\mu, \sigma^2)\), use:

(sigma * standard_normal(size)) + mu

Examples

>>> ak.standard_normal(3,1)
array([-0.68586185091150265, 1.1723810583573375, 0.567584107142031])
arkouda.std(pda: pdarray, ddof: arkouda.dtypes.int_scalars = 0) numpy.float64[source]

Return the standard deviation of values in the array. The standard deviation is implemented as the square root of the variance.

Parameters:
  • pda (pdarray) – values for which to calculate the standard deviation

  • ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating std

Returns:

The scalar standard deviation of the array

Return type:

np.float64

Raises:
  • TypeError – Raised if pda is not a pdarray instance or ddof is not an integer

  • ValueError – Raised if ddof is an integer < 0

  • RuntimeError – Raised if there’s a server-side error thrown

See also

mean, var

Notes

The standard deviation is the square root of the average of the squared deviations from the mean, i.e., std = sqrt(mean((x - x.mean())**2)).

The average squared deviation is normally calculated as x.sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of the infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables. The standard deviation computed in this function is the square root of the estimated variance, so even with ddof=1, it will not be an unbiased estimate of the standard deviation per se.
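
Examples

A short worked example following the formula above (std = sqrt(mean((x - x.mean())**2)) when ddof=0):

>>> a = ak.arange(5)
>>> ak.std(a)
1.4142135623730951
>>> ak.std(a, ddof=1)
1.5811388300841898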

arkouda.str_
arkouda.str_scalars
arkouda.string_operators(cls)[source]
arkouda.sum(pda: pdarray) numpy.float64[source]

Return the sum of all elements in the array.

Parameters:

pda (pdarray) – Values for which to calculate the sum

Returns:

The sum of all elements in the array

Return type:

np.float64

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • RuntimeError – Raised if there’s a server-side error thrown
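
Examples

A minimal sketch; the exact scalar type of the result may depend on the array’s dtype:

>>> a = ak.arange(5)
>>> ak.sum(a)
10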

arkouda.tan(pda: arkouda.pdarrayclass.pdarray, where: bool | arkouda.pdarrayclass.pdarray = True) arkouda.pdarrayclass.pdarray[source]

Return the element-wise tangent of the array.

Parameters:
  • pda (pdarray)

  • where (Boolean or pdarray) – This condition is broadcast over the input. At locations where the condition is True, the tangent will be applied to the corresponding value. Elsewhere, it will retain its original value. Default set to True.

Returns:

A pdarray containing tangent for each element of the original pdarray

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray

arkouda.tanh(pda: arkouda.pdarrayclass.pdarray, where: bool | arkouda.pdarrayclass.pdarray = True) arkouda.pdarrayclass.pdarray[source]

Return the element-wise hyperbolic tangent of the array.

Parameters:
  • pda (pdarray)

  • where (Boolean or pdarray) – This condition is broadcast over the input. At locations where the condition is True, the hyperbolic tangent will be applied to the corresponding value. Elsewhere, it will retain its original value. Default set to True.

Returns:

A pdarray containing hyperbolic tangent for each element of the original pdarray

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray

arkouda.timedelta_range(start=None, end=None, periods=None, freq=None, name=None, closed=None, **kwargs)[source]

Return a fixed frequency TimedeltaIndex, with day as the default frequency. Alias for ak.Timedelta(pd.timedelta_range(args)). Subject to size limit imposed by client.maxTransferBytes.

Parameters:
  • start (str or timedelta-like, default None) – Left bound for generating timedeltas.

  • end (str or timedelta-like, default None) – Right bound for generating timedeltas.

  • periods (int, default None) – Number of periods to generate.

  • freq (str or DateOffset, default 'D') – Frequency strings can have multiples, e.g. ‘5H’.

  • name (str, default None) – Name of the resulting TimedeltaIndex.

  • closed (str, default None) – Make the interval closed with respect to the given frequency to the ‘left’, ‘right’, or both sides (None).

Returns:

rng

Return type:

TimedeltaIndex

Notes

Of the four parameters start, end, periods, and freq, exactly three must be specified. If freq is omitted, the resulting TimedeltaIndex will have periods linearly spaced elements between start and end (closed on both sides).

To learn more about the frequency strings, please see the pandas documentation on frequency strings.
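
Examples

A hedged sketch specifying three of the four parameters, per the note above; the values are illustrative:

>>> rng = ak.timedelta_range(start='1 day', periods=4, freq='D')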

arkouda.to_csv(columns: Mapping[str, arkouda.pdarrayclass.pdarray | arkouda.strings.Strings] | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings], prefix_path: str, names: List[str] | None = None, col_delim: str = ',', overwrite: bool = False)[source]

Write Arkouda object(s) to CSV file(s). All CSV Files written by Arkouda include a header denoting data types of the columns.

Parameters:
  • columns (Mapping[str, pdarray] or List[pdarray]) – The objects to be written to CSV file. If a mapping is used and names is None the keys of the mapping will be used as the dataset names.

  • prefix_path (str) – The filename prefix to be used for saving files. Files will have _LOCALE#### appended when they are written to disk.

  • names (List[str] (Optional)) – names of dataset to be written. Order should correspond to the order of data provided in columns.

  • col_delim (str) – Defaults to “,”. Value to be used to separate columns within the file. Please be sure that the value used DOES NOT appear in your dataset.

  • overwrite (bool) – Defaults to False. If True, any existing files matching your provided prefix_path will be overwritten. If False, an error will be returned if existing files are found.

Return type:

None

Raises:
  • ValueError – Raised if any datasets are present in all csv files or if one or more of the specified files do not exist

  • RuntimeError – Raised if one or more of the specified files cannot be opened. If allow_errors is true this may be raised if no values are returned from the server.

  • TypeError – Raised if we receive an unknown arkouda_type returned from the server

See also

read_csv

Notes

  • CSV format is not currently supported by load/load_all operations

  • The column delimiter is expected to be the same for column names and data

  • Be sure that column delimiters are not found within your data.

  • All CSV files must delimit rows using newline (\n) at this time.

  • Unlike other file formats, CSV files store Strings as their UTF-8 format instead of storing bytes as uint(8).
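
Examples

A hedged write/read round trip; the path is illustrative:

>>> a = ak.arange(5)
>>> s = ak.array(['a', 'b', 'c', 'd', 'e'])
>>> ak.to_csv({'ints': a, 'strs': s}, 'path/name_prefix')
>>> data = ak.read_csv('path/name_prefix*')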

arkouda.to_hdf(columns: Mapping[str, arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.segarray.SegArray | arkouda.array_view.ArrayView] | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.segarray.SegArray | arkouda.array_view.ArrayView], prefix_path: str, names: List[str] | None = None, mode: str = 'truncate', file_type: str = 'distribute') None[source]

Save multiple named pdarrays to HDF5 files.

Parameters:
  • columns (dict or list of pdarrays) – Collection of arrays to save

  • prefix_path (str) – Directory and filename prefix for output files

  • names (list of str) – Dataset names for the pdarrays

  • mode ({'truncate' | 'append'}) – By default, truncate (overwrite) the output files if they exist. If ‘append’, attempt to create new dataset in existing files.

  • file_type (str ("single" | "distribute")) – Default: distribute Single writes the dataset to a single file Distribute writes the dataset to a file per locale

Return type:

None

Raises:
  • ValueError – Raised if (1) the lengths of columns and values differ or (2) the mode is not ‘truncate’ or ‘append’

  • RuntimeError – Raised if a server-side error is thrown saving the pdarray

Notes

Creates one file per locale containing that locale’s chunk of each pdarray. If columns is a dictionary, the keys are used as the HDF5 dataset names. Otherwise, if no names are supplied, 0-up integers are used. By default, any existing files at path_prefix will be overwritten, unless the user specifies the ‘append’ mode, in which case arkouda will attempt to add <columns> as new datasets to existing files. If the wrong number of files is present or dataset names already exist, a RuntimeError is raised.

Examples

>>> a = ak.arange(25)
>>> b = ak.arange(25)
>>> # Save with mapping defining dataset names
>>> ak.to_hdf({'a': a, 'b': b}, 'path/name_prefix')
>>> # Save using names instead of mapping
>>> ak.to_hdf([a, b], 'path/name_prefix', names=['a', 'b'])
arkouda.to_parquet(columns: Mapping[str, arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.segarray.SegArray | arkouda.array_view.ArrayView] | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.segarray.SegArray | arkouda.array_view.ArrayView], prefix_path: str, names: List[str] | None = None, mode: str = 'truncate', compression: str | None = None, convert_categoricals: bool = False) None[source]

Save multiple named pdarrays to Parquet files.

Parameters:
  • columns (dict or list of pdarrays) – Collection of arrays to save

  • prefix_path (str) – Directory and filename prefix for output files

  • names (list of str) – Dataset names for the pdarrays

  • mode ({'truncate' | 'append'}) – By default, truncate (overwrite) the output files if they exist. If ‘append’, attempt to create new dataset in existing files. ‘append’ is deprecated, please use the multi-column write

  • compression (str (Optional)) – Default None. Provide the compression type to use when writing the file. Supported values: snappy, gzip, brotli, zstd, lz4

  • convert_categoricals (bool) – Defaults to False. Parquet requires all columns to be the same size and Categoricals don’t satisfy that requirement. If set, write the equivalent Strings in place of any Categorical columns.

Return type:

None

Raises:
  • ValueError – Raised if (1) the lengths of columns and values differ or (2) the mode is not ‘truncate’ or ‘append’

  • RuntimeError – Raised if a server-side error is thrown saving the pdarray

See also

to_hdf, load, load_all, read

Notes

Creates one file per locale containing that locale’s chunk of each pdarray. If columns is a dictionary, the keys are used as the Parquet column names. Otherwise, if no names are supplied, 0-up integers are used. By default, any existing files at path_prefix will be overwritten, unless the user specifies the ‘append’ mode, in which case arkouda will attempt to add <columns> as new datasets to existing files. If the wrong number of files is present or dataset names already exist, a RuntimeError is raised.

Examples

>>> a = ak.arange(25)
>>> b = ak.arange(25)
>>> # Save with mapping defining dataset names
>>> ak.to_parquet({'a': a, 'b': b}, 'path/name_prefix')
>>> # Save using names instead of mapping
>>> ak.to_parquet([a, b], 'path/name_prefix', names=['a', 'b'])
arkouda.translate_np_dtype(dt) Tuple[str, int][source]

Split numpy dtype dt into its kind and byte size, raising TypeError for unsupported dtypes.

Raises:

TypeError – Raised if the dtype is not in supported dtypes or if dt is not a np.dtype
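
Examples

A hedged sketch of the (kind, byte size) split, assuming numpy’s dtype naming:

>>> import numpy as np
>>> ak.translate_np_dtype(np.dtype(np.int64))
('int', 8)
>>> ak.translate_np_dtype(np.dtype(np.float64))
('float', 8)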

arkouda.trunc(pda: arkouda.pdarrayclass.pdarray) arkouda.pdarrayclass.pdarray[source]

Return the element-wise truncation of the array.

Parameters:

pda (pdarray)

Returns:

A pdarray containing input array elements truncated to the nearest integer

Return type:

pdarray

Raises:

TypeError – Raised if the parameter is not a pdarray

Examples

>>> ak.trunc(ak.array([1.1, 2.5, 3.14159]))
array([1, 2, 3])
arkouda.uint16
arkouda.uint32
arkouda.uint64
arkouda.uint8
arkouda.uniform(size: arkouda.dtypes.int_scalars, low: arkouda.dtypes.numeric_scalars = float(0.0), high: arkouda.dtypes.numeric_scalars = 1.0, seed: None | arkouda.dtypes.int_scalars = None) arkouda.pdarrayclass.pdarray[source]

Generate a pdarray with uniformly distributed random float values in a specified range.

Parameters:
  • low (float_scalars) – The low value (inclusive) of the range, defaults to 0.0

  • high (float_scalars) – The high value (inclusive) of the range, defaults to 1.0

  • size (int_scalars) – The length of the returned array

  • seed (int_scalars, optional) – Value used to initialize the random number generator

Returns:

Values drawn uniformly from the specified range

Return type:

pdarray, float64

Raises:
  • TypeError – Raised if dtype.name not in DTypes, size is not an int, or if either low or high is not an int or float

  • ValueError – Raised if size < 0 or if high < low

Notes

The logic for uniform is delegated to the ak.randint method which is invoked with a dtype of float64

Examples

>>> ak.uniform(3)
array([0.92176432277231968, 0.083130710959903542, 0.68894208386667544])
>>> ak.uniform(size=3,low=0,high=5,seed=0)
array([0.30013431967121934, 0.47383036230759112, 1.0441791878997098])
arkouda.union1d(pda1: arkouda.groupbyclass.groupable, pda2: arkouda.groupbyclass.groupable) arkouda.pdarrayclass.pdarray | arkouda.groupbyclass.groupable[source]

Find the union of two arrays/List of Arrays.

Return the unique, sorted array of values that are in either of the two input arrays.

Parameters:
  • pda1 (pdarray/Sequence[pdarray, Strings, Categorical]) – Input array/Sequence of groupable objects

  • pda2 (pdarray/List) – Input array/sequence of groupable objects

Returns:

Unique, sorted union of the input arrays.

Return type:

pdarray/groupable

Raises:
  • TypeError – Raised if either pda1 or pda2 is not a pdarray

  • RuntimeError – Raised if the dtype of either array is not supported

Notes

ak.union1d is not supported for bool or float64 pdarrays

Examples

1D Example

>>> ak.union1d(ak.array([-1, 0, 1]), ak.array([-2, 0, 2]))
array([-2, -1, 0, 1, 2])

Multi-Array Example

>>> a = ak.arange(1, 6)
>>> b = ak.array([1, 5, 3, 4, 2])
>>> c = ak.array([1, 4, 3, 2, 5])
>>> d = ak.array([1, 2, 3, 5, 4])
>>> multia = [a, a, a]
>>> multib = [b, c, d]
>>> ak.union1d(multia, multib)
[array([1, 2, 2, 3, 4, 4, 5, 5]), array([1, 2, 5, 3, 2, 4, 4, 5]), array([1, 2, 4, 3, 5, 4, 2, 5])]
arkouda.unique(pda: groupable, return_groups: bool = False, assume_sorted: bool = False, return_indices: bool = False) groupable | Tuple[groupable, arkouda.pdarrayclass.pdarray, arkouda.pdarrayclass.pdarray, int][source]

Find the unique elements of an array.

Returns the unique elements of an array, sorted if the values are integers. There is an optional output in addition to the unique elements: the number of times each unique value comes up in the input array.

Parameters:
  • pda ((list of) pdarray, Strings, or Categorical) – Input array.

  • return_groups (bool, optional) – If True, also return grouping information for the array.

  • return_indices (bool, optional) – Only applicable if return_groups is True. If True, return unique key indices along with other groups

  • assume_sorted (bool, optional) – If True, assume pda is sorted and skip sorting step

Returns:

  • unique ((list of) pdarray, Strings, or Categorical) – The unique values. If input dtype is int64, return values will be sorted.

  • permutation (pdarray, optional) – Permutation that groups equivalent values together (only when return_groups=True)

  • segments (pdarray, optional) – The offset of each group in the permuted array (only when return_groups=True)

Raises:
  • TypeError – Raised if pda is not a pdarray or Strings object

  • RuntimeError – Raised if the pdarray or Strings dtype is unsupported

Notes

For integer arrays, this function checks to see whether pda is sorted and, if so, whether it is already unique. This step can save considerable computation. Otherwise, this function will sort pda.

Examples

>>> A = ak.array([3, 2, 1, 1, 2, 3])
>>> ak.unique(A)
array([1, 2, 3])
arkouda.unregister(name: str) str[source]
arkouda.unregister_all(names: list)[source]

Unregister all names provided

Parameters:

names (list) – List of names used to register objects to be unregistered

Return type:

None
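
Examples

A minimal sketch pairing register_all and unregister_all; the name is illustrative:

>>> ak.register_all({"MyArray": ak.array([0, 1, 2])})
>>> ak.unregister_all(["MyArray"])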

arkouda.unregister_pdarray_by_name(user_defined_name: str) None[source]

Unregister a named pdarray in the arkouda server which was previously registered using register() and/or attached to using attach_pdarray()

Parameters:

user_defined_name (str) – user defined name which array was registered under

Return type:

None

Raises:

RuntimeError – Raised if the server could not find the internal name/symbol to remove

Notes

Registered names/pdarrays in the server are immune to deletion until they are unregistered.

Examples

>>> a = ak.zeros(100)
>>> a.register("my_zeros")
>>> # potentially disconnect from server and reconnect to server
>>> b = ak.attach_pdarray("my_zeros")
>>> # ...other work...
>>> ak.unregister_pdarray_by_name('my_zeros')
arkouda.unsqueeze(p)[source]
arkouda.update_hdf(columns: Mapping[str, arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.segarray.SegArray | arkouda.array_view.ArrayView] | List[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.segarray.SegArray | arkouda.array_view.ArrayView], prefix_path: str, names: List[str] | None = None, repack: bool = True)[source]

Overwrite the datasets whose names appear in names, or the keys of columns if columns is a dictionary

Parameters:
  • columns (dict or list of pdarrays) – Collection of arrays to save

  • prefix_path (str) – Directory and filename prefix for output files

  • names (list of str) – Dataset names for the pdarrays

  • repack (bool) – Default: True HDF5 does not release memory on delete. When True, the inaccessible data (that was overwritten) is removed. When False, the data remains, but is inaccessible. Setting to false will yield better performance, but will cause file sizes to expand.

Raises:

RuntimeError – Raised if a server-side error is thrown saving the datasets

Notes

  • If file does not contain File_Format attribute to indicate how it was saved, the file name is checked for _LOCALE#### to determine if it is distributed.

  • If the datasets provided do not exist, they will be added

  • Because HDF5 deletes do not release memory, this will create a copy of the file with the new data

  • This workflow is slightly different from to_hdf to prevent reading and creating a copy of the file for each dataset
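
Examples

A hedged sketch overwriting a previously saved dataset; the path is illustrative:

>>> a = ak.arange(25)
>>> ak.to_hdf({'a': a}, 'path/name_prefix')
>>> ak.update_hdf({'a': a * 2}, 'path/name_prefix')  # overwrites dataset 'a'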

arkouda.value_counts(pda: arkouda.pdarrayclass.pdarray) Categorical | Tuple[arkouda.pdarrayclass.pdarray | arkouda.strings.Strings, arkouda.pdarrayclass.pdarray | None][source]

Count the occurrences of the unique values of an array.

Parameters:

pda (pdarray, int64) – The array of values to count

Returns:

  • unique_values (pdarray, int64 or Strings) – The unique values, sorted in ascending order

  • counts (pdarray, int64) – The number of times the corresponding unique value occurs

Raises:

TypeError – Raised if the parameter is not a pdarray

See also

unique, histogram

Notes

This function differs from histogram() in that it only returns counts for values that are present, leaving out empty “bins”. This function delegates all logic to the unique() method where the return_counts parameter is set to True.

Examples

>>> A = ak.array([2, 0, 2, 4, 0, 0])
>>> ak.value_counts(A)
(array([0, 2, 4]), array([3, 2, 1]))
arkouda.var(pda: pdarray, ddof: arkouda.dtypes.int_scalars = 0) numpy.float64[source]

Return the variance of values in the array.

Parameters:
  • pda (pdarray) – Values for which to calculate the variance

  • ddof (int_scalars) – “Delta Degrees of Freedom” used in calculating var

Returns:

The scalar variance of the array

Return type:

np.float64

Raises:
  • TypeError – Raised if pda is not a pdarray instance

  • ValueError – Raised if the ddof >= pdarray size

  • RuntimeError – Raised if there’s a server-side error thrown

See also

mean, std

Notes

The variance is the average of the squared deviations from the mean, i.e., var = mean((x - x.mean())**2).

The mean is normally calculated as x.sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of a hypothetical infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables.
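
Examples

A short worked example; the values follow directly from the formula above, though display formatting may vary by version.

>>> a = ak.arange(10)
>>> ak.var(a)  # mean is 4.5, so var = mean((a - 4.5)**2) = 8.25
8.25
>>> ak.var(a, ddof=1)  # divisor N - ddof = 9 gives the unbiased estimate
9.166666666666666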

arkouda.where(condition: arkouda.pdarrayclass.pdarray, A: str | arkouda.dtypes.numeric_scalars | arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.categorical.Categorical, B: str | arkouda.dtypes.numeric_scalars | arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.categorical.Categorical) arkouda.pdarrayclass.pdarray | arkouda.strings.Strings | arkouda.categorical.Categorical[source]

Returns an array with elements chosen from A and B based upon a conditioning array. As is the case with numpy.where, the return array consists of values from the first array (A) where the conditioning array elements are True and from the second array (B) where the conditioning array elements are False.

Parameters:
  • condition (pdarray) – The array of True/False values used to choose elements from A or B

  • A (str, numeric_scalars, pdarray, Strings, or Categorical) – Value(s) used where condition is True

  • B (str, numeric_scalars, pdarray, Strings, or Categorical) – Value(s) used where condition is False

Returns:

Values chosen from A where the condition is True and B where the condition is False

Return type:

pdarray

Raises:
  • TypeError – Raised if the condition object is not a pdarray; if A or B is not an int, np.int64, float, np.float64, pdarray, str, Strings, or Categorical; if pdarray dtypes are not supported or do not match; or if multiple condition clauses (see the Notes section) are applied

  • ValueError – Raised if the shapes of the condition, A, and B pdarrays are unequal

Examples

>>> a1 = ak.arange(1,10)
>>> a2 = ak.ones(9, dtype=np.int64)
>>> cond = a1 < 5
>>> ak.where(cond,a1,a2)
array([1, 2, 3, 4, 1, 1, 1, 1, 1])
>>> a1 = ak.arange(1,10)
>>> a2 = ak.ones(9, dtype=np.int64)
>>> cond = a1 == 5
>>> ak.where(cond,a1,a2)
array([1, 1, 1, 1, 5, 1, 1, 1, 1])
>>> a1 = ak.arange(1,10)
>>> a2 = 10
>>> cond = a1 < 5
>>> ak.where(cond,a1,a2)
array([1, 2, 3, 4, 10, 10, 10, 10, 10])
>>> s1 = ak.array([f'str {i}' for i in range(10)])
>>> s2 = 'str 21'
>>> cond = (ak.arange(10) % 2 == 0)
>>> ak.where(cond,s1,s2)
array(['str 0', 'str 21', 'str 2', 'str 21', 'str 4', 'str 21', 'str 6', 'str 21', 'str 8', 'str 21'])
>>> c1 = ak.Categorical(ak.array([f'str {i}' for i in range(10)]))
>>> c2 = ak.Categorical(ak.array([f'str {i}' for i in range(9, -1, -1)]))
>>> cond = (ak.arange(10) % 2 == 0)
>>> ak.where(cond,c1,c2)
array(['str 0', 'str 8', 'str 2', 'str 6', 'str 4', 'str 4', 'str 6', 'str 2', 'str 8', 'str 0'])

Notes

A and B must have the same dtype, and only one conditional clause is supported; a compound condition such as n < 5, n > 1, which numpy supports, is not currently supported in Arkouda. A workaround is sketched below.
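
As a workaround, a compound condition can first be reduced to a single boolean pdarray; this sketch assumes the standard bitwise & operator on boolean pdarrays.

>>> a = ak.arange(10)
>>> cond = (a > 1) & (a < 5)  # one boolean pdarray built from two comparisons
>>> ak.where(cond, a, -1)
array([-1, -1, 2, 3, 4, -1, -1, -1, -1, -1])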

arkouda.write_log(log_msg: str, tag: str = 'ClientGeneratedLog', log_lvl: LogLevel = LogLevel.INFO)[source]

Allows the user to write custom logs.

Parameters:
  • log_msg (str) – The message to be added to the server log

  • tag (str) – The tag to use in the log. This takes the place of the server function name and allows for easy identification of custom logs. Defaults to “ClientGeneratedLog”.

  • log_lvl (LogLevel) – The level at which to write the log. Defaults to LogLevel.INFO.

See also

LogLevel
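
Examples

A brief usage sketch; the message and tag are illustrative, and LogLevel.DEBUG is assumed to be among the defined levels.

>>> ak.write_log("launching ingest step", tag="IngestPipeline")
>>> ak.write_log("retrying ingest step", tag="IngestPipeline", log_lvl=ak.LogLevel.DEBUG)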

arkouda.xlogy(x: arkouda.pdarrayclass.pdarray | numpy.float64, y: arkouda.pdarrayclass.pdarray)[source]

Computes x * log(y).

Parameters:
  • x (pdarray or np.float64) – x must have a datatype that is castable to float64

  • y (pdarray)

Return type:

arkouda.pdarrayclass.pdarray

Examples

>>> import arkouda as ak
>>> ak.connect()
>>> from arkouda.scipy.special import xlogy
>>> xlogy(ak.array([1, 2, 3, 4]), ak.array([5, 6, 7, 8]))
array([1.6094379124341003 3.5835189384561099 5.8377304471659395 8.317766166719343])
>>> xlogy(5.0, ak.array([1, 2, 3, 4]))
array([0.00000000000000000 3.4657359027997265 5.4930614433405491 6.9314718055994531])
arkouda.zero_up(vals)[source]

Map an array of sparse values to 0-up indices.

Parameters:

vals (pdarray) – Array to map to dense index

Returns:

aligned – Array with values replaced by 0-up indices

Return type:

pdarray
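
Examples

A small illustration; it assumes indices are assigned in ascending order of the unique values.

>>> vals = ak.array([20, 7, 7, 20, 42])
>>> ak.zero_up(vals)
array([1, 0, 0, 1, 2])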

arkouda.zeros(size: arkouda.dtypes.int_scalars | str, dtype: numpy.dtype | type | str | arkouda.dtypes.BigInt = float64, max_bits: int | None = None) arkouda.pdarrayclass.pdarray[source]

Create a pdarray filled with zeros.

Parameters:
  • size (int_scalars) – Size of the array (only rank-1 arrays supported)

  • dtype (all_scalars) – Type of resulting array, default float64

  • max_bits (int) – Specifies the maximum number of bits; only used for bigint pdarrays

Returns:

Zeros of the requested size and dtype

Return type:

pdarray

Raises:

TypeError – Raised if the supplied dtype is not supported or if the size parameter is neither an int nor a str that is parseable to an int.

See also

ones, zeros_like

Examples

>>> ak.zeros(5, dtype=ak.int64)
array([0, 0, 0, 0, 0])
>>> ak.zeros(5, dtype=ak.float64)
array([0, 0, 0, 0, 0])
>>> ak.zeros(5, dtype=ak.bool)
array([False, False, False, False, False])
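
The max_bits parameter applies only to bigint arrays. The following sketch assumes a build with bigint support and the pdarray max_bits attribute.

>>> z = ak.zeros(3, dtype=ak.bigint, max_bits=64)
>>> z.max_bits
64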
arkouda.zeros_like(pda: arkouda.pdarrayclass.pdarray) arkouda.pdarrayclass.pdarray[source]

Create a zero-filled pdarray of the same size and dtype as an existing pdarray.

Parameters:

pda (pdarray) – Array to use for size and dtype

Returns:

Equivalent to ak.zeros(pda.size, pda.dtype)

Return type:

pdarray

Raises:

TypeError – Raised if the pda parameter is not a pdarray.

See also

zeros, ones_like

Examples

>>> zeros = ak.zeros(5, dtype=ak.int64)
>>> ak.zeros_like(zeros)
array([0, 0, 0, 0, 0])
>>> zeros = ak.zeros(5, dtype=ak.float64)
>>> ak.zeros_like(zeros)
array([0, 0, 0, 0, 0])
>>> zeros = ak.zeros(5, dtype=ak.bool)
>>> ak.zeros_like(zeros)
array([False, False, False, False, False])