Sparse

This implements sparse arrays of arbitrary dimension on top of numpy and scipy.sparse. It generalizes the scipy.sparse.coo_matrix and scipy.sparse.dok_matrix layouts, but extends beyond just rows and columns to an arbitrary number of dimensions.

Additionally, this project maintains compatibility with the numpy.ndarray interface rather than the numpy.matrix interface used in scipy.sparse.

These differences make this project useful in certain situations where scipy.sparse matrices are not well suited, but it should not be considered a full replacement. It lacks layouts that are not easily generalized, like CSR/CSC, and it depends on scipy.sparse for some computations.

Motivation

Sparse arrays, or arrays that are mostly empty or filled with zeros, are common in many scientific applications. To save space we often avoid storing these arrays in traditional dense formats, and instead choose different data structures. Our choice of data structure can significantly affect our storage and computational costs when working with these arrays.

Design

The main data structure in this library follows the Coordinate List (COO) layout for sparse matrices, but extends it to multiple dimensions.

The COO layout stores the row index, column index, and value of every element:

row  col  data
0    0    10
0    2    13
1    3    9
3    8    21
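
For instance, this table corresponds directly to a scipy.sparse.coo_matrix (a minimal sketch; the shape here is inferred from the maximum indices):

import numpy as np
import scipy.sparse

row = np.array([0, 0, 1, 3])
col = np.array([0, 2, 3, 8])
data = np.array([10, 13, 9, 21])

m = scipy.sparse.coo_matrix((data, (row, col)))  # shape inferred as (4, 9)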

It is straightforward to extend the COO layout to an arbitrary number of dimensions:

dim1  dim2  dim3  ...  data
0     0     0     ...  10
0     0     3     ...  13
0     2     2     ...  9
3     1     4     ...  21

This makes it easy to store a multidimensional sparse array, but we still need to reimplement all of the array operations like transpose, reshape, slicing, tensordot, reductions, etc., which can be challenging in general.

Fortunately, in many cases we can leverage the existing scipy.sparse algorithms if we can intelligently transpose and reshape our multi-dimensional array into an appropriate 2-d sparse matrix, perform a modified sparse matrix operation, and then reshape and transpose back. These reshape and transpose operations can all be done at numpy speeds by modifying the arrays of coordinates. After scipy.sparse runs its operation (often written in C), we convert back by following the same path of reshapings and transpositions in reverse.
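
As a sketch of this idea (the coordinates and shape below are illustrative), the trailing dimensions of a 3-d coordinate array can be collapsed into a single column index with numpy.ravel_multi_index, the result handed to scipy.sparse, and the original coordinates recovered with numpy.unravel_index:

import numpy as np
import scipy.sparse

coords = np.array([[0, 0, 0, 3],   # dim1
                   [0, 0, 2, 1],   # dim2
                   [0, 3, 2, 4]])  # dim3
data = np.array([10, 13, 9, 21])
shape = (4, 4, 5)

# Linearize dims 2 and 3 into one column index, at numpy speed.
rows = coords[0]
cols = np.ravel_multi_index((coords[1], coords[2]), shape[1:])
m = scipy.sparse.coo_matrix((data, (rows, cols)),
                            shape=(shape[0], shape[1] * shape[2]))

# ... run the 2-d scipy.sparse operation on m here ...

# Invert the mapping to recover 3-d coordinates.
new_coords = np.vstack([m.row, *np.unravel_index(m.col, shape[1:])])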

LICENSE

This library is licensed under BSD-3.

Install

You can install this library with pip:

pip install sparse

You can also install from source from GitHub, either by pip installing directly:

pip install git+https://github.com/pydata/sparse

Or by cloning the repository and installing locally:

git clone https://github.com/pydata/sparse.git
cd sparse/
pip install .

Note that this library is under active development and so some API churn should be expected.

Getting Started

Install

If you haven’t already, install the sparse library:

pip install sparse

Create

To start, let’s construct a sparse COO array from a numpy.ndarray:

import numpy as np
import sparse

x = np.random.random((100, 100, 100))
x[x < 0.9] = 0  # fill most of the array with zeros

s = sparse.COO(x)  # convert to sparse array

These store the same information and support many of the same operations, but the sparse version takes up less space in memory:

>>> x.nbytes
8000000
>>> s.nbytes
1102706
>>> s
<COO: shape=(100, 100, 100), dtype=float64, nnz=100246, fill_value=0.0>

For more efficient ways to construct sparse arrays, see documentation on Constructing Arrays.

Compute

Many of the normal Numpy operations work on COO objects just like on numpy.ndarray objects. This includes arithmetic, numpy.ufunc operations, and functions like tensordot and transpose.

>>> np.sin(s) + s.T * 1
<COO: shape=(100, 100, 100), dtype=float64, nnz=189601, fill_value=0.0>

However, operations which map zero elements to nonzero values will usually change the fill value instead of raising an error:

>>> y = s + 5
>>> y
<COO: shape=(100, 100, 100), dtype=float64, nnz=100246, fill_value=5.0>

If you’re sure you want to convert a sparse array to a dense one, you can use the todense method (which returns a numpy.ndarray):

y = s.todense() + 5

For more operations see the Operations documentation or the API reference.

Construct Sparse Arrays

From coordinates and data

You can construct COO arrays from coordinates and value data.

The coords parameter contains the indices where the data is nonzero, and the data parameter contains the data corresponding to those indices. For example, the following code will generate a \(5 \times 5\) diagonal matrix:

>>> import sparse

>>> coords = [[0, 1, 2, 3, 4],
...           [0, 1, 2, 3, 4]]
>>> data = [10, 20, 30, 40, 50]
>>> s = sparse.COO(coords, data, shape=(5, 5))

>>> s.todense()
array([[10,  0,  0,  0,  0],
       [ 0, 20,  0,  0,  0],
       [ 0,  0, 30,  0,  0],
       [ 0,  0,  0, 40,  0],
       [ 0,  0,  0,  0, 50]])

In general coords should be a (ndim, nnz) shaped array. Each row of coords contains one dimension of the desired sparse array, and each column contains the index corresponding to that nonzero element. data contains the nonzero elements of the array corresponding to the indices in coords. Its shape should be (nnz,).

If data is the same across all the coordinates, it can be passed in as a scalar. For example, the following produces the \(4 \times 4\) identity matrix:

>>> import sparse

>>> coords = [[0, 1, 2, 3],
...           [0, 1, 2, 3]]
>>> data = 1
>>> s = sparse.COO(coords, data, shape=(4, 4))

You can, and should, pass in numpy.ndarray objects for coords and data.
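
For example, a small sketch using numpy arrays directly:

import numpy as np
import sparse

coords = np.array([[0, 1, 2],      # shape (ndim, nnz)
                   [2, 1, 0]])
data = np.array([1.0, 2.0, 3.0])   # shape (nnz,)
s = sparse.COO(coords, data, shape=(3, 3))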

If you do not pass a shape, it is inferred from the maximum index in each dimension. If the array extends beyond the maximum index in coords, you should supply the shape explicitly. For example, the following coords would produce a \(4 \times 5\) matrix without the shape keyword argument, but maybe we want one that is actually \(5 \times 5\):

coords = [[0, 3, 2, 1], [4, 1, 2, 0]]
data = [1, 4, 2, 1]
s = COO(coords, data, shape=(5, 5))

COO arrays support arbitrary fill values. The fill value is the “default” value, i.e. the value that is not stored, and it can be something other than zero. For example, the following builds a (bad) representation of a \(2 \times 2\) identity matrix. Note that not all operations are supported for arrays with nonzero fill values.

coords = [[0, 1], [1, 0]]
data = [0, 0]
s = COO(coords, data, fill_value=1)
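
Converting this to a dense array shows why it represents the \(2 \times 2\) identity: the two stored entries are zeros off the diagonal, and every unstored position takes the fill value of 1 (a sketch; the shape is inferred from the coordinates):

s.todense()
# array([[1, 0],
#        [0, 1]])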

From Scipy sparse matrices

To construct a COO array from spmatrix objects, you can use the COO.from_scipy_sparse method. As an example, if x is a scipy.sparse.spmatrix, you can do the following to get an equivalent COO array:

s = COO.from_scipy_sparse(x)
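
For instance, a minimal sketch round-tripping a random SciPy sparse matrix:

import scipy.sparse
import sparse

x = scipy.sparse.random(5, 5, density=0.2, format="csr")
s = sparse.COO.from_scipy_sparse(x)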

From Numpy arrays

To construct COO arrays from numpy.ndarray objects, you can use the COO.from_numpy method. As an example, if x is a numpy.ndarray, you can do the following to get an equivalent COO array:

s = COO.from_numpy(x)

Generating random COO objects

The sparse.random method can be used to create random COO arrays. For example, the following will generate a \(10 \times 10\) matrix with \(10\) nonzero entries, each in the interval \([0, 1)\).

s = sparse.random((10, 10), density=0.1)
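
As a quick check of the result (nnz and density are existing COO attributes; the exact values assume the default fill value of zero):

s.nnz      # 10
s.density  # 0.1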

Building COO Arrays from DOK Arrays

If it is not easy to construct the coords and data directly, you can build COO arrays from DOK arrays instead. DOK arrays provide a simple builder interface for COO arrays, but at this time they can do little else.

You can get started by defining the shape (and optionally, datatype) of the DOK array. If you do not specify a dtype, it is inferred from the value dictionary or is set to dtype('float64') if that is not present.

s = DOK((6, 5, 2))
s2 = DOK((2, 3, 4), dtype=np.uint8)

After this, you can build the array by assigning arrays or scalars to elements or slices of the original array. Broadcasting rules are followed.

s[1:3, 3:1:-1] = [[6, 5]]

At the end, you can convert the DOK array to a COO array, and perform arithmetic or other operations on it.

s3 = COO(s)

In addition, it is possible to access single elements of the DOK array using normal Numpy indexing.

s[1, 2, 1]  # 5
s[5, 1, 1]  # 0

Converting COO objects to other Formats

COO arrays can be converted to Numpy arrays with COO.todense, or to some spmatrix subclasses with methods such as COO.to_scipy_sparse, COO.tocsr, and COO.tocsc.

Operations on COO arrays

Operators

COO objects support a number of operations. They interact with scalars, Numpy arrays, other COO objects, and scipy.sparse.spmatrix objects, all following standard Python and Numpy conventions.

For example, the following Numpy expression produces equivalent results for Numpy arrays, COO arrays, or a mix of the two:

np.log(X.dot(beta.T) + 1)
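
For instance, a minimal sketch with both operands sparse (the names X and beta are illustrative):

import numpy as np
import sparse

X = sparse.random((10, 5), density=0.2)
beta = sparse.random((1, 5), density=0.5)

result = np.log(X.dot(beta.T) + 1)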

However, some operations are not supported, like operations that implicitly cause dense structures, or numpy functions that are not yet implemented for sparse arrays.

np.linalg.svd(x)  # sparse svd not implemented

This page describes those valid operations, and their limitations.

elemwise

This function allows you to apply any arbitrary broadcasting function to any number of arguments where the arguments can be SparseArray objects or scipy.sparse.spmatrix objects. For example, the following will add two arrays:

sparse.elemwise(np.add, x, y)
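
A minimal runnable sketch:

import numpy as np
import sparse

x = sparse.random((3, 4), density=0.5)
y = sparse.random((3, 4), density=0.5)

z = sparse.elemwise(np.add, x, y)  # equivalent to x + y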

Warning

Previously, elemwise was a method of the COO class. Now, it has been moved to the sparse module.

Auto-Densification

Operations that would result in dense matrices, such as some operations with Numpy arrays, raise a ValueError. For example, the following will raise a ValueError if x is a numpy.ndarray:

x + y

However, all of the following are valid operations when x and y are both COO arrays:

x + 0
x != y
x + y
x == 5
5 * x
x / 7.3
x != 0
x == 0
~x
x + 5

We also support operations with a nonzero fill value. These are operations that map zero values to nonzero values, such as x + 1 or ~x. In these cases, they will produce an output with a fill value of 1 or True, assuming the original array has a fill value of 0 or False respectively.

If densification is needed, it must be explicit. In other words, you must call COO.todense on each COO operand you want densified.

Operations with NumPy arrays

In certain situations, operations with NumPy arrays are also supported. For example, the following will work if x is COO and y is a NumPy array:

x * y

The following conditions must be met when performing element-wise operations with NumPy arrays:

  • The operation must produce a consistent fill value. In other words, the resulting array must also be sparse.

  • Operating on the NumPy arrays must not increase the size when broadcasting the arrays.

Operations with scipy.sparse.spmatrix

Certain operations with scipy.sparse.spmatrix are also supported. For example, the following are all allowed if y is a scipy.sparse.spmatrix:

x + y
x - y
x * y
x > y
x < y

In general, operating on a scipy.sparse.spmatrix is the same as operating on a COO array, as long as the spmatrix is to the right of the operator.

Note

Results are not guaranteed if x is a scipy.sparse.spmatrix. For this reason, we recommend explicitly converting all Scipy sparse matrices to COO before any operations.
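
For example, converting explicitly before operating (a minimal sketch):

import scipy.sparse
import sparse

x = sparse.random((5, 5), density=0.2)
y = scipy.sparse.random(5, 5, density=0.2)

z = sparse.COO.from_scipy_sparse(y) + x  # convert first, then operate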

Broadcasting

All binary operators support broadcasting. This means that (under certain conditions) you can perform binary operations on arrays with unequal shape. Namely, when the shape is missing a dimension, or when a dimension is 1. For example, performing a binary operation on two COO arrays with shapes (4,) and (5, 1) yields an object of shape (5, 4). The same happens with arrays of shape (1, 4) and (5, 1). However, (4, 1) and (5, 1) will raise a ValueError.
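
As a sketch of these rules:

import sparse

a = sparse.random((4,), density=0.5)
b = sparse.random((5, 1), density=0.5)
(a + b).shape  # (5, 4)

c = sparse.random((1, 4), density=0.5)
(b + c).shape  # (5, 4)

# sparse.random((4, 1), density=0.5) + b would raise a ValueError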

Element-wise Operations

COO arrays support a variety of element-wise operations. However, as with operators, operations that map zero to a nonzero value will change the fill value rather than raise an error.

To illustrate, the following are all possible, and will produce another COO array:

np.abs(x)
np.sin(x)
np.sqrt(x)
np.conj(x)
np.expm1(x)
np.log1p(x)
np.exp(x)
np.cos(x)
np.log(x)

As noted above, the last three of these map zero to a nonzero value, so they produce an array with a nonzero fill value.

Notice that you can apply any unary or binary numpy.ufunc to COO arrays, numpy.ndarray objects, and scalars, and it will work as long as the result is not dense. When applying a ufunc to numpy.ndarray objects, we check that operating on the array with zero would always produce a zero.

Reductions

COO objects support a number of reductions. However, not all important reductions are currently implemented (help welcome!). All of the following currently work:

x.sum(axis=1)
np.max(x)
np.min(x, axis=(0, 2))
x.prod()

Note

If you are performing multiple reductions along the same axes, it may be beneficial to call COO.enable_caching.

COO.reduce

This method takes an arbitrary binary numpy.ufunc and performs a reduction using it. For example, the following will perform a sum:

x.reduce(np.add, axis=1)

Note

This library currently performs reductions by grouping together all coordinates along the supplied axes and reducing those. Then, if the number of elements in a group is deficient, it reduces an extra time with zero. As a result, if the result of a reduction can change by reducing with zero multiple times, this method won’t be accurate. However, it works in most cases.

Partial List of Supported Reductions

Although any binary numpy.ufunc should work for reductions, when calling in the form x.reduction(), common reductions such as sum, prod, min, max, any, and all are supported.

Indexing

COO arrays can be indexed just like regular numpy.ndarray objects. They support integer, slice and boolean indexing. However, numpy advanced indexing is not yet fully supported (see Advanced Indexing below). This means that all of the following work like in Numpy, except that they will produce COO arrays rather than numpy.ndarray objects, and will produce scalars where expected. Assume that z.shape is (5, 6, 7):

z[0]
z[1, 3]
z[1, 4, 3]
z[:3, :2, 3]
z[::-1, 1, 3]
z[-1]

All of the following will raise an IndexError, like in Numpy 1.13 and later.

z[6]
z[3, 6]
z[1, 4, 8]
z[-6]

Advanced Indexing

Advanced indexing (indexing arrays with other arrays) is supported, but only for indexing with a single array. Indexing a single array with multiple arrays is not supported at this time. As above, if z.shape is (5, 6, 7), all of the following will work like NumPy:

z[[0, 1, 2]]
z[1, [3]]
z[1, 4, [3, 6]]
z[:3, :2, [1, 5]]

Package Configuration

By default, when performing something like np.array(COO), we allow the array to be converted into a dense one. To prevent this and raise a RuntimeError instead, set the environment variable SPARSE_AUTO_DENSIFY to 0.

If you want a warning to be raised when creating a sparse array that takes no less memory than an equivalent dense array, set the environment variable SPARSE_WARN_ON_TOO_DENSE to 1.
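
For example, a sketch in Python (depending on the version, these variables may be read when sparse is first imported, so set them beforehand):

import os

os.environ["SPARSE_AUTO_DENSIFY"] = "0"       # raise a RuntimeError instead of densifying
os.environ["SPARSE_WARN_ON_TOO_DENSE"] = "1"  # warn when a sparse array is too dense

import sparse  # import after setting the variables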

Other Operations

COO arrays support a number of other common operations. Among them are dot, tensordot, concatenate and stack, transpose and reshape. You can view the full list on the API reference page.
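
For instance, a short sketch of a few of these (shapes are illustrative):

import sparse

x = sparse.random((3, 4, 5), density=0.3)
y = sparse.random((5, 6), density=0.3)

sparse.tensordot(x, y, axes=1).shape  # (3, 4, 6)
x.transpose((2, 0, 1)).shape          # (5, 3, 4)
x.reshape((12, 5)).shape              # (12, 5)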

Note

Some operations require zero fill-values (such as nonzero) and others (such as concatenate) require that all inputs have consistent fill-values. For details, check the API reference.

API

Description

Classes

COO(coords[, data, shape, has_duplicates, …])

A sparse multidimensional array.

DOK(shape[, data, dtype, fill_value])

A class for building sparse multidimensional arrays.

SparseArray(shape[, fill_value])

An abstract base class for all the sparse array classes.

Functions

argwhere(a)

Find the indices of array elements that are non-zero, grouped by element.

as_coo(x[, shape, fill_value])

Converts any given format to COO.

concatenate(arrays[, axis, compressed_axes])

Concatenate the input arrays along the given dimension.

clip(a[, a_min, a_max, out])

Clip (limit) the values in the array.

diagonal(a[, offset, axis1, axis2])

Extract diagonal from a COO array.

diagonalize(a[, axis])

Diagonalize a COO array.

dot(a, b)

Perform the equivalent of numpy.dot on two arrays.

elemwise(func, *args, **kwargs)

Apply a function to any number of arguments.

eye(N[, M, k, dtype, format, compressed_axes])

Return a 2-D array in the specified format with ones on the diagonal and zeros elsewhere.

full(shape, fill_value[, dtype, format, …])

Return a SparseArray of given shape and type, filled with fill_value.

full_like(a, fill_value[, dtype, format, …])

Return a full array with the same shape and type as a given array.

isposinf(x[, out])

Test element-wise for positive infinity, return result as sparse bool array.

isneginf(x[, out])

Test element-wise for negative infinity, return result as sparse bool array.

kron(a, b)

Kronecker product of 2 sparse arrays.

load_npz(filename)

Load a sparse matrix in numpy’s .npz format from disk.

matmul(a, b)

Perform the equivalent of numpy.matmul on two arrays.

nanmax(x[, axis, keepdims, dtype, out])

Maximize along the given axes, skipping NaN values.

nanmean(x[, axis, keepdims, dtype, out])

Performs a NaN skipping mean operation along the given axes.

nanmin(x[, axis, keepdims, dtype, out])

Minimize along the given axes, skipping NaN values.

nanprod(x[, axis, keepdims, dtype, out])

Performs a product operation along the given axes, skipping NaN values.

nanreduce(x, method[, identity, axis, keepdims])

Performs an NaN skipping reduction on this array.

nansum(x[, axis, keepdims, dtype, out])

Performs a NaN skipping sum operation along the given axes.

ones(shape[, dtype, format, compressed_axes])

Return a SparseArray of given shape and type, filled with ones.

ones_like(a[, dtype, format, compressed_axes])

Return a SparseArray of ones with the same shape and type as a.

outer(a, b[, out])

Return outer product of two sparse arrays.

random(shape[, density, random_state, …])

Generate a random sparse multidimensional array.

result_type(*arrays_and_dtypes)

Returns the type that results from applying the NumPy type promotion rules to the arguments.

roll(a, shift[, axis])

Shifts elements of an array along specified axis.

save_npz(filename, matrix[, compressed])

Save a sparse matrix to disk in numpy’s .npz format.

stack(arrays[, axis, compressed_axes])

Stack the input arrays along the given dimension.

tensordot(a, b[, axes])

Perform the equivalent of numpy.tensordot.

tril(x[, k])

Returns an array with all elements above the k-th diagonal set to zero.

triu(x[, k])

Returns an array with all elements below the k-th diagonal set to zero.

where(condition[, x, y])

Select values from either x or y depending on condition.

zeros(shape[, dtype, format, compressed_axes])

Return a SparseArray of given shape and type, filled with zeros.

zeros_like(a[, dtype, format, compressed_axes])

Return a SparseArray of zeros with the same shape and type as a.

Roadmap

Background

The aim of PyData/Sparse is to create sparse containers that implement the ndarray interface. Traditionally in the PyData ecosystem, sparse arrays have been provided by the scipy.sparse submodule. All containers there depend on and emulate the numpy.matrix interface. This means that they are limited to two dimensions and also don’t work well in places where numpy.ndarray would work.

PyData/Sparse is well on its way to replacing scipy.sparse as the de-facto sparse array implementation in the PyData ecosystem.

Topics

  • More storage formats (the most important being CSF, a generalisation of CSR/CSC).

  • Better performance/algorithms

  • Covering more of the NumPy API

  • SciPy Integration

  • Dask integration for high scalability

  • CuPy integration for GPU-acceleration

  • Maintenance and General Improvements

More Storage Formats

In the sparse domain, you have to make a choice of format when representing your array in memory, and different formats have different trade-offs. For example:

  • CSR/CSC are usually expected by external libraries, and have good space characteristics for most arrays

  • DOK allows in-place modification and writes

  • LIL has faster writes if elements are written in order.

  • BSR allows block-writes and reads

The most important formats are, of course, CSR and CSC, because they allow zero-copy interaction with a number of libraries including MKL, LAPACK and others. This will allow PyData/Sparse to quickly reach the functionality of scipy.sparse, accelerating the path to its replacement.

Better Performance/Algorithms

There are a few places in scipy.sparse where algorithms are sub-optimal, sometimes due to reliance on NumPy, which doesn’t have these algorithms. We intend to improve the algorithms both in NumPy, giving the broader community a chance to use them, and in PyData/Sparse, to reach optimal efficiency in the broadest use-cases.

Covering More of the NumPy API

Our eventual aim is to cover all areas of NumPy where algorithms exist that give sparse arrays an edge over dense arrays. Currently, PyData/Sparse supports reductions, element-wise functions and other common functions such as stacking, concatenating and tensor products. Common uses of sparse arrays include linear algebra and graph theoretic subroutines, so we plan on covering those first.

SciPy Integration

PyData/Sparse aims to build containers and elementary operations on them, such as element-wise operations, reductions and so on. We plan on modifying the current graph theoretic subroutines in scipy.sparse.csgraph to support PyData/Sparse arrays. The same applies for linear algebra and scipy.sparse.linalg.

Dask Integration for High Scalability

Dask is a project that takes ndarray-style containers and then allows them to scale across multiple cores or clusters. We plan on tighter integration and cooperation with the Dask team to ensure that as much Dask functionality as possible works with sparse arrays.

CuPy integration for GPU-acceleration

CuPy is a project that implements a large portion of NumPy’s ndarray interface on GPUs. We plan to integrate with CuPy so that it’s possible to accelerate sparse arrays on GPUs.

Contributing

General Guidelines

sparse is a community-driven project. You can find our repository on GitHub. Feel free to open issues for new features or bugs, or open a pull request to fix a bug or add a new feature.

If you haven’t contributed to open-source before, we recommend you read this excellent guide by GitHub on how to contribute to open source. The guide is long, so you can gloss over things you’re familiar with.

If you’re not already familiar with it, we follow the fork and pull model on GitHub.

Filing Issues

If you find a bug or would like a new feature, you might want to consider filing a new issue on GitHub. Before you open a new issue, please make sure of the following:

  • This should go without saying, but make sure what you are requesting is within the scope of this project.

  • The bug/feature is still present/missing on the master branch on GitHub.

  • A similar issue or pull request isn’t already open. If one already is, it’s better to contribute to the discussion there.

Contributing Code

This project has a number of requirements for all code contributed.

  • We use flake8 to automatically lint the code and maintain code style.

  • We use Numpy-style docstrings.

  • It’s ideal if user-facing API changes or new features have documentation added.

  • 100% code coverage is recommended for all new code in any submitted PR. Doctests count toward coverage.

  • Performance optimizations should have benchmarks added in benchmarks.

Setting up Your Development Environment

The following is all you need to set up your development environment, after forking and cloning the repository:

pip install -e .[all]

Running/Adding Unit Tests

It is best if all new functionality and/or bug fixes have unit tests added with each use-case.

We use pytest as our unit testing framework, with the pytest-cov extension to check code coverage and pytest-flake8 to check code style. You don’t need to configure these extensions yourself. Once you’ve configured your environment, you can just cd to the root of your repository and run

pytest --pyargs sparse

This automatically checks code style and functionality, and prints code coverage, though it doesn’t fail on low coverage.

Unit tests are automatically run on Travis CI for pull requests.

Coverage

The pytest script automatically reports coverage, both on the terminal for missing line numbers, and in annotated HTML form in htmlcov/index.html.

Coverage is automatically checked on CodeCov for pull requests.

Adding/Building the Documentation

If a feature is stable and relatively finalized, it is time to add it to the documentation. If you are adding any private/public functions, it is best to add docstrings, to aid in reviewing code and also for the API reference.

We use Numpy-style docstrings and Sphinx to document this library. Sphinx, in turn, uses reStructuredText as its markup language.

We use the Sphinx Autosummary extension to generate API references. In particular, you may want to look at the docs/generated directory to see how these files look and where to add new functions, classes or modules. For example, if you add a new function to the sparse.COO class, you would open up docs/generated/sparse.COO.rst and add in the name of the function where appropriate.

To build the documentation, you can cd into the docs directory and run

sphinx-build -W -b html . _build/html

After this, you can find an HTML version of the documentation in docs/_build/html/index.html.

Documentation for pull requests is automatically built on CircleCI and can be found in the build artifacts.

Adding and Running Benchmarks

We use Airspeed Velocity to run benchmarks. We have it set up to use conda, but you can edit the configuration locally if you so wish.

Changelog

0.10.0 / 2020-05-13

0.9.1 / 2020-01-23

0.8.0 / 2019-08-26

This release switches to Numba’s new typed lists and includes a lot of back-end work on the CI infrastructure, so Linux, macOS and Windows are now officially tested. It also includes bug fixes.

It also adds in-progress, not-yet-public support for the GCXS format, a generalisation of CSR/CSC (huge thanks to @daletovar).

0.7.0 / 2019-03-14

This release adds compatibility with NumPy’s new __array_function__ protocol; for details, refer to NEP-18.

The other big change is that we dropped compatibility with Python 2. Users on Python 2 should use version 0.6.0.

There are also some bug-fixes relating to fill-values.

This was mainly a contributor-driven release.

0.6.0 / 2018-12-19

This release breaks backward-compatibility. Previously, if arrays were fed into NumPy functions, an attempt would be made to densify the array and apply the NumPy function. This was unintended behaviour in most cases, with the array filling up memory before raising a MemoryError if the array was too large.

We have now changed this behaviour so that a RuntimeError is now raised if an attempt is made to automatically densify an array. To densify, use the explicit .todense() method.

0.5.0 / 2018-10-12

  • Added COO.real, COO.imag, and COO.conj (PR #196).

  • Added sparse.kron function (PR #194, PR #195).

  • Added order parameter to COO.reshape to make it work with np.reshape (PR #193).

  • Added COO.mean and sparse.nanmean (PR #190).

  • Added sparse.full and sparse.full_like (PR #189).

  • Added COO.clip method (PR #185).

  • Added COO.copy method, and changed pickle of COO to not include its cache (PR #184).

  • Added sparse.eye, sparse.zeros, sparse.zeros_like, sparse.ones, and sparse.ones_like (PR #183).

0.4.1 / 2018-09-12

  • Allow mixed ndarray-COO operations if the result is sparse (Issue #124, via PR #182).

  • Allow specifying a fill-value when converting from NumPy arrays (Issue #179, via PR #180).

  • Added COO.any and COO.all methods (PR #175).

  • Indexing for COO now accepts a single one-dimensional array index (PR #172).

  • The fill-value can now be something other than zero or False (PR #165).

  • Added a sparse.roll function (PR #160).

  • Numba code now releases the GIL. This leads to better multi-threaded performance in Dask (PR #159).

  • To resolve a number of bugs, COO.coords.dtype is now always np.int64. COO therefore uses more memory than before (PR #158).

  • Add support for saving and loading COO files from disk (Issue #153, via PR #154).

  • Support COO.nonzero and np.argwhere (Issue #145, via PR #148).

  • Allow faux in-place operations (Issue #80, via PR #146).

  • COO is now always canonical (PR #141).

  • Improve indexing performance (PR #128).

  • Improve element-wise performance (PR #127).

  • Reductions now support a negative axis (Issue #117, via PR #118).

  • Match behaviour of ufunc.reduce from NumPy (Issue #107, via PR #108).

0.3.1 / 2018-04-12

0.3.0 / 2018-02-22

  • Add NaN-skipping aggregations (PR #102).

  • Add equivalent to np.where (PR #102).

  • N-input universal functions now work (PR #98).

  • Make dot more consistent with NumPy (PR #96).

  • Create a base class SparseArray (PR #92).

  • Minimum NumPy version is now 1.13 (PR #90).

  • Fix a bug where setting a DOK element to zero did nothing (Issue #93, via PR #94).

0.2.0 / 2018-01-25

  • Support faster np.array(COO) (PR #87).

  • Add DOK type (PR #85).

  • Fix sum for large arrays (Issue #82, via PR #83).

  • Support .size and .density (PR #69).

  • Documentation added for the package (PR #43).

  • Minimum required SciPy version is now 0.19 (PR #70).

  • len(COO) now works (PR #68).

  • scalar op COO now works for all operators (PR #67).

  • Validate axes for .transpose() (PR #61).

  • Extend indexing support (PR #57).

  • Add random function for generating random sparse arrays (PR #41).

  • COO(COO) now copies the original object (PR #55).

  • NumPy universal functions and reductions now work on COO arrays (PR #49).

  • Fix concatenate and stack for large arrays (Issue #32, via PR #51).

  • Fix nnz for scalars (Issue #47, via PR #48).

  • Support more operators and remove all special cases (PR #46).

  • Add support for triu and tril (PR #40).

  • Add support for Ellipsis (...) and None when indexing (PR #37).

  • Add support for bitwise binary operations like & and | (PR #38).

  • Support broadcasting in element-wise operations (PR #35).

Contributor Covenant Code of Conduct

Our Pledge

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.

Our Standards

Examples of behavior that contributes to a positive environment for our community include:

  • Demonstrating empathy and kindness toward other people

  • Being respectful of differing opinions, viewpoints, and experiences

  • Giving and gracefully accepting constructive feedback

  • Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience

  • Focusing on what is best not just for us as individuals, but for the overall community

Examples of unacceptable behavior include:

  • The use of sexualized language or imagery, and sexual attention or advances of any kind

  • Trolling, insulting or derogatory comments, and personal or political attacks

  • Public or private harassment

  • Publishing others’ private information, such as a physical or email address, without their explicit permission

  • Other conduct which could reasonably be considered inappropriate in a professional setting

Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

Scope

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at habbasi@quansight.com. All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.

Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

1. Correction

Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.

2. Warning

Community Impact: A violation through a single incident or series of actions.

Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.

3. Temporary Ban

Community Impact: A serious violation of community standards, including sustained inappropriate behavior.

Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

4. Permanent Ban

Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

Consequence: A permanent ban from any sort of public interaction within the community.

Attribution

This Code of Conduct is adapted from the Contributor Covenant, version 2.0, available at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.

Community Impact Guidelines were inspired by Mozilla’s code of conduct enforcement ladder.

For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.