Sparse¶
This library implements sparse arrays of arbitrary dimension on top of numpy and scipy.sparse. It generalizes the scipy.sparse.coo_matrix and scipy.sparse.dok_matrix layouts, but extends beyond just rows and columns to an arbitrary number of dimensions.
Additionally, this project maintains compatibility with the numpy.ndarray interface rather than the numpy.matrix interface used in scipy.sparse.
These differences make this project useful in certain situations where scipy.sparse matrices are not well suited, but it should not be considered a full replacement. It lacks layouts that are not easily generalized, like CSR/CSC, and it depends on scipy.sparse for some computations.
Motivation¶
Sparse arrays, or arrays that are mostly empty or filled with zeros, are common in many scientific applications. To save space we often avoid storing these arrays in traditional dense formats, and instead choose different data structures. Our choice of data structure can significantly affect our storage and computational costs when working with these arrays.
Design¶
The main data structure in this library follows the Coordinate List (COO) layout for sparse matrices, but extends it to multiple dimensions.
The COO layout stores the row index, column index, and value of every nonzero element:

| row | col | data |
|---|---|---|
| 0 | 0 | 10 |
| 0 | 2 | 13 |
| 1 | 3 | 9 |
| 3 | 8 | 21 |
It is straightforward to extend the COO layout to an arbitrary number of dimensions:
| dim1 | dim2 | dim3 | … | data |
|---|---|---|---|---|
| 0 | 0 | 0 | … | 10 |
| 0 | 0 | 3 | … | 13 |
| 0 | 2 | 2 | … | 9 |
| 3 | 1 | 4 | … | 21 |
This makes it easy to store a multidimensional sparse array, but we still need to reimplement all of the array operations like transpose, reshape, slicing, tensordot, reductions, etc., which can be challenging in general.
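The layout above can be sketched in plain numpy. This is only an illustration of the idea (the coordinate values here are invented for the example), not the library's internal code:

```python
import numpy as np

# Build a small dense 3-d array with a few nonzero entries.
x = np.zeros((2, 3, 4))
x[0, 0, 0], x[0, 0, 3], x[0, 2, 2], x[1, 1, 3] = 10, 13, 9, 21

# The n-dimensional COO layout: one row of coordinates per dimension,
# one column per stored element, plus a flat array of values.
coords = np.array(np.nonzero(x))  # shape (ndim, nnz) == (3, 4)
data = x[tuple(coords)]           # shape (nnz,)
```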
Fortunately, in many cases we can leverage the existing scipy.sparse algorithms if we can intelligently transpose and reshape our multi-dimensional array into an appropriate 2-d sparse matrix, perform a modified sparse matrix operation, and then reshape and transpose back. These reshape and transpose operations can all be done at numpy speeds by modifying the arrays of coordinates. After scipy.sparse runs its operations (often written in C), we convert back by following the same path of reshapings and transpositions in reverse.
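As an illustration of why these reshapes are cheap, the coordinate arrays can be re-linearized with numpy alone. This is a sketch of the idea with made-up coordinates, not the library's implementation:

```python
import numpy as np

# Four stored elements of a (2, 3, 4) array, in (ndim, nnz) coordinate form.
shape = (2, 3, 4)
coords = np.array([[0, 0, 0, 1],
                   [0, 0, 2, 1],
                   [0, 3, 2, 3]])

# Reshape to an equivalent (6, 4) 2-d matrix by linearizing the coordinates;
# the stored values never move, only the index arrays are recomputed.
flat = np.ravel_multi_index(coords, shape)   # one linear index per element
rows, cols = np.unravel_index(flat, (6, 4))  # the same elements, viewed as 6x4
```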
LICENSE¶
This library is licensed under the BSD 3-Clause license.
Install¶
You can install this library with pip:
pip install sparse
You can also install from source from GitHub, either by pip installing directly:
pip install git+https://github.com/pydata/sparse
Or by cloning the repository and installing locally:
git clone https://github.com/pydata/sparse.git
cd sparse/
pip install .
Note that this library is under active development and so some API churn should be expected.
Getting Started¶
Create¶
To start, let's construct a sparse COO array from a numpy.ndarray:
import numpy as np
import sparse
x = np.random.random((100, 100, 100))
x[x < 0.9] = 0 # fill most of the array with zeros
s = sparse.COO(x) # convert to sparse array
These store the same information and support many of the same operations, but the sparse version takes up less space in memory:
>>> x.nbytes
8000000
>>> s.nbytes
1102706
>>> s
<COO: shape=(100, 100, 100), dtype=float64, nnz=100246, fill_value=0.0>
For more efficient ways to construct sparse arrays, see documentation on Constructing Arrays.
Compute¶
Many of the normal Numpy operations work on COO objects just like on numpy.ndarray objects. This includes arithmetic, numpy.ufunc operations, and functions like tensordot and transpose.
>>> np.sin(s) + s.T * 1
<COO: shape=(100, 100, 100), dtype=float64, nnz=189601, fill_value=0.0>
However, operations which map zero elements to nonzero will usually change the fill value instead of raising an error:
>>> s + 5
<COO: shape=(100, 100, 100), dtype=float64, nnz=100246, fill_value=5.0>
If you're sure you want to convert a sparse array to a dense one, you can use the todense method (which returns a numpy.ndarray):
y = s.todense() + 5
For more operations see the Operations documentation or the API reference.
Construct Sparse Arrays¶
From coordinates and data¶
You can construct COO arrays from coordinates and value data. The coords parameter contains the indices where the data is nonzero, and the data parameter contains the data corresponding to those indices.
For example, the following code will generate a \(5 \times 5\) diagonal
matrix:
>>> import sparse
>>> coords = [[0, 1, 2, 3, 4],
... [0, 1, 2, 3, 4]]
>>> data = [10, 20, 30, 40, 50]
>>> s = sparse.COO(coords, data, shape=(5, 5))
>>> s.todense()
array([[10, 0, 0, 0, 0],
[ 0, 20, 0, 0, 0],
[ 0, 0, 30, 0, 0],
[ 0, 0, 0, 40, 0],
[ 0, 0, 0, 0, 50]])
In general, coords should be a (ndim, nnz)-shaped array. Each row of coords contains one dimension of the desired sparse array, and each column contains the index corresponding to one nonzero element. data contains the nonzero elements of the array corresponding to the indices in coords. Its shape should be (nnz,).
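To see what this (ndim, nnz) convention means concretely, here is a plain-numpy sketch (with invented coordinates) that densifies coords and data by hand; sparse.COO does this bookkeeping for you:

```python
import numpy as np

# Three nonzero elements of a 3-d array of shape (2, 4, 4).
coords = np.array([[0, 0, 1],   # first-dimension index of each element
                   [2, 1, 0],   # second-dimension index
                   [1, 3, 2]])  # third-dimension index
data = np.array([1.0, 2.0, 3.0])

# Each column of coords addresses one element of the dense array.
dense = np.zeros((2, 4, 4))
dense[tuple(coords)] = data
```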
If data is the same across all the coordinates, it can be passed in as a scalar. For example, the following produces the \(4 \times 4\) identity matrix:
>>> import sparse
>>> coords = [[0, 1, 2, 3],
... [0, 1, 2, 3]]
>>> data = 1
>>> s = sparse.COO(coords, data, shape=(4, 4))
You can, and should, pass in numpy.ndarray objects for coords and data.
If the shape keyword is not supplied, the shape of the resulting array is determined from the maximum index in each dimension. If the array extends beyond the maximum index in coords, you should supply the shape explicitly. For example, if we did the following without the shape keyword argument, it would result in a \(4 \times 5\) matrix, but maybe we wanted one that was actually \(5 \times 5\):
coords = [[0, 3, 2, 1], [4, 1, 2, 0]]
data = [1, 4, 2, 1]
s = COO(coords, data, shape=(5, 5))
COO arrays support arbitrary fill values. The fill value is the “default” value, that is, the value that is not stored, and it can be something other than zero. For example, the following builds a (bad) representation of a \(2 \times 2\) identity matrix. Note that not all operations are supported for arrays with nonzero fill values.
coords = [[0, 1], [1, 0]]
data = [0, 0]
s = COO(coords, data, fill_value=1)
From Scipy sparse matrices¶
To construct a COO array from spmatrix objects, you can use the COO.from_scipy_sparse method. As an example, if x is a scipy.sparse.spmatrix, you can do the following to get an equivalent COO array:
s = COO.from_scipy_sparse(x)
From Numpy arrays¶
To construct COO arrays from numpy.ndarray objects, you can use the COO.from_numpy method. As an example, if x is a numpy.ndarray, you can do the following to get an equivalent COO array:
s = COO.from_numpy(x)
Generating random COO objects¶
The sparse.random method can be used to create random COO arrays. For example, the following will generate a \(10 \times 10\) matrix with \(10\) nonzero entries, each in the interval \([0, 1)\):
s = sparse.random((10, 10), density=0.1)
Building COO Arrays from DOK Arrays¶
It's possible to build COO arrays from DOK arrays, if it is not easy to construct the coords and data in a simple way. DOK arrays provide a simple builder interface for building COO arrays, but at this time they can do little else.
You can get started by defining the shape (and optionally, the datatype) of the DOK array. If you do not specify a dtype, it is inferred from the value dictionary, or is set to dtype('float64') if that is not present.
s = DOK((6, 5, 2))
s2 = DOK((2, 3, 4), dtype=np.uint8)
After this, you can build the array by assigning arrays or scalars to elements or slices of the original array. Broadcasting rules are followed.
s[1:3, 3:1:-1] = [[6, 5]]
At the end, you can convert the DOK array to a COO array, and perform arithmetic or other operations on it.
s3 = COO(s)
In addition, it is possible to access single elements of the DOK array using normal Numpy indexing:
s[1, 2, 1] # 5
s[5, 1, 1] # 0
Converting COO objects to other Formats¶
COO arrays can be converted to Numpy arrays, or to some spmatrix subclasses, via the following methods:
- COO.todense: Converts to a numpy.ndarray unconditionally.
- COO.maybe_densify: Converts to a numpy.ndarray based on certain constraints.
- COO.to_scipy_sparse: Converts to a scipy.sparse.coo_matrix if the array is two dimensional.
- COO.tocsr: Converts to a scipy.sparse.csr_matrix if the array is two dimensional.
- COO.tocsc: Converts to a scipy.sparse.csc_matrix if the array is two dimensional.
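For the two-dimensional conversions, what happens is essentially a scipy.sparse round trip. A sketch using scipy directly (with invented coordinates; this is roughly what COO.tocsr amounts to, not the library's own code):

```python
import numpy as np
from scipy import sparse as sps

# (2, nnz) coordinates and values for a 4x3 matrix, as a COO array holds them.
coords = np.array([[0, 1, 3],
                   [2, 0, 1]])
data = np.array([1.0, 2.0, 3.0])

# Build a scipy COO matrix from the row/column index arrays, then convert.
m = sps.coo_matrix((data, (coords[0], coords[1])), shape=(4, 3))
csr = m.tocsr()
```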
Operations on COO arrays¶
Operators¶
COO objects support a number of operations. They interact with scalars, Numpy arrays, other COO objects, and scipy.sparse.spmatrix objects, all following standard Python and Numpy conventions.
For example, the following Numpy expression produces equivalent results for Numpy arrays, COO arrays, or a mix of the two:
np.log(X.dot(beta.T) + 1)
However, some operations are not supported, such as operations that implicitly cause dense structures, or numpy functions that are not yet implemented for sparse arrays.
np.linalg.svd(x) # sparse svd not implemented
This page describes those valid operations, and their limitations.
elemwise¶
This function allows you to apply an arbitrary broadcasting function to any number of arguments, where the arguments can be SparseArray objects or scipy.sparse.spmatrix objects.
For example, the following will add two arrays:
sparse.elemwise(np.add, x, y)
Auto-Densification¶
Operations that would result in dense matrices, such as operations with Numpy arrays, raise a ValueError. For example, the following will raise a ValueError if x is a numpy.ndarray:
x + y
However, all of the following are valid operations.
x + 0
x != y
x + y
x == 5
5 * x
x / 7.3
x != 0
x == 0
~x
x + 5
We also support operations with a nonzero fill value. These are operations that map zero values to nonzero values, such as x + 1 or ~x. In these cases, they will produce an output with a fill value of 1 or True, assuming the original array has a fill value of 0 or False respectively.
If densification is needed, it must be explicit. In other words, you must call COO.todense on the COO object. If both operands are COO, both must be densified.
Operations with NumPy arrays¶
In certain situations, operations with NumPy arrays are also supported. For example, the following will work if x is COO and y is a NumPy array:
x * y
The following conditions must be met when performing element-wise operations with NumPy arrays:
- The operation must produce a consistent fill value. In other words, the resulting array must also be sparse.
- Operating on the NumPy arrays must not increase the size when broadcasting the arrays.
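The first condition can be checked by looking at what the operation does to the fill value. A numpy-only sketch of the idea (illustrative, not the library's actual check):

```python
import numpy as np

# With a fill value of 0, multiplication by any dense value keeps the
# implicit zeros at zero, so x * y can stay sparse...
y = np.arange(1.0, 4.0)
mult_stays_sparse = bool(np.all(0.0 * y == 0.0))

# ...while addition would move every implicit zero to a different,
# position-dependent value, so x + y cannot stay sparse.
add_stays_sparse = bool(np.all(0.0 + y == 0.0))
```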
Operations with scipy.sparse.spmatrix¶
Certain operations with scipy.sparse.spmatrix are also supported. For example, the following are all allowed if y is a scipy.sparse.spmatrix:
x + y
x - y
x * y
x > y
x < y
In general, operating on a scipy.sparse.spmatrix is the same as operating on a COO array, as long as it is to the right of the operator.
Note
Results are not guaranteed if x is a scipy.sparse.spmatrix. For this reason, we recommend that all Scipy sparse matrices be explicitly converted to COO before any operations.
Broadcasting¶
All binary operators support broadcasting. This means that (under certain conditions) you can perform binary operations on arrays with unequal shapes: namely, when one shape is missing a dimension, or when a dimension is 1. For example, performing a binary operation on two COO arrays with shapes (4,) and (5, 1) yields an object of shape (5, 4). The same happens with arrays of shape (1, 4) and (5, 1). However, (4, 1) and (5, 1) will raise a ValueError.
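These are the ordinary NumPy broadcasting rules, which you can check with dense arrays:

```python
import numpy as np

# Shapes (4,) and (5, 1) broadcast to (5, 4).
shape_a = (np.zeros((4,)) + np.zeros((5, 1))).shape

# Shapes (1, 4) and (5, 1) also broadcast to (5, 4).
shape_b = (np.zeros((1, 4)) + np.zeros((5, 1))).shape

# Shapes (4, 1) and (5, 1) are incompatible and raise a ValueError.
try:
    np.zeros((4, 1)) + np.zeros((5, 1))
    raised = False
except ValueError:
    raised = True
```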
Element-wise Operations¶
COO arrays support a variety of element-wise operations. As with operators, operations that map zero to a nonzero value change the fill value rather than raising an error. To illustrate, the following are all possible, and will produce another COO array:
np.abs(x)
np.sin(x)
np.sqrt(x)
np.conj(x)
np.expm1(x)
np.log1p(x)
np.exp(x)
np.cos(x)
np.log(x)
As above, in the last three cases, an array with a nonzero fill value will be produced.
Note that you can apply any unary or binary numpy.ufunc to COO arrays, numpy.ndarray objects, and scalars, and it will work so long as the result is not dense. When applying a ufunc to numpy.ndarray objects, we check that operating on the array with zero would always produce a zero.
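That zero-preservation check can be sketched in a couple of lines. This is an illustration of the rule (the helper name is invented), not the library's actual implementation:

```python
import numpy as np

def preserves_zero(func):
    """A ufunc is safe for fill-value-0 sparse arrays if func(0) == 0."""
    return bool(func(0.0) == 0.0)

safe = preserves_zero(np.sin)    # sin(0) == 0, so the result stays sparse
unsafe = preserves_zero(np.exp)  # exp(0) == 1, so the fill value would change
```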
Reductions¶
COO objects support a number of reductions. However, not all important reductions are currently implemented (help welcome!). All of the following currently work:
x.sum(axis=1)
np.max(x)
np.min(x, axis=(0, 2))
x.prod()
Note
If you are performing multiple reductions along the same axes, it may be beneficial to call COO.enable_caching.
COO.reduce¶
This method can take an arbitrary numpy.ufunc and perform a reduction using that method. For example, the following will perform a sum:
x.reduce(np.add, axis=1)
Note
This library currently performs reductions by grouping together all coordinates along the supplied axes and reducing those. Then, if the number of elements in a group is deficient, it reduces an extra time with zero. As a result, if a reduction can change by adding multiple zeros to it, this method won't be accurate. However, it works in most cases.
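The grouping strategy this note describes can be sketched with plain numpy (an illustration with invented coordinates, not the library's code):

```python
import numpy as np

# A 2-d array of shape (2, 3) with three stored values.
coords = np.array([[0, 0, 1],
                   [1, 2, 0]])
data = np.array([5.0, 7.0, 2.0])

# Sum over axis 1: group the stored values by the remaining (row) coordinate.
sums = np.zeros(2)
np.add.at(sums, coords[0], data)
# Row 0 holds 5 + 7 (plus one implicit zero); row 1 holds 2 (plus two zeros).
# For np.add the implicit zeros do not change the result, which is why the
# extra "reduce with zero" step is harmless here.
```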
Indexing¶
COO arrays can be indexed just like regular numpy.ndarray objects. They support integer, slice and boolean indexing. However, currently, numpy advanced indexing is not properly supported. This means that all of the following work like in Numpy, except that they will produce COO arrays rather than numpy.ndarray objects, and will produce scalars where expected. Assume that z.shape is (5, 6, 7):
z[0]
z[1, 3]
z[1, 4, 3]
z[:3, :2, 3]
z[::-1, 1, 3]
z[-1]
All of the following will raise an IndexError, like in Numpy 1.13 and later:
z[6]
z[3, 6]
z[1, 4, 8]
z[-6]
Advanced Indexing¶
Advanced indexing (indexing arrays with other arrays) is supported, but only for indexing with a single array. Indexing a single array with multiple arrays is not supported at this time. As above, if z.shape is (5, 6, 7), all of the following will work like NumPy:
z[[0, 1, 2]]
z[1, [3]]
z[1, 4, [3, 6]]
z[:3, :2, [1, 5]]
Other Operations¶
COO arrays support a number of other common operations. Among them are dot, tensordot, concatenate and stack, transpose and reshape.
You can view the full list on the API reference page.
Note
Some operations require zero fill values (such as nonzero), and others (such as concatenate) require that all inputs have consistent fill values. For details, check the API reference.
API¶
Classes

| Class | Description |
|---|---|
| COO(coords[, data, shape, has_duplicates, …]) | A sparse multidimensional array. |
| DOK(shape[, data, dtype, fill_value]) | A class for building sparse multidimensional arrays. |
| SparseArray(shape[, fill_value]) | An abstract base class for all the sparse array classes. |

Functions

| Function | Description |
|---|---|
| as_coo(x[, shape, fill_value]) | Converts any given format to COO. |
| concatenate(arrays[, axis]) | Concatenate the input arrays along the given dimension. |
| dot(a, b) | Perform the equivalent of numpy.dot on two arrays. |
| elemwise(func, *args, **kwargs) | Apply a function to any number of arguments. |
| load_npz(filename) | Load a sparse matrix in numpy's .npz format from disk. |
| nanmax(x[, axis, keepdims, dtype, out]) | Maximize along the given axes, skipping NaN values. |
| nanmin(x[, axis, keepdims, dtype, out]) | Minimize along the given axes, skipping NaN values. |
| nanprod(x[, axis, keepdims, dtype, out]) | Performs a product operation along the given axes, skipping NaN values. |
| nanreduce(x, method[, identity, axis, keepdims]) | Performs a NaN-skipping reduction on this array. |
| nansum(x[, axis, keepdims, dtype, out]) | Performs a NaN-skipping sum operation along the given axes. |
| random(shape[, density, random_state, …]) | Generate a random sparse multidimensional array. |
| roll(a, shift[, axis]) | Shifts elements of an array along a specified axis. |
| save_npz(filename, matrix[, compressed]) | Save a sparse matrix to disk in numpy's .npz format. |
| stack(arrays[, axis]) | Stack the input arrays along the given dimension. |
| tensordot(a, b[, axes]) | Perform the equivalent of numpy.tensordot. |
| tril(x[, k]) | Returns an array with all elements above the k-th diagonal set to zero. |
| triu(x[, k]) | Returns an array with all elements below the k-th diagonal set to zero. |
| where(condition[, x, y]) | Select values from either x or y depending on condition. |
Contributing¶
General Guidelines¶
sparse is a community-driven project. You can find our repository on GitHub. Feel free to open issues for new features or bugs, or open a pull request to fix a bug or add a new feature.
If you haven’t contributed to open-source before, we recommend you read this excellent guide by GitHub on how to contribute to open source. The guide is long, so you can gloss over things you’re familiar with.
If you’re not already familiar with it, we follow the fork and pull model on GitHub.
Filing Issues¶
If you find a bug or would like a new feature, you might want to consider filing a new issue on GitHub. Before you open a new issue, please make sure of the following:
- This should go without saying, but make sure what you are requesting is within the scope of this project.
- The bug/feature is still present/missing on the master branch on GitHub.
- A similar issue or pull request isn't already open. If one already is, it's better to contribute to the discussion there.
Contributing Code¶
This project has a number of requirements for all code contributed.
- We use flake8 to automatically lint the code and maintain code style.
- We use Numpy-style docstrings.
- It's ideal if user-facing API changes or new features have documentation added.
- 100% code coverage is recommended for all new code in any submitted PR. Doctests count toward coverage.
- Performance optimizations should have benchmarks added in benchmarks.
Setting up Your Development Environment¶
The following bash script is all you need to set up your development environment, after forking and cloning the repository:
pip install -e .[all]
Running/Adding Unit Tests¶
It is best if all new functionality and/or bug fixes have unit tests added with each use-case.
Since we support both Python 2.7 and Python 3.5 and newer, it is recommended to test with at least these two versions before committing your code or opening a pull request. We use pytest as our unit testing framework, with the pytest-cov extension to check code coverage and pytest-flake8 to check code style. You don't need to configure these extensions yourself. Once you've configured your environment, you can just cd to the root of your repository and run
py.test
This automatically checks code style and functionality, and prints code coverage, though it does not fail on low coverage.
Unit tests are automatically run on Travis CI for pull requests.
Coverage¶
The py.test script automatically reports coverage, both on the terminal (with missing line numbers) and in annotated HTML form in htmlcov/index.html.
Coverage is automatically checked on CodeCov for pull requests.
Adding/Building the Documentation¶
If a feature is stable and relatively finalized, it is time to add it to the documentation. If you are adding any private/public functions, it is best to add docstrings, to aid in reviewing code and also for the API reference.
We use Numpy style docstrings and Sphinx to document this library. Sphinx, in turn, uses reStructuredText as its markup language for adding code.
We use the Sphinx Autosummary extension to generate API references. In particular, you may want to look at the docs/generated directory to see how these files look and where to add new functions, classes or modules.
For example, if you add a new function to the sparse.COO class, you would open up docs/generated/sparse.COO.rst and add in the name of the function where appropriate.
To build the documentation, you can cd into the docs directory and run
sphinx-build -W -b html . _build/html
After this, you can find an HTML version of the documentation in docs/_build/html/index.html.
Documentation for pull requests is automatically built on CircleCI and can be found in the build artifacts.
Adding and Running Benchmarks¶
We use Airspeed Velocity to run benchmarks. We have it set up to use conda, but you can edit the configuration locally if you so wish.
Changelog¶
0.4.1 2018-09-12
- [Feature] #117: (via #118) Reductions now support a negative axis.
- [Feature] #127: Improve element-wise performance.
- [Feature] #128: Improve indexing performance.
- [Feature] #80: (via #146) Allow faux in-place operations.
- [Feature] #145: (via #148) Support COO.nonzero and np.argwhere.
- [Feature] #153: (via #154) Add support for saving and loading COO files from disk.
- [Feature] #159: Numba code now releases the GIL. This leads to better multi-threaded performance in Dask.
- [Feature] #160: Added a sparse.roll function.
- [Feature] #165: The fill-value can now be something other than zero or False.
- [Feature] #172: Indexing for COO now accepts a single one-dimensional array index.
- [Feature] #175: Added COO.any and COO.all methods.
- [Feature] #179: (via #180) Allow specifying a fill-value when converting from NumPy arrays.
- [Feature] #124: (via #182) Allow mixed ndarray-COO operations if the result is sparse.
- [Support] #141: COO is now always canonical.
0.3.0 2018-02-22
0.2.0 2018-01-25
- [Feature] #35: Support broadcasting in element-wise operations.
- [Feature] #38: Add support for bitwise binary operations like & and |.
- [Feature] #37: Add support for Ellipsis (...) and None when indexing.
- [Feature] #40: Add support for triu and tril.
- [Feature] #46: Support more operators and remove all special cases.
- [Feature] #49: NumPy universal functions and reductions now work on COO arrays.
- [Feature] #55: COO(COO) now copies the original object.
- [Feature] #41: Add random function for generating random sparse arrays.
- [Feature] #57: Extend indexing support.
- [Feature] #67: scalar op COO now works for all operators.
- [Feature] #68: len(COO) now works.
- [Feature] #69: Support .size and .density.
- [Feature] #85: Add DOK type.
- [Feature] #87: Support faster np.array(COO).
- [Bug] #47: (via #48) Fix nnz for scalars.
- [Bug] #32: (via #51) Fix concatenate and stack for large arrays.
- [Bug] #61: Validate axes for .transpose().
- [Bug] #82: (via #83) Fix sum for large arrays.
- [Support] #70: Minimum required SciPy version is now 0.19.
- [Support] #43: Documentation added for the package.