Internals

Warning

The internals listed here are not considered stable API. They may change without notice.

UnifiedSparseGrids.ChebyshevLegendrePlan — Type

Plan for converting Chebyshev (T) coefficients ↔ Legendre (P) coefficients.

The plan acts on coefficient vectors of length n corresponding to degrees 0:(n-1).

  • lmul!(P, x) applies Chebyshev → Legendre.
  • ldiv!(P, x) applies Legendre → Chebyshev.
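As an illustration of the T ↔ P change of basis (not the package's plan type), the dense conversion matrix can be built by collocation; the names `chebvander`, `legvander`, and `cheb_to_leg` below are hypothetical helpers, assuming the standard normalizations T_k(cos θ) = cos(kθ) and the Legendre three-term recurrence.

```julia
using LinearAlgebra

# Chebyshev Vandermonde: column k+1 holds T_k evaluated at the points x.
chebvander(x, n) = [cos(k * acos(xi)) for xi in x, k in 0:n-1]

# Legendre Vandermonde via the recurrence k*P_k = (2k-1)*x*P_{k-1} - (k-1)*P_{k-2}.
function legvander(x, n)
    V = ones(length(x), n)
    n > 1 && (V[:, 2] .= x)
    for k in 2:n-1
        @. V[:, k+1] = ((2k - 1) * x * V[:, k] - (k - 1) * V[:, k-1]) / k
    end
    V
end

# Dense change-of-basis matrix M with c_leg = M * c_cheb, obtained by
# collocation on n distinct points (interpolation is exact for degree < n).
function cheb_to_leg(n)
    x = [cos(pi * (j + 0.5) / n) for j in 0:n-1]  # distinct Chebyshev points
    legvander(x, n) \ chebvander(x, n)
end
```

For example, T_2 = (4/3)P_2 − (1/3)P_0, so applying `cheb_to_leg(4)` to the coefficient vector of T_2 recovers exactly those Legendre coefficients.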
source
UnifiedSparseGrids.ChebyshevPlan — Type

Read-only Chebyshev–Gauss–Lobatto nodal ↔ modal transform plan of size n.

Storage contract:

  • input nodal values are in the nodal storage order (typically LevelOrder()),
  • output modal coefficients are in NaturalOrder().

lmul! applies nodal → modal; ldiv! applies modal → nodal.
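The two directions can be sketched with a dense Vandermonde pair on the Chebyshev–Gauss–Lobatto nodes (a real plan would use a fast transform; the helper names below are illustrative, and n ≥ 2 is assumed):

```julia
# V[j+1, k+1] = T_k(x_j) at the CGL nodes x_j = cos(pi*j/(n-1)).
# Since T_k(cos θ) = cos(k θ), no polynomial evaluation is needed.
cheb_vandermonde(n) = [cospi(k * j / (n - 1)) for j in 0:n-1, k in 0:n-1]

nodal_to_modal(f) = cheb_vandermonde(length(f)) \ f   # the lmul! direction
modal_to_nodal(c) = cheb_vandermonde(length(c)) * c   # the ldiv! direction
```

Sampling T_2(x) = 2x² − 1 at five CGL nodes and applying `nodal_to_modal` recovers the single coefficient in degree 2.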

source
UnifiedSparseGrids.EndpointMask — Type
struct EndpointMask{M} <: AbstractEndpointMode

Two-bit endpoint mask used by nested node families. Bit 0 keeps the right endpoint and bit 1 keeps the left endpoint at level 0.
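A mask of this shape can be decoded with plain bit operations (the helper names below are illustrative, not the package's API):

```julia
# Hypothetical decoding of a two-bit endpoint mask M:
# bit 0 -> keep the right endpoint, bit 1 -> keep the left endpoint.
keeps_right(M::Integer) = (M >> 0) & 1 == 1
keeps_left(M::Integer)  = (M >> 1) & 1 == 1
n_endpoints_of(M::Integer) = count_ones(M & 0b11)
```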

source
UnifiedSparseGrids.FourierPlan — Type

Read-only Fourier nodal ↔ modal transform plan of size n.

Storage contract:

  • input nodal values are in the nodal storage order (typically LevelOrder()),
  • output modal coefficients are in NaturalOrder().

lmul! applies nodal → modal; ldiv! applies modal → nodal.
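The nodal → modal direction on n equispaced nodes can be sketched as a naive DFT (illustrative only; a real plan would use an FFT, and the function name is hypothetical):

```julia
# Naive O(n^2) DFT: c_k = (1/n) * sum_j f_j * exp(-2*pi*im*k*j/n).
function fourier_nodal_to_modal(f)
    n = length(f)
    [sum(f[j+1] * cispi(-2k * j / n) for j in 0:n-1) / n for k in 0:n-1]
end
```

Sampling e^{2πix} at four equispaced nodes yields a single unit coefficient in mode 1.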

source
UnifiedSparseGrids.LineEvalOp — Type
struct LineEvalOp{T,MatT<:AbstractMatrix{T}} <: AbstractLineOp

Rectangular evaluation matrix with zero padding back to the fiber length.

source
Base.iszero — Method
Base.iszero(op::AbstractLineOp)

Return true when op is algebraically zero.

source
Base.length — Method

Length of the sparse grid coefficient vector (independent of layout).

source
LinearAlgebra.mul! — Method

Apply a tensor-product operator between two possibly different sparse grids.

mul!(y, gridY, op, x, gridX; plan=nothing)

This computes

y := R_Y * A * R_X^T * x,

where A is a tensor-product operator applied on the covering grid gridW stored in plan (default: the larger of gridX and gridY).

op must be a square AbstractTensorOp compatible with gridW.
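The identity y := R_Y * A * R_X^T * x can be checked with dense stand-ins; every matrix below is illustrative (`R_X` and `R_Y` are arbitrary selection-style matrices, not the package's actual restriction maps):

```julia
using LinearAlgebra

# x lives on gridX (length 3), y on gridY (length 2),
# and the covering grid gridW has length 4.
R_X = [1.0 0 0 0; 0 1 0 0; 0 0 1 0]   # 3×4: gridW -> gridX restriction
R_Y = [1.0 0 0 0; 0 0 1 0]            # 2×4: gridW -> gridY restriction
A   = Diagonal([1.0, 2.0, 3.0, 4.0])  # square operator on the covering grid
x   = [1.0, 2.0, 3.0]

# R_X' scatters x into gridW, A acts there, R_Y restricts down to gridY.
y = R_Y * (A * (R_X' * x))
```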

source
UnifiedSparseGrids._apply_lastdim_cycled! — Method

Apply a line operator along the last storage dimension and cycle the layout.

This is the core primitive of the unidirectional sparse grid operator infrastructure.

op is a 1D line operator (or a composite of them) applied to each last-dimension fiber.

Output is written in the cycled recursive layout (last storage dim moved to the front), and dest.perm is set to cycle_last_to_front(src.perm).
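The permutation bookkeeping can be sketched as follows, assuming `cycle_last_to_front` simply rotates the dimension permutation (a one-line stand-in for the package's helper):

```julia
# Moving the last storage dimension to the front cycles the permutation:
# applying this d times (d = length(perm)) returns to the identity order.
cycle_last_to_front(perm) = vcat(perm[end], perm[1:end-1])
```

So after d sweeps over a d-dimensional grid, the data is back in its original layout.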

source
UnifiedSparseGrids._apply_tensor_sweep! — Method
_apply_tensor_sweep!(u, grid, op, plan, nworkers)

Apply one TensorOp sweep to u using cyclic rotations and last-dimension kernels.

nworkers is an internal worker budget used to suppress nested threading when a larger outer parallel region already exists.

source
UnifiedSparseGrids._apply_updown_term! — Method
_apply_updown_term!(ybuf, xbuf, perm0, grid, op, plan, omit_dim, term_masks, pool)

Apply the UpDown split terms with a dynamic term queue and a striped reduction of worker-local accumulators.

source
UnifiedSparseGrids._build_subspace_blocks — Method

Build the subspace blocks of a sparse grid.

Returns (subspaces, offsets, extents, total_length, caps) where:

  • subspaces[b] is the refinement vector r of block b (one block per increment tensor block).
  • offsets[b] is the 1-based start index of that block in the concatenated layout.
  • extents[b] is the per-dimension Δ-size of the block (product equals block length).
  • total_length equals the sum of all block lengths.
  • caps is the per-dimension refinement-cap vector used to enumerate subspaces.

This is intentionally minimal so it can be reused by both SubspaceLayoutIterator and SubspaceBlockIterator without computing lookup tables.

source
UnifiedSparseGrids._cyclic_rotate_by! — Method
_cyclic_rotate_by!(dest, src, grid, k, plan, nworkers)

Rotate src forward by k cyclic orientations and write the result into dest.

For k > 1, the rotation is evaluated as repeated identity last-dimension sweeps so that it reuses the same queueing and scratch model as ordinary tensor application.

source
UnifiedSparseGrids._cyclic_rotate_to! — Method
_cyclic_rotate_to!(dest, src, grid, perm_src, perm_dst, plan, nworkers)

Rotate src from perm_src to perm_dst and write the result into dest. Both permutations must be cyclic orientations of the identity storage order.

source
UnifiedSparseGrids._piecewise_poly_zeros — Method

Select additional zero locations (beyond the two support endpoints) for the piecewise polynomial basis.

This is a simple, deterministic stencil: we add zeros at odd offsets ±3, ±5, … times h_ℓ around the center, skipping candidates outside [0,1], and using one-sided candidates near the boundary.
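A minimal sketch of this stencil, assuming the simplest reading of the rule (the function name is hypothetical, and the package's exact one-sided boundary handling may differ):

```julia
# Collect nzeros extra zeros at center ± (3, 5, ...)*h, skipping candidates
# outside [0, 1]; near a boundary only the in-range side contributes.
function extra_zeros(center, h, nzeros)
    zs = Float64[]
    k = 3
    while length(zs) < nzeros && k * h <= 2   # stop once both sides must leave [0, 1]
        for s in (-1, 1)
            z = center + s * k * h
            0.0 <= z <= 1.0 && length(zs) < nzeros && push!(zs, z)
        end
        k += 2
    end
    sort(zs)
end
```

For an interior center both sides contribute symmetrically; for a center near 0 the −3h, −5h candidates fall outside [0, 1] and only the right-hand offsets are kept.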

source
UnifiedSparseGrids._subsequence_index_map — Method

Internal: build an index map from it_small into it_big.

it_small must generate an ordered subsequence of the coordinates produced by it_big. The matching coordinates in it_big are given by

target = coord_map(coords_small).

The returned map satisfies coords_big[map[i]] == coord_map(coords_small[i]).
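A generic two-pointer sketch of such a map over plain vectors (the function name is illustrative; the real method works on coordinate iterators):

```julia
# For each small coordinate, scan forward through coords_big until
# coord_map(coords_small[i]) is found; indices are strictly increasing.
# Assumes the mapped small coordinates appear, in order, in coords_big
# (as the docstring contract requires).
function subsequence_index_map(coords_small, coords_big; coord_map = identity)
    idx = Vector{Int}(undef, length(coords_small))
    j = 1
    for (i, c) in enumerate(coords_small)
        target = coord_map(c)
        while coords_big[j] != target
            j += 1
        end
        idx[i] = j
        j += 1
    end
    idx
end
```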

source
UnifiedSparseGrids.apply_scatter! — Method
apply_scatter!(destdata, rowptr, op, inp, outbuf, work, axis, r, plan)

Apply one line operator and scatter the result directly into the cycled destination layout.

source
UnifiedSparseGrids.apply_scatter_add! — Method
apply_scatter_add!(destdata, rowptr, op, inp, outbuf, work, axis, r, plan)

Apply one line operator and add the scattered result into the cycled destination layout.

source
UnifiedSparseGrids.n_endpoints — Method
n_endpoints(::Type{<:EndpointMask})
n_endpoints(::EndpointMask)
n_endpoints(axis::AbstractUnivariateNodes)

Return the number of endpoints included at level 0.

source