vibespatial

Submodules

Attributes

Exceptions

VibeTraceWarning

Warning category for vibespatial execution-trace diagnostics.

StrictNativeFallbackError

Raised in strict native mode when execution would fall back to a non-native path.

StrictNativeMaterializationError

Raised in strict native mode when a disallowed materialization would occur.

Classes

GeoDataFrame

A GeoDataFrame object is a pandas.DataFrame that has one or more columns containing geometry.

GeoSeries

A Series object designed to store shapely geometry objects.

RectClipBenchmark

RectClipResult

Result of a rectangle clip operation.

GPURepairResult

Result of GPU make_valid repair.

MakeValidBenchmark

MakeValidPlan

MakeValidPrimitive

Enum where members are also (and must be) strings

MakeValidResult

MakeValidStage

BufferKernelResult

Result of a buffer kernel invocation.

OffsetCurveKernelResult

StrokeBenchmark

StrokeKernelPlan

StrokeKernelStage

StrokeOperation

Enum where members are also (and must be) strings

StrokePrimitive

Enum where members are also (and must be) strings

BufferKind

Enum where members are also (and must be) strings

BufferSpec

GeometryBufferSchema

GeometryFamily

Enum where members are also (and must be) strings

BufferSharingMode

Enum where members are also (and must be) strings

DiagnosticEvent

DiagnosticKind

Enum where members are also (and must be) strings

FamilyGeometryBuffer

GeoArrowBufferView

MixedGeoArrowView

OwnedGeometryArray

Columnar geometry storage with optional device-resident metadata.
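The "columnar geometry storage" idea behind OwnedGeometryArray can be pictured independently of the library: one flat coordinate buffer shared by all geometries plus an offsets array. The layout and helper below are illustrative, not the actual internals:

```python
from array import array

# Columnar layout: one flat (x, y) coordinate buffer shared by every
# geometry, plus an offsets array marking vertex-range boundaries.
coords = array("d", [0, 0, 1, 1, 2, 0,   # linestring 0: 3 vertices
                     5, 5, 6, 6])        # linestring 1: 2 vertices
offsets = array("q", [0, 3, 5])          # geometry i spans offsets[i]:offsets[i+1]

def vertices(i):
    """Return the (x, y) vertex list of geometry i from the shared buffer."""
    lo, hi = offsets[i] * 2, offsets[i + 1] * 2
    return [(coords[j], coords[j + 1]) for j in range(lo, hi, 2)]

print(vertices(0))  # [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
print(vertices(1))  # [(5.0, 5.0), (6.0, 6.0)]
```

This flat layout is what makes the array cheap to hand to a device kernel wholesale, rather than one geometry object at a time.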

GeoArrowBridgeBenchmark

GeoArrowCodecPlan

GeoParquetChunkPlan

GeoParquetEngineBenchmark

GeoParquetEnginePlan

GeoParquetScanPlan

NativeGeometryBenchmark

WKBBridgeBenchmark

WKBBridgePlan

ShapefileIngestBenchmark

ShapefileIngestPlan

ShapefileOwnedBatch

VectorFilePlan

GeoJSONIngestBenchmark

GeoJSONIngestPlan

GeoJSONOwnedBatch

GeoParquetMetadataSummary

GeoParquetPlannerBenchmark

GeoParquetPruneResult

IOFormat

Enum where members are also (and must be) strings

IOOperation

Enum where members are also (and must be) strings

IOPathKind

Enum where members are also (and must be) strings

IOPlan

IOSupportEntry

BinaryPredicateResult

NullBehavior

Enum where members are also (and must be) strings

ExecutionMode

Enum where members are also (and must be) strings

RuntimeSelection

AdaptivePlan

DeviceSnapshot

MonitoringBackend

Enum where members are also (and must be) strings

MonitoringSample

WorkloadProfile

CrossoverPolicy

Per-kernel crossover thresholds for AUTO dispatch.

DispatchDecision

Enum where members are also (and must be) strings

DispatchEvent

ExecutionTraceContext

FallbackEvent

FusionPlan

FusionStage

IntermediateDisposition

Enum where members are also (and must be) strings

PipelineStep

StepKind

Enum where members are also (and must be) strings

MaterializationBoundary

Enum where members are also (and must be) strings

MaterializationEvent

GeometryPresence

Enum where members are also (and must be) strings

GeometrySemantics

CompensationMode

Enum where members are also (and must be) strings

CoordinateStats

DevicePrecisionProfile

KernelClass

Enum where members are also (and must be) strings

PrecisionMode

Enum where members are also (and must be) strings

PrecisionPlan

RefinementMode

Enum where members are also (and must be) strings

Residency

Enum where members are also (and must be) strings

ResidencyPlan

TransferTrigger

Enum where members are also (and must be) strings

PredicateFallback

Enum where members are also (and must be) strings

RobustnessGuarantee

Enum where members are also (and must be) strings

RobustnessPlan

TopologyPolicy

Enum where members are also (and must be) strings

BoundsPairBenchmark

CandidatePairs

MBR candidate pair result with optional device-resident arrays.

FlatSpatialIndex

SegmentCandidatePairs

Segment candidate pairs with lazy device-to-host materialization.
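The "lazy device-to-host materialization" mentioned here can be pictured as a cached host view that defers the expensive transfer until first access. A minimal sketch; the class and callback names are hypothetical, not the library's API:

```python
class LazyHostView:
    """Sketch of lazy device-to-host materialization: the host copy is
    produced only on first access, then cached."""

    def __init__(self, device_buffer, copy_to_host):
        self._device = device_buffer
        self._copy = copy_to_host   # expensive transfer, deferred
        self._host = None

    @property
    def host(self):
        if self._host is None:      # first access triggers the transfer
            self._host = self._copy(self._device)
        return self._host

transfers = []
view = LazyHostView([1, 2, 3], lambda buf: (transfers.append(1), list(buf))[1])
assert not transfers             # nothing copied yet
assert view.host == [1, 2, 3]    # first access materializes
assert len(transfers) == 1       # later accesses reuse the cached copy
```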

SegmentFilterBenchmark

SegmentMBRTable

Segment MBR table with optional device-resident arrays.

SegmentIntersectionBenchmark

SegmentIntersectionCandidates

SegmentIntersectionKind

Enum where members are also (and must be) ints

SegmentIntersectionResult

Segment intersection results with lazy host materialization.

SegmentLocalEventSummary

Per-row exact local-event summary derived from segment intersections.

SegmentTable

Functions

list_layers(→ pandas.DataFrame)

List layers available in a file.

points_from_xy(→ GeometryArray)

Generate GeometryArray of shapely Point geometries from x, y(, z) coordinates.

read_feather(path[, columns, to_pandas_kwargs])

Load a Feather object from the file path, returning a GeoDataFrame.

read_file(filename[, bbox, mask, columns, rows, ...])

Read a spatial file into a GeoDataFrame.

read_parquet(path, *[, columns, storage_options, ...])

Read a GeoParquet file into a GeoDataFrame.

sjoin_nearest(→ vibespatial.api.GeoDataFrame)

Spatial join of two GeoDataFrames based on the distance between their geometries.

benchmark_clip_by_rect(→ RectClipBenchmark)

clip_by_rect_owned(→ RectClipResult)

evaluate_geopandas_clip_by_rect(...)

gpu_repair_invalid_polygons(→ GPURepairResult | None)

GPU-resident batch repair of invalid polygon geometries (Phase 16).

benchmark_make_valid(values, *[, method, ...])

evaluate_geopandas_make_valid(→ MakeValidResult)

Run make_valid and return the full MakeValidResult.

fusion_plan_for_make_valid(*[, method, keep_collapsed])

make_valid_owned(→ MakeValidResult)

Validate and repair geometries using compact-invalid-row pattern (ADR-0019).
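The compact-invalid-row pattern cited here (ADR-0019) generally means gathering only the invalid rows into a dense batch, repairing that batch once, and scattering results back. A stdlib-only sketch with the repair step stubbed out; all names are illustrative:

```python
def make_valid_compact(geoms, is_valid, repair):
    """Repair only invalid rows: gather them into a compact batch,
    run the (expensive) repair once, then scatter results back."""
    invalid_idx = [i for i, ok in enumerate(is_valid) if not ok]
    repaired = repair([geoms[i] for i in invalid_idx])  # dense batch
    out = list(geoms)
    for slot, i in enumerate(invalid_idx):
        out[i] = repaired[slot]
    return out

geoms = ["ok-a", "bowtie", "ok-b", "self-touch"]
valid = [True, False, True, False]
fixed = make_valid_compact(geoms, valid, lambda batch: [g + "-fixed" for g in batch])
print(fixed)  # ['ok-a', 'bowtie-fixed', 'ok-b', 'self-touch-fixed']
```

The payoff is that the repair kernel only ever sees the (usually small) invalid subset, while valid rows pass through untouched.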

plan_make_valid_pipeline(→ MakeValidPlan)

benchmark_offset_curve(→ StrokeBenchmark)

benchmark_point_buffer(→ StrokeBenchmark)

evaluate_geopandas_buffer(values, distance, *, ...[, ...])

evaluate_geopandas_offset_curve(values, distance, *, ...)

fusion_plan_for_stroke(operation)

offset_curve_owned(→ OffsetCurveKernelResult)

plan_stroke_kernel(→ StrokeKernelPlan)

point_buffer_owned(→ BufferKernelResult)

get_geometry_buffer_schema(→ GeometryBufferSchema)

from_geoarrow(→ OwnedGeometryArray)

from_shapely_geometries(→ OwnedGeometryArray)

from_wkb(→ OwnedGeometryArray)

benchmark_geoarrow_bridge(→ list[GeoArrowBridgeBenchmark])

benchmark_geoparquet_scan_engine(...)

benchmark_native_geometry_codec(...)

benchmark_wkb_bridge(→ list[WKBBridgeBenchmark])

decode_owned_geoarrow(...)

decode_wkb_owned(...)

encode_owned_geoarrow(...)

encode_owned_geoarrow_array(array, *[, field_name, ...])

encode_wkb_owned(→ list[bytes | str | None])

geodataframe_from_arrow(table, *[, geometry, ...])

geodataframe_to_arrow(df, *[, index, ...])

geoseries_from_arrow(arr, **kwargs)

geoseries_from_owned(array, *[, name, crs, ...])

geoseries_to_arrow(series, *[, geometry_encoding, ...])

has_pyarrow_support(→ bool)

has_pylibcudf_support(→ bool)

plan_geoarrow_codec(→ GeoArrowCodecPlan)

plan_geoparquet_engine(→ GeoParquetEnginePlan)

plan_geoparquet_scan(→ GeoParquetScanPlan)

plan_wkb_bridge(→ WKBBridgePlan)

plan_wkb_partition(→ WKBPartitionPlan)

read_geoparquet(path, *[, columns, storage_options, ...])

Read a GeoParquet file into a GeoDataFrame.

read_geoparquet_native(...)

Read a GeoParquet file into the shared native tabular result boundary.

read_geoparquet_owned(...)

write_geoparquet(→ None)

benchmark_shapefile_ingest(...)

plan_shapefile_ingest(→ ShapefileIngestPlan)

plan_vector_file_io(→ VectorFilePlan)

read_geojson_native(source, *[, prefer, objective, ...])

read_shapefile_native(source, *[, bbox, columns, ...])

read_shapefile_owned(→ ShapefileOwnedBatch)

read_vector_file(filename[, bbox, mask, columns, ...])

Read a spatial file into a GeoDataFrame.

read_vector_file_native(filename[, bbox, mask, ...])

Read a spatial file into the shared native tabular boundary.

write_vector_file(df, filename[, driver, schema, index])

benchmark_geojson_ingest(→ list[GeoJSONIngestBenchmark])

plan_geojson_ingest(→ GeoJSONIngestPlan)

read_geojson_owned(→ GeoJSONOwnedBatch)

benchmark_geoparquet_planner(...)

build_geoparquet_metadata_summary(...)

select_row_groups(→ GeoParquetPruneResult)

plan_io_support(→ IOPlan)

compute_geometry_bounds(geometry_array, *[, ...])

compute_morton_keys(geometry_array, *[, ...])

compute_offset_spans(...)

compute_total_bounds(→ tuple[float, float, float, float])

benchmark_binary_predicate(→ dict[str, int])

evaluate_binary_predicate(→ BinaryPredicateResult)

evaluate_geopandas_binary_predicate(→ numpy.ndarray | None)

supports_binary_predicate(→ bool)

get_requested_mode(→ ExecutionMode)

Return the session-wide requested execution mode.

has_gpu_runtime(→ bool)

select_runtime(→ RuntimeSelection)

set_execution_mode(→ None)

Override the session execution mode. Pass None to clear.

capture_device_snapshot(→ DeviceSnapshot)

get_cached_snapshot(→ DeviceSnapshot)

Return a session-scoped DeviceSnapshot, creating it on first call.

invalidate_snapshot_cache(→ None)

Clear the cached snapshot so the next call to get_cached_snapshot() re-probes.

plan_adaptive_execution(→ AdaptivePlan)

plan_dispatch_selection(..., mixed_geometry, ...)

Plan dispatch while preserving compatibility with RuntimeSelection-style access.

plan_kernel_dispatch(..., mixed_geometry, ...)

Plan kernel dispatch with a cached device snapshot.

default_crossover_policy(→ CrossoverPolicy)

select_dispatch_for_rows(→ DispatchDecision)

Select CPU or GPU execution based on row count and crossover policy.
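This pairs a row count with CrossoverPolicy's per-kernel thresholds: below some kernel-specific row count, GPU launch and transfer overhead tends to dominate, so the CPU path wins. A hedged sketch of that decision with made-up threshold values:

```python
from dataclasses import dataclass

@dataclass
class CrossoverPolicy:
    # Hypothetical per-kernel thresholds: below these row counts the GPU
    # launch/transfer overhead tends to dominate, so CPU wins.
    min_gpu_rows: dict

def select_dispatch_for_rows(kernel, n_rows, policy, gpu_available=True):
    """Pick 'cpu' or 'gpu' from row count and the kernel's crossover threshold."""
    if not gpu_available:
        return "cpu"
    threshold = policy.min_gpu_rows.get(kernel, 100_000)
    return "gpu" if n_rows >= threshold else "cpu"

policy = CrossoverPolicy(min_gpu_rows={"buffer": 50_000, "predicate": 10_000})
print(select_dispatch_for_rows("buffer", 1_000, policy))      # cpu
print(select_dispatch_for_rows("predicate", 20_000, policy))  # gpu
```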

clear_dispatch_events(→ None)

get_dispatch_events(→ list[DispatchEvent])

record_dispatch_event(→ DispatchEvent)

execution_trace(pipeline)

get_active_trace(→ ExecutionTraceContext | None)

clear_fallback_events(→ None)

get_fallback_events(→ list[FallbackEvent])

record_fallback_event(→ FallbackEvent)

strict_native_mode_enabled(→ bool)

plan_fusion(→ FusionPlan)

clear_materialization_events(→ None)

get_materialization_events(→ list[MaterializationEvent])

record_materialization_event(→ MaterializationEvent)

classify_geometry(→ GeometrySemantics)

is_null_like(→ bool)

measurement_result_for_geometry(→ float | tuple[float, ...])

predicate_result_for_pair(→ bool | None)

unary_result_for_missing_input(→ None)

normalize_precision_mode(→ PrecisionMode)

select_precision_plan(→ PrecisionPlan)

select_residency_plan(→ ResidencyPlan)

select_robustness_plan(→ RobustnessPlan)

benchmark_bounds_pairs(→ BoundsPairBenchmark)

benchmark_segment_filter(→ SegmentFilterBenchmark)

build_flat_spatial_index(→ FlatSpatialIndex)

extract_segment_mbrs(→ SegmentMBRTable)

Extract per-segment MBRs from all line/polygon geometries.

generate_bounds_pairs(→ CandidatePairs)

generate_segment_mbr_pairs(→ SegmentCandidatePairs)

Generate candidate segment pairs by MBR overlap filtering.
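Together, extract_segment_mbrs() and generate_segment_mbr_pairs() form a classic filter step: compute per-segment bounding rectangles, then keep only pairs whose rectangles overlap, leaving exact intersection tests for the survivors. A brute-force sketch of the idea (the real filter presumably batches this on device rather than looping in Python):

```python
def segment_mbrs(line):
    """Per-segment minimum bounding rectangles of a polyline."""
    return [(min(x0, x1), min(y0, y1), max(x0, x1), max(y0, y1))
            for (x0, y0), (x1, y1) in zip(line, line[1:])]

def overlapping_pairs(mbrs_a, mbrs_b):
    """Candidate segment pairs whose MBRs overlap. Brute force O(n*m);
    a production filter would use a spatial index or sorted sweep."""
    hits = []
    for i, (ax0, ay0, ax1, ay1) in enumerate(mbrs_a):
        for j, (bx0, by0, bx1, by1) in enumerate(mbrs_b):
            if ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1:
                hits.append((i, j))
    return hits

a = segment_mbrs([(0, 0), (2, 2), (4, 0)])   # two segments
b = segment_mbrs([(1, 3), (1, 1)])           # one vertical segment
print(overlapping_pairs(a, b))  # [(0, 0)]
```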

benchmark_segment_intersections(...)

classify_segment_intersections(→ SegmentIntersectionResult)

Classify all segment-segment intersections between two geometry arrays.

extract_segments(→ SegmentTable)

Extract segments from geometry array on CPU (legacy path).

generate_segment_candidates(...)

summarize_exact_local_events(→ SegmentLocalEventSummary)

Summarize per-row exact local-event counts for overlay-style workloads.

Package Contents

class vibespatial.GeoDataFrame(data=None, *args, geometry: Any | None = None, crs: Any | None = None, **kwargs)

A GeoDataFrame object is a pandas.DataFrame that has one or more columns containing geometry.

In addition to the standard DataFrame constructor arguments, GeoDataFrame also accepts the following keyword arguments:

Parameters

crsvalue (optional)

Coordinate Reference System of the geometry objects. Can be anything accepted by pyproj.CRS.from_user_input(), such as an authority string (e.g. “EPSG:4326”) or a WKT string.

geometrystr or array-like (optional)

Value to use as the active geometry column. If str, treated as column name to use. If array-like, it will be added as new column named ‘geometry’ on the GeoDataFrame and set as the active geometry column.

Note that if geometry is a (Geo)Series with a name, the name will not be used; a column named “geometry” will still be added. To preserve the name, use rename_geometry() to update the geometry column name afterwards.

Examples

Constructing GeoDataFrame from a dictionary.

>>> from shapely.geometry import Point
>>> d = {'col1': ['name1', 'name2'], 'geometry': [Point(1, 2), Point(2, 1)]}
>>> gdf = geopandas.GeoDataFrame(d, crs="EPSG:4326")
>>> gdf
    col1     geometry
0  name1  POINT (1 2)
1  name2  POINT (2 1)

Notice that the inferred dtype of ‘geometry’ columns is geometry.

>>> gdf.dtypes
col1             str
geometry    geometry
dtype: object

Constructing GeoDataFrame from a pandas DataFrame with a column of WKT geometries:

>>> import pandas as pd
>>> d = {'col1': ['name1', 'name2'], 'wkt': ['POINT (1 2)', 'POINT (2 1)']}
>>> df = pd.DataFrame(d)
>>> gs = geopandas.GeoSeries.from_wkt(df['wkt'])
>>> gdf = geopandas.GeoDataFrame(df, geometry=gs, crs="EPSG:4326")
>>> gdf
    col1          wkt     geometry
0  name1  POINT (1 2)  POINT (1 2)
1  name2  POINT (2 1)  POINT (2 1)

See Also

GeoSeries : Series object designed to store shapely geometry objects

geometry
set_geometry(col, drop: bool | None = ..., inplace: Literal[True] = ..., crs: Any | None = ...) None
set_geometry(col, drop: bool | None = ..., inplace: Literal[False] = ..., crs: Any | None = ...) GeoDataFrame

Set the GeoDataFrame geometry using either an existing column or the specified input. By default yields a new object.

The original geometry column is replaced with the input.

Parameters

colcolumn label or array-like

An existing column name or values to set as the new geometry column. If values (array-like or (Geo)Series) are passed and they are a named Series, the new geometry column takes that name; otherwise the existing geometry column is replaced. If there is no existing geometry column, the new geometry column will use the default name “geometry”.

dropboolean, default False

When specifying a named Series or an existing column name for col, controls if the previous geometry column should be dropped from the result. The default of False keeps both the old and new geometry column.

Deprecated since version 1.0.0.

inplaceboolean, default False

Modify the GeoDataFrame in place (do not create a new object)

crspyproj.CRS, optional

Coordinate system to use. The value can be anything accepted by pyproj.CRS.from_user_input(), such as an authority string (e.g. “EPSG:4326”) or a WKT string. If passed, overrides both DataFrame and col’s crs. Otherwise, tries to get crs from passed col values or DataFrame.

Examples

>>> from shapely.geometry import Point
>>> d = {'col1': ['name1', 'name2'], 'geometry': [Point(1, 2), Point(2, 1)]}
>>> gdf = geopandas.GeoDataFrame(d, crs="EPSG:4326")
>>> gdf
    col1     geometry
0  name1  POINT (1 2)
1  name2  POINT (2 1)

Passing an array:

>>> df1 = gdf.set_geometry([Point(0,0), Point(1,1)])
>>> df1
    col1     geometry
0  name1  POINT (0 0)
1  name2  POINT (1 1)

Using existing column:

>>> gdf["buffered"] = gdf.buffer(2)
>>> df2 = gdf.set_geometry("buffered")
>>> df2.geometry
0    POLYGON ((3 2, 2.99037 1.80397, 2.96157 1.6098...
1    POLYGON ((4 1, 3.99037 0.80397, 3.96157 0.6098...
Name: buffered, dtype: geometry

Returns

GeoDataFrame

See Also

GeoDataFrame.rename_geometry : rename an active geometry column

rename_geometry(col: str, inplace: Literal[True] = ...) None
rename_geometry(col: str, inplace: Literal[False] = ...) GeoDataFrame

Rename the GeoDataFrame geometry column to the specified name.

By default yields a new object.

Only the label of the active geometry column changes; its data is unchanged.

Parameters

colstr

New geometry column label

inplaceboolean, default False

Modify the GeoDataFrame in place (do not create a new object)

Examples

>>> from shapely.geometry import Point
>>> d = {'col1': ['name1', 'name2'], 'geometry': [Point(1, 2), Point(2, 1)]}
>>> df = geopandas.GeoDataFrame(d, crs="EPSG:4326")
>>> df1 = df.rename_geometry('geom1')
>>> df1.geometry.name
'geom1'
>>> df.rename_geometry('geom1', inplace=True)
>>> df.geometry.name
'geom1'

See Also

GeoDataFrame.set_geometry : set the active geometry

property active_geometry_name: Any

Return the name of the active geometry column.

Returns a name if a GeoDataFrame has an active geometry column set, otherwise returns None. The return type is usually a string, but may be an integer, tuple or other hashable, depending on the contents of the dataframe columns.

You can also access the active geometry column using the .geometry property. You can set a GeoSeries to be an active geometry using the set_geometry() method.

Returns

str or other index label supported by pandas

name of an active geometry column or None

See Also

GeoDataFrame.set_geometry : set the active geometry

property crs: pyproj.CRS

The Coordinate Reference System (CRS) represented as a pyproj.CRS object.

Returns

pyproj.CRS | None

CRS assigned to an active geometry column

Examples

>>> gdf.crs
<Geographic 2D CRS: EPSG:4326>
Name: WGS 84
Axis Info [ellipsoidal]:
- Lat[north]: Geodetic latitude (degree)
- Lon[east]: Geodetic longitude (degree)
Area of Use:
- name: World
- bounds: (-180.0, -90.0, 180.0, 90.0)
Datum: World Geodetic System 1984
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich

See Also

GeoDataFrame.set_crs : assign CRS

GeoDataFrame.to_crs : re-project to another CRS

property gpu_spatial_index

GPU-resident Hilbert R-tree spatial index, or None if not built.

Built automatically when read_file(..., build_index=True) is used. Can also be built manually via vibespatial.io.gpu_parse.build_spatial_index().

Returns

GpuSpatialIndex or None

The packed Hilbert R-tree spatial index attached to this GeoDataFrame, or None if no index has been built.

classmethod from_dict(data: dict, geometry=None, crs: Any | None = None, **kwargs) GeoDataFrame

Construct a GeoDataFrame from a dict of array-likes or dicts, extending DataFrame.from_dict with geometry and crs handling.

Parameters

datadict

Of the form {field : array-like} or {field : dict}.

geometrystr or array (optional)

If str, column to use as geometry. If array, will be set as ‘geometry’ column on GeoDataFrame.

crsstr or dict (optional)

Coordinate reference system to set on the resulting frame.

kwargskey-word arguments

These arguments are passed to DataFrame.from_dict

Returns

GeoDataFrame

classmethod from_file(filename: os.PathLike | IO, **kwargs) GeoDataFrame

Alternate constructor to create a GeoDataFrame from a file.

It is recommended to use geopandas.read_file() instead.

Can load a GeoDataFrame from a file in any format recognized by pyogrio. See http://pyogrio.readthedocs.io/ for details.

Parameters

filenamestr

File path or file handle to read from. Depending on which kwargs are included, the content of filename may vary. See pyogrio.read_dataframe() for usage details.

kwargskey-word arguments

These arguments are passed to pyogrio.read_dataframe(), and can be used to access multi-layer data, data stored within archives (zip files), etc.

Examples

>>> import geodatasets
>>> path = geodatasets.get_path('nybb')
>>> gdf = geopandas.GeoDataFrame.from_file(path)
>>> gdf
   BoroCode       BoroName     Shape_Leng    Shape_Area                                           geometry
0         5  Staten Island  330470.010332  1.623820e+09  MULTIPOLYGON (((970217.022 145643.332, 970227....
1         4         Queens  896344.047763  3.045213e+09  MULTIPOLYGON (((1029606.077 156073.814, 102957...
2         3       Brooklyn  741080.523166  1.937479e+09  MULTIPOLYGON (((1021176.479 151374.797, 102100...
3         1      Manhattan  359299.096471  6.364715e+08  MULTIPOLYGON (((981219.056 188655.316, 980940....
4         2          Bronx  464392.991824  1.186925e+09  MULTIPOLYGON (((1012821.806 229228.265, 101278...

The recommended method of reading files is geopandas.read_file():

>>> gdf = geopandas.read_file(path)

See Also

read_file : read file to GeoDataFrame

GeoDataFrame.to_file : write GeoDataFrame to file

classmethod from_features(features, crs: Any | None = None, columns: collections.abc.Iterable[str] | None = None) GeoDataFrame

Alternate constructor to create GeoDataFrame from an iterable of features or a feature collection.

Parameters

features
  • Iterable of features, where each element must be a feature dictionary or implement the __geo_interface__.

  • Feature collection, where the ‘features’ key contains an iterable of features.

  • Object holding a feature collection that implements the __geo_interface__.

crsstr or dict (optional)

Coordinate reference system to set on the resulting frame.

columnslist of column names, optional

Optionally specify the column names to include in the output frame. This does not overwrite the property names of the input, but can ensure a consistent output format.

Returns

GeoDataFrame

Notes

For more information about the __geo_interface__, see https://gist.github.com/sgillies/2217756

Examples

>>> feature_coll = {
...     "type": "FeatureCollection",
...     "features": [
...         {
...             "id": "0",
...             "type": "Feature",
...             "properties": {"col1": "name1"},
...             "geometry": {"type": "Point", "coordinates": (1.0, 2.0)},
...             "bbox": (1.0, 2.0, 1.0, 2.0),
...         },
...         {
...             "id": "1",
...             "type": "Feature",
...             "properties": {"col1": "name2"},
...             "geometry": {"type": "Point", "coordinates": (2.0, 1.0)},
...             "bbox": (2.0, 1.0, 2.0, 1.0),
...         },
...     ],
...     "bbox": (1.0, 1.0, 2.0, 2.0),
... }
>>> df = geopandas.GeoDataFrame.from_features(feature_coll)
>>> df
      geometry   col1
0  POINT (1 2)  name1
1  POINT (2 1)  name2

classmethod from_postgis(sql: str | sqlalchemy.text, con, geom_col: str = 'geom', crs: Any | None = None, index_col: str | list[str] | None = None, coerce_float: bool = True, parse_dates: list | dict | None = None, params: list | tuple | dict | None = None, chunksize: int | None = None) GeoDataFrame

Alternate constructor to create a GeoDataFrame from a sql query containing a geometry column in WKB representation.

Parameters

sqlstring

consqlalchemy.engine.Connection or sqlalchemy.engine.Engine

geom_colstring, default ‘geom’

column name to convert to shapely geometries

crsoptional

Coordinate reference system to use for the returned GeoDataFrame

index_colstring or list of strings, optional, default: None

Column(s) to set as index(MultiIndex)

coerce_floatboolean, default True

Attempt to convert values of non-string, non-numeric objects (like decimal.Decimal) to floating point, useful for SQL result sets

parse_dateslist or dict, default None
  • List of column names to parse as dates.

  • Dict of {column_name: format string} where format string is strftime compatible in case of parsing string times, or is one of (D, s, ns, ms, us) in case of parsing integer timestamps.

  • Dict of {column_name: arg dict}, where the arg dict corresponds to the keyword arguments of pandas.to_datetime(). Especially useful with databases without native Datetime support, such as SQLite.

paramslist, tuple or dict, optional, default None

List of parameters to pass to execute method.

chunksizeint, default None

If specified, return an iterator where chunksize is the number of rows to include in each chunk.

Examples

PostGIS

>>> from sqlalchemy import create_engine
>>> db_connection_url = "postgresql://myusername:mypassword@myhost:5432/mydb"
>>> con = create_engine(db_connection_url)
>>> sql = "SELECT geom, highway FROM roads"
>>> df = geopandas.GeoDataFrame.from_postgis(sql, con)

SpatiaLite

>>> sql = "SELECT ST_Binary(geom) AS geom, highway FROM roads"
>>> df = geopandas.GeoDataFrame.from_postgis(sql, con)

The recommended method of reading from PostGIS is geopandas.read_postgis():

>>> df = geopandas.read_postgis(sql, con)

See Also

geopandas.read_postgis : read PostGIS database to GeoDataFrame

classmethod from_arrow(table, geometry: str | None = None, to_pandas_kwargs: dict | None = None)

Construct a GeoDataFrame from an Arrow table object based on GeoArrow extension types.

See https://geoarrow.org/ for details on the GeoArrow specification.

This function accepts any tabular Arrow object implementing the Arrow PyCapsule Protocol (i.e. having an __arrow_c_array__ or __arrow_c_stream__ method).

Added in version 1.0.

Parameters

tablepyarrow.Table or Arrow-compatible table

Any tabular object implementing the Arrow PyCapsule Protocol (i.e. has an __arrow_c_array__ or __arrow_c_stream__ method). This table should have at least one column with a geoarrow geometry type.

geometrystr, default None

The name of the geometry column to set as the active geometry column. If None, the first geometry column found will be used.

to_pandas_kwargsdict, optional

Arguments passed to the pa.Table.to_pandas method for non-geometry columns. This can be used to control the behavior of the conversion of the non-geometry columns to a pandas DataFrame. For example, you can use this to control the dtype conversion of the columns. By default, the to_pandas method is called with no additional arguments.

Returns

GeoDataFrame

See Also

GeoDataFrame.to_arrow

GeoSeries.from_arrow

Examples

>>> import geoarrow.pyarrow as ga
>>> import pyarrow as pa
>>> table = pa.Table.from_arrays([
...     ga.as_geoarrow(
...     [None, "POLYGON ((0 0, 1 1, 0 1, 0 0))", "LINESTRING (0 0, -1 1, 0 -1)"]
...     ),
...     pa.array([1, 2, 3]),
...     pa.array(["a", "b", "c"]),
... ], names=["geometry", "id", "value"])
>>> gdf = geopandas.GeoDataFrame.from_arrow(table)
>>> gdf
                           geometry   id  value
0                              None    1      a
1    POLYGON ((0 0, 1 1, 0 1, 0 0))    2      b
2      LINESTRING (0 0, -1 1, 0 -1)    3      c

to_json(na: Literal['null', 'drop', 'keep'] = 'null', show_bbox: bool = False, drop_id: bool = False, to_wgs84: bool = False, **kwargs) str

Return a GeoJSON representation of the GeoDataFrame as a string.

Parameters

na{‘null’, ‘drop’, ‘keep’}, default ‘null’

Indicates how to output missing (NaN) values in the GeoDataFrame. See below.

show_bboxbool, optional, default: False

Include bbox (bounds) in the geojson

drop_idbool, default: False

If False (the default), the index of the GeoDataFrame is written as the id property of each generated GeoJSON feature. Set to True to drop it, which you may want if the index is just arbitrary row numbers.

to_wgs84: bool, optional, default: False

If True, the geometries are re-projected to WGS84 (EPSG:4326) before export, as required by the 2016 GeoJSON specification; this needs a CRS set on the active geometry column. If False (the default), the CRS is ignored and coordinates are written unchanged.

Notes

The remaining kwargs are passed to json.dumps().

Missing (NaN) values in the GeoDataFrame can be represented as follows:

  • null: output the missing entries as JSON null.

  • drop: remove the property from the feature. This applies to each feature individually so that features may have different properties.

  • keep: output the missing entries as NaN.

If the GeoDataFrame has a defined CRS, its definition will be included in the output unless it is equal to WGS84 (default GeoJSON CRS) or not possible to represent in the URN OGC format, or unless to_wgs84=True is specified.
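Independent of the library, the three na modes above amount to the following treatment of a feature's NaN-valued properties (the helper name is illustrative):

```python
import json
import math

def encode_properties(props, na):
    """Apply the 'null' / 'drop' / 'keep' policies to NaN property values."""
    is_nan = lambda v: isinstance(v, float) and math.isnan(v)
    if na == "drop":
        return {k: v for k, v in props.items() if not is_nan(v)}   # property removed
    if na == "null":
        return {k: (None if is_nan(v) else v) for k, v in props.items()}  # JSON null
    return dict(props)  # 'keep': NaN passes through to the serializer

props = {"name": "a", "score": float("nan")}
print(json.dumps(encode_properties(props, "null")))  # {"name": "a", "score": null}
print(json.dumps(encode_properties(props, "drop")))  # {"name": "a"}
```

Note how 'drop' can leave different features with different property sets, exactly as described above.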

Examples

>>> from shapely.geometry import Point
>>> d = {'col1': ['name1', 'name2'], 'geometry': [Point(1, 2), Point(2, 1)]}
>>> gdf = geopandas.GeoDataFrame(d, crs="EPSG:3857")
>>> gdf
    col1     geometry
0  name1  POINT (1 2)
1  name2  POINT (2 1)
>>> gdf.to_json()
'{"type": "FeatureCollection", "features": [{"id": "0", "type": "Feature", "properties": {"col1": "name1"}, "geometry": {"type": "Point", "coordinates": [1.0, 2.0]}}, {"id": "1", "type": "Feature", "properties": {"col1": "name2"}, "geometry": {"type": "Point", "coordinates": [2.0, 1.0]}}], "crs": {"type": "name", "properties": {"name": "urn:ogc:def:crs:EPSG::3857"}}}'

Alternatively, you can write GeoJSON to file:

>>> gdf.to_file(path, driver="GeoJSON")

See Also

GeoDataFrame.to_file : write GeoDataFrame to file

iterfeatures(na: str = 'null', show_bbox: bool = False, drop_id: bool = False) Generator[dict]

Return an iterator that yields feature dictionaries that comply with __geo_interface__.

Parameters

nastr, optional

Options are {‘null’, ‘drop’, ‘keep’}, default ‘null’. Indicates how to output missing (NaN) values in the GeoDataFrame

  • null: output the missing entries as JSON null

  • drop: remove the property from the feature. This applies to each feature individually so that features may have different properties

  • keep: output the missing entries as NaN

show_bboxbool, optional

Include bbox (bounds) in the geojson. Default False.

drop_idbool, default: False

If False (the default), the index of the GeoDataFrame is written as the id property of each generated GeoJSON feature. Set to True to drop it, which you may want if the index is just arbitrary row numbers.

Examples

>>> from shapely.geometry import Point
>>> d = {'col1': ['name1', 'name2'], 'geometry': [Point(1, 2), Point(2, 1)]}
>>> gdf = geopandas.GeoDataFrame(d, crs="EPSG:4326")
>>> gdf
    col1     geometry
0  name1  POINT (1 2)
1  name2  POINT (2 1)
>>> feature = next(gdf.iterfeatures())
>>> feature
{'id': '0', 'type': 'Feature', 'properties': {'col1': 'name1'}, 'geometry': {'type': 'Point', 'coordinates': (1.0, 2.0)}}

to_geo_dict(na: str | None = 'null', show_bbox: bool = False, drop_id: bool = False) dict

Return a python feature collection representation of the GeoDataFrame as a dictionary with a list of features based on the __geo_interface__ GeoJSON-like specification.

Parameters

nastr, optional

Options are {‘null’, ‘drop’, ‘keep’}, default ‘null’. Indicates how to output missing (NaN) values in the GeoDataFrame

  • null: output the missing entries as JSON null

  • drop: remove the property from the feature. This applies to each feature individually so that features may have different properties

  • keep: output the missing entries as NaN

show_bboxbool, optional

Include bbox (bounds) in the geojson. Default False.

drop_idbool, default: False

If False (the default), the index of the GeoDataFrame is written as the id property of each feature in the generated dictionary. Set to True to drop it, which you may want if the index is just arbitrary row numbers.

Examples

>>> from shapely.geometry import Point
>>> d = {'col1': ['name1', 'name2'], 'geometry': [Point(1, 2), Point(2, 1)]}
>>> gdf = geopandas.GeoDataFrame(d)
>>> gdf
    col1     geometry
0  name1  POINT (1 2)
1  name2  POINT (2 1)
>>> gdf.to_geo_dict()
{'type': 'FeatureCollection', 'features': [{'id': '0', 'type': 'Feature', 'properties': {'col1': 'name1'}, 'geometry': {'type': 'Point', 'coordinates': (1.0, 2.0)}}, {'id': '1', 'type': 'Feature', 'properties': {'col1': 'name2'}, 'geometry': {'type': 'Point', 'coordinates': (2.0, 1.0)}}]}

See Also

GeoDataFrame.to_json : return a GeoDataFrame as a GeoJSON string

to_wkb(hex: bool = False, **kwargs) pandas.DataFrame

Encode all geometry columns in the GeoDataFrame to WKB.

Parameters

hexbool

If true, export the WKB as a hexadecimal string. The default is to return a binary bytes object.

kwargs

Additional keyword args will be passed to shapely.to_wkb().

Returns

DataFrame

geometry columns are encoded to WKB
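For reference, the WKB layout for a single 2D point is small enough to write by hand: a byte-order flag, a uint32 geometry type, and two doubles. This illustrative helper (not the library's encoder) reproduces the hex bytes shown in the to_arrow() example for POINT (1 2):

```python
import struct

def point_to_wkb(x, y, hex=False):
    """Encode a 2D point as little-endian WKB:
    1 byte-order flag (1 = little-endian) + uint32 geometry type (1 = Point)
    + two float64 coordinates, 21 bytes total."""
    wkb = struct.pack("<BIdd", 1, 1, x, y)
    return wkb.hex().upper() if hex else wkb

print(point_to_wkb(1, 2, hex=True))
# 0101000000000000000000F03F0000000000000040
```

Non-point geometries add vertex counts and nested rings on top of the same header, which is why the method delegates to shapely.to_wkb() rather than hand-rolling the format.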

to_wkt(**kwargs) pandas.DataFrame

Encode all geometry columns in the GeoDataFrame to WKT.

Parameters

kwargs

Keyword args will be passed to shapely.to_wkt().

Returns

DataFrame

geometry columns are encoded to WKT

to_arrow(*, index: bool | None = None, geometry_encoding: vibespatial.api.io.arrow.PARQUET_GEOMETRY_ENCODINGS = 'WKB', interleaved: bool = True, include_z: bool | None = None)

Encode a GeoDataFrame to GeoArrow format.

See https://geoarrow.org/ for details on the GeoArrow specification.

This function returns a generic Arrow data object implementing the Arrow PyCapsule Protocol (i.e. having an __arrow_c_stream__ method). This object can then be consumed by your Arrow implementation of choice that supports this protocol.

Added in version 1.0.

Parameters

indexbool, default None

If True, always include the dataframe’s index(es) as columns in the output. If False, the index(es) will not be written. If None, the index(es) will be included as columns in the output, except a RangeIndex which is stored as metadata only.

geometry_encoding : {‘WKB’, ‘geoarrow’}, default ‘WKB’

The GeoArrow encoding to use for the data conversion.

interleaved : bool, default True

Only relevant for ‘geoarrow’ encoding. If True, the geometries’ coordinates are interleaved in a single fixed size list array. If False, the coordinates are stored as separate arrays in a struct type.

include_z : bool, default None

Only relevant for ‘geoarrow’ encoding (for WKB, the dimensionality of the individual geometries is preserved). If False, return 2D geometries. If True, include the third dimension in the output (if a geometry has no third dimension, the z-coordinates will be NaN). By default, will infer the dimensionality from the input geometries. Note that this inference can be unreliable with empty geometries (for a guaranteed result, it is recommended to specify the keyword).

Returns

ArrowTable

A generic Arrow table object with geometry columns encoded to GeoArrow.

Examples

>>> from shapely.geometry import Point
>>> data = {'col1': ['name1', 'name2'], 'geometry': [Point(1, 2), Point(2, 1)]}
>>> gdf = geopandas.GeoDataFrame(data)
>>> gdf
    col1     geometry
0  name1  POINT (1 2)
1  name2  POINT (2 1)
>>> arrow_table = gdf.to_arrow()
>>> arrow_table
<geopandas.io._geoarrow.ArrowTable object at ...>

The returned data object needs to be consumed by a library implementing the Arrow PyCapsule Protocol. For example, wrapping the data as a pyarrow.Table (requires pyarrow >= 14.0):

>>> import pyarrow as pa
>>> table = pa.table(arrow_table)
>>> table
pyarrow.Table
col1: large_string
geometry: extension<geoarrow.wkb<WkbType>>
----
col1: [["name1","name2"]]
geometry: [[0101000000000000000000F03F0000000000000040,01010000000000000000000040000000000000F03F]]
to_parquet(path: os.PathLike | IO, index: bool | None = None, compression: str | None = 'snappy', geometry_encoding: vibespatial.api.io.arrow.PARQUET_GEOMETRY_ENCODINGS = 'WKB', write_covering_bbox: bool = False, schema_version: vibespatial.api.io.arrow.SUPPORTED_VERSIONS_LITERAL | None = None, **kwargs) → None

Write a GeoDataFrame to the Parquet format.

By default, all geometry columns present are serialized to WKB format in the file.

Requires ‘pyarrow’.

Added in version 0.8.

Parameters

path : str, path object

index : bool, default None

If True, always include the dataframe’s index(es) as columns in the file output. If False, the index(es) will not be written to the file. If None, the index(es) will be included as columns in the file output except RangeIndex which is stored as metadata only.

compression : {‘snappy’, ‘gzip’, ‘brotli’, ‘lz4’, ‘zstd’, None}, default ‘snappy’

Name of the compression to use. Use None for no compression.

geometry_encoding : {‘WKB’, ‘geoarrow’}, default ‘WKB’

The encoding to use for the geometry columns. Defaults to “WKB” for maximum interoperability. Specify “geoarrow” to use one of the native GeoArrow-based single-geometry type encodings. Note: the “geoarrow” option is part of the newer GeoParquet 1.1 specification, should be considered as experimental, and may not be supported by all readers.

write_covering_bbox : bool, default False

Writes the bounding box column for each row entry with column name ‘bbox’. Writing a bbox column can be computationally expensive, but allows you to specify a bbox in read_parquet() for filtered reading. Note: this bbox column is part of the newer GeoParquet 1.1 specification and should be considered as experimental. While writing the column is backwards compatible, using it for filtering may not be supported by all readers.

schema_version : {‘0.1.0’, ‘0.4.0’, ‘1.0.0’, ‘1.1.0’, None}

GeoParquet specification version; if not provided, will default to latest supported stable version (1.0.0).

kwargs

Additional keyword arguments passed to pyarrow.parquet.write_table().

Examples

>>> gdf.to_parquet('data.parquet')

See Also

GeoDataFrame.to_feather : write GeoDataFrame to feather
GeoDataFrame.to_file : write GeoDataFrame to file

to_feather(path: os.PathLike, index: bool | None = None, compression: str | None = None, schema_version: vibespatial.api.io.arrow.SUPPORTED_VERSIONS_LITERAL | None = None, **kwargs)

Write a GeoDataFrame to the Feather format.

Any geometry columns present are serialized to WKB format in the file.

Requires ‘pyarrow’ >= 0.17.

Added in version 0.8.

Parameters

path : str, path object

index : bool, default None

If True, always include the dataframe’s index(es) as columns in the file output. If False, the index(es) will not be written to the file. If None, the index(es) will be included as columns in the file output except RangeIndex which is stored as metadata only.

compression : {‘zstd’, ‘lz4’, ‘uncompressed’}, optional

Name of the compression to use. Use "uncompressed" for no compression. By default uses LZ4 if available, otherwise uncompressed.

schema_version : {‘0.1.0’, ‘0.4.0’, ‘1.0.0’, ‘1.1.0’, None}

GeoParquet specification version; if not provided, will default to latest supported stable version (1.0.0).

kwargs

Additional keyword arguments passed to pyarrow.feather.write_feather().

Examples

>>> gdf.to_feather('data.feather')

See Also

GeoDataFrame.to_parquet : write GeoDataFrame to parquet
GeoDataFrame.to_file : write GeoDataFrame to file

to_file(filename: os.PathLike | IO, driver: str | None = None, schema: dict | None = None, index: bool | None = None, **kwargs)

Write the GeoDataFrame to a file.

By default, an ESRI shapefile is written, but any OGR data source supported by Pyogrio or Fiona can be written. A dictionary of supported OGR providers is available via:

>>> import pyogrio
>>> pyogrio.list_drivers()

Parameters

filename : string

File path or file handle to write to. The path may specify a GDAL VSI scheme.

driver : string, default None

The OGR format driver used to write the vector file. If not specified, it attempts to infer it from the file extension. If no extension is specified, it saves ESRI Shapefile to a folder.

schema : dict, default None

If specified, the schema dictionary is passed to Fiona to better control how the file is written. If None, GeoPandas will determine the schema based on each column’s dtype. Not supported for the “pyogrio” engine.

index : bool, default None

If True, write index into one or more columns (for MultiIndex). Default None writes the index into one or more columns only if the index is named, is a MultiIndex, or has a non-integer data type. If False, no index is written.

Added in version 0.7: Previously the index was not written.

mode : string, default ‘w’

The write mode, ‘w’ to overwrite the existing file and ‘a’ to append. Not all drivers support appending. The drivers that support appending are listed in fiona.supported_drivers or https://github.com/Toblerity/Fiona/blob/master/fiona/drvsupport.py

crs : pyproj.CRS, default None

If specified, the CRS is passed to Fiona to better control how the file is written. If None, GeoPandas will determine the crs based on crs df attribute. The value can be anything accepted by pyproj.CRS.from_user_input(), such as an authority string (eg “EPSG:4326”) or a WKT string. The keyword is not supported for the “pyogrio” engine.

engine : str, “pyogrio” or “fiona”

The underlying library that is used to write the file. Currently, the supported options are “pyogrio” and “fiona”. Defaults to “pyogrio” if installed, otherwise tries “fiona”.

metadata : dict[str, str], default None

Optional metadata to be stored in the file. Keys and values must be strings. Supported only for “GPKG” driver.

**kwargs :

Keyword args to be passed to the engine, and can be used to write to multi-layer data, store data within archives (zip files), etc. In case of the “pyogrio” engine, the keyword arguments are passed to pyogrio.write_dataframe. In case of the “fiona” engine, the keyword arguments are passed to fiona.open(). For more information on possible keywords, type: import pyogrio; help(pyogrio.write_dataframe).

Notes

The format drivers will attempt to detect the encoding of your data, but may fail. In this case, the proper encoding can be specified explicitly by using the encoding keyword parameter, e.g. encoding='utf-8'.

See Also

GeoSeries.to_file
GeoDataFrame.to_postgis : write GeoDataFrame to PostGIS database
GeoDataFrame.to_parquet : write GeoDataFrame to parquet
GeoDataFrame.to_feather : write GeoDataFrame to feather

Examples

>>> gdf.to_file('dataframe.shp')
>>> gdf.to_file('dataframe.gpkg', driver='GPKG', layer='name')
>>> gdf.to_file('dataframe.geojson', driver='GeoJSON')

With selected drivers you can also append to a file with mode=”a”:

>>> gdf.to_file('dataframe.shp', mode="a")

Using the engine-specific keyword arguments it is possible to e.g. create a spatialite file with a custom layer name:

>>> gdf.to_file(
...     'dataframe.sqlite', driver='SQLite', spatialite=True, layer='test'
... )
set_crs(crs: Any | None = ..., epsg: int | None = ..., inplace: Literal[True] = ..., allow_override: bool = ...) → None
set_crs(crs: Any | None = ..., epsg: int | None = ..., inplace: Literal[False] = ..., allow_override: bool = ...) → GeoDataFrame

Set the Coordinate Reference System (CRS) of the GeoDataFrame.

If there are multiple geometry columns within the GeoDataFrame, only the CRS of the active geometry column is set.

Pass None to remove CRS from the active geometry column.

Notes

The underlying geometries are not transformed to this CRS. To transform the geometries to a new CRS, use the to_crs method.

Parameters

crs : pyproj.CRS | None, optional

The value can be anything accepted by pyproj.CRS.from_user_input(), such as an authority string (eg “EPSG:4326”) or a WKT string.

epsg : int, optional

EPSG code specifying the projection.

inplace : bool, default False

If True, the CRS of the GeoDataFrame will be changed in place (while still returning the result) instead of making a copy of the GeoDataFrame.

allow_override : bool, default False

If the GeoDataFrame already has a CRS, allow replacing the existing CRS, even when the two are not equal.

Examples

>>> from shapely.geometry import Point
>>> d = {'col1': ['name1', 'name2'], 'geometry': [Point(1, 2), Point(2, 1)]}
>>> gdf = geopandas.GeoDataFrame(d)
>>> gdf
    col1     geometry
0  name1  POINT (1 2)
1  name2  POINT (2 1)

Setting CRS to a GeoDataFrame without one:

>>> gdf.crs is None
True
>>> gdf = gdf.set_crs('epsg:3857')
>>> gdf.crs
<Projected CRS: EPSG:3857>
Name: WGS 84 / Pseudo-Mercator
Axis Info [cartesian]:
- X[east]: Easting (metre)
- Y[north]: Northing (metre)
Area of Use:
- name: World - 85°S to 85°N
- bounds: (-180.0, -85.06, 180.0, 85.06)
Coordinate Operation:
- name: Popular Visualisation Pseudo-Mercator
- method: Popular Visualisation Pseudo Mercator
Datum: World Geodetic System 1984
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich

Overriding existing CRS:

>>> gdf = gdf.set_crs(4326, allow_override=True)

Without allow_override=True, set_crs raises an error if you try to override the CRS.

See Also

GeoDataFrame.to_crs : re-project to another CRS

to_crs(crs: Any | None = ..., epsg: int | None = ..., inplace: Literal[False] = ...) → GeoDataFrame
to_crs(crs: Any | None = ..., epsg: int | None = ..., inplace: Literal[True] = ...) → None

Transform geometries to a new coordinate reference system.

Transform all geometries in an active geometry column to a different coordinate reference system. The crs attribute on the current GeoSeries must be set. Either crs or epsg may be specified for output.

This method will transform all points in all objects. It has no notion of projecting entire geometries. All segments joining points are assumed to be lines in the current projection, not geodesics. Objects crossing the dateline (or other projection boundary) will have undesirable behavior.

Parameters

crs : pyproj.CRS, optional if epsg is specified

The value can be anything accepted by pyproj.CRS.from_user_input(), such as an authority string (eg “EPSG:4326”) or a WKT string.

epsg : int, optional if crs is specified

EPSG code specifying output projection.

inplace : bool, optional, default False

Whether to return a new GeoDataFrame or do the transformation in place.

Returns

GeoDataFrame

Examples

>>> from shapely.geometry import Point
>>> d = {'col1': ['name1', 'name2'], 'geometry': [Point(1, 2), Point(2, 1)]}
>>> gdf = geopandas.GeoDataFrame(d, crs=4326)
>>> gdf
    col1     geometry
0  name1  POINT (1 2)
1  name2  POINT (2 1)
>>> gdf.crs
<Geographic 2D CRS: EPSG:4326>
Name: WGS 84
Axis Info [ellipsoidal]:
- Lat[north]: Geodetic latitude (degree)
- Lon[east]: Geodetic longitude (degree)
Area of Use:
- name: World
- bounds: (-180.0, -90.0, 180.0, 90.0)
Datum: World Geodetic System 1984
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich
>>> gdf = gdf.to_crs(3857)
>>> gdf
    col1                       geometry
0  name1  POINT (111319.491 222684.209)
1  name2  POINT (222638.982 111325.143)
>>> gdf.crs
<Projected CRS: EPSG:3857>
Name: WGS 84 / Pseudo-Mercator
Axis Info [cartesian]:
- X[east]: Easting (metre)
- Y[north]: Northing (metre)
Area of Use:
- name: World - 85°S to 85°N
- bounds: (-180.0, -85.06, 180.0, 85.06)
Coordinate Operation:
- name: Popular Visualisation Pseudo-Mercator
- method: Popular Visualisation Pseudo Mercator
Datum: World Geodetic System 1984
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich
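
The coordinate change in the example can be reproduced with the spherical Pseudo-Mercator formulas x = R·λ and y = R·ln(tan(π/4 + φ/2)), where R is the WGS 84 semi-major axis. A stdlib sketch (lonlat_to_webmercator is an illustrative helper; the library itself delegates the transformation to pyproj):

```python
import math

R = 6378137.0  # WGS 84 semi-major axis, used by EPSG:3857

def lonlat_to_webmercator(lon_deg: float, lat_deg: float) -> tuple[float, float]:
    # Each vertex is projected independently, which is why segments
    # between vertices stay straight lines in the target plane.
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

x, y = lonlat_to_webmercator(1.0, 2.0)
print(f"POINT ({x:.3f} {y:.3f})")
# POINT (111319.491 222684.209)
```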

See Also

GeoDataFrame.set_crs : assign CRS without re-projection

estimate_utm_crs(datum_name: str = 'WGS 84') → pyproj.CRS

Return the estimated UTM CRS based on the bounds of the dataset.

Added in version 0.9.

Parameters

datum_name : str, optional

The name of the datum to use in the query. Default is WGS 84.

Returns

pyproj.CRS

Examples

>>> import geodatasets
>>> df = geopandas.read_file(
...     geodatasets.get_path("geoda.chicago_health")
... )
>>> df.estimate_utm_crs()
<Derived Projected CRS: EPSG:32616>
Name: WGS 84 / UTM zone 16N
Axis Info [cartesian]:
- E[east]: Easting (metre)
- N[north]: Northing (metre)
Area of Use:
- name: Between 90°W and 84°W, northern hemisphere between equator and 84°N...
- bounds: (-90.0, 0.0, -84.0, 84.0)
Coordinate Operation:
- name: UTM zone 16N
- method: Transverse Mercator
Datum: World Geodetic System 1984 ensemble
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich
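
Under the hood this queries the pyproj CRS database, but the basic zone arithmetic is simple: UTM zones are 6° of longitude wide, and WGS 84 UTM codes are 326xx (north) or 327xx (south). A stdlib sketch that ignores the Norway and Svalbard zone exceptions (utm_epsg is an illustrative helper, not the library routine):

```python
import math

def utm_epsg(lon: float, lat: float) -> int:
    # Zones 1..60 start at 180°W and advance eastward in 6° strips.
    zone = min(max(int(math.floor((lon + 180.0) / 6.0)) + 1, 1), 60)
    # EPSG 326xx = WGS 84 / UTM north, 327xx = WGS 84 / UTM south.
    return (32600 if lat >= 0 else 32700) + zone

print(utm_epsg(-87.6, 41.8))  # Chicago falls in zone 16N -> 32616
```
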
property loc

Access a group of rows and columns by label(s) or a boolean array.

.loc[] is primarily label based, but may also be used with a boolean array.

Allowed inputs are:

  • A single label, e.g. 5 or 'a' (note that 5 is interpreted as a label of the index, and never as an integer position along the index).

  • A list or array of labels, e.g. ['a', 'b', 'c'].

  • A slice object with labels, e.g. 'a':'f'.

    Warning

    Note that contrary to usual python slices, both the start and the stop are included

  • A boolean array of the same length as the axis being sliced, e.g. [True, False, True].

  • An alignable boolean Series. The index of the key will be aligned before masking.

  • An alignable Index. The Index of the returned selection will be the input.

  • A callable function with one argument (the calling Series or DataFrame) and that returns valid output for indexing (one of the above)

See more at Selection by Label.

Raises

KeyError

If any items are not found.

IndexingError

If an indexed key is passed and its index is unalignable to the frame index.

See Also

DataFrame.at : Access a single value for a row/column label pair.
DataFrame.iloc : Access group of rows and columns by integer position(s).
DataFrame.xs : Returns a cross-section (row(s) or column(s)) from the Series/DataFrame.
Series.loc : Access group of values using labels.

Examples

Getting values

>>> df = pd.DataFrame(
...     [[1, 2], [4, 5], [7, 8]],
...     index=["cobra", "viper", "sidewinder"],
...     columns=["max_speed", "shield"],
... )
>>> df
            max_speed  shield
cobra               1       2
viper               4       5
sidewinder          7       8

Single label. Note this returns the row as a Series.

>>> df.loc["viper"]
max_speed    4
shield       5
Name: viper, dtype: int64

List of labels. Note using [[]] returns a DataFrame.

>>> df.loc[["viper", "sidewinder"]]
            max_speed  shield
viper               4       5
sidewinder          7       8

Single label for row and column

>>> df.loc["cobra", "shield"]
np.int64(2)

Slice with labels for row and single label for column. As mentioned above, note that both the start and stop of the slice are included.

>>> df.loc["cobra":"viper", "max_speed"]
cobra    1
viper    4
Name: max_speed, dtype: int64

Boolean list with the same length as the row axis

>>> df.loc[[False, False, True]]
            max_speed  shield
sidewinder          7       8

Alignable boolean Series:

>>> df.loc[
...     pd.Series([False, True, False], index=["viper", "sidewinder", "cobra"])
... ]
            max_speed  shield
sidewinder          7       8

Index (same behavior as df.reindex)

>>> df.loc[pd.Index(["cobra", "viper"], name="foo")]
       max_speed  shield
foo
cobra          1       2
viper          4       5

Conditional that returns a boolean Series

>>> df.loc[df["shield"] > 6]
            max_speed  shield
sidewinder          7       8

Conditional that returns a boolean Series with column labels specified

>>> df.loc[df["shield"] > 6, ["max_speed"]]
            max_speed
sidewinder          7

Multiple conditional using & that returns a boolean Series

>>> df.loc[(df["max_speed"] > 1) & (df["shield"] < 8)]
            max_speed  shield
viper          4       5

Multiple conditional using | that returns a boolean Series

>>> df.loc[(df["max_speed"] > 4) | (df["shield"] < 5)]
            max_speed  shield
cobra               1       2
sidewinder          7       8

Please ensure that each condition is wrapped in parentheses (). See the user guide for more details and explanations of Boolean indexing.

Note

If you find yourself using 3 or more conditionals in .loc[], consider using advanced indexing.

See below for using .loc[] on MultiIndex DataFrames.

Callable that returns a boolean Series

>>> df.loc[lambda df: df["shield"] == 8]
            max_speed  shield
sidewinder          7       8

Setting values

Set value for all items matching the list of labels

>>> df.loc[["viper", "sidewinder"], ["shield"]] = 50
>>> df
            max_speed  shield
cobra               1       2
viper               4      50
sidewinder          7      50

Set value for an entire row

>>> df.loc["cobra"] = 10
>>> df
            max_speed  shield
cobra              10      10
viper               4      50
sidewinder          7      50

Set value for an entire column

>>> df.loc[:, "max_speed"] = 30
>>> df
            max_speed  shield
cobra              30      10
viper              30      50
sidewinder         30      50

Set value for rows matching callable condition

>>> df.loc[df["shield"] > 35] = 0
>>> df
            max_speed  shield
cobra              30      10
viper               0       0
sidewinder          0       0

Add value matching location

>>> df.loc["viper", "shield"] += 5
>>> df
            max_speed  shield
cobra              30      10
viper               0       5
sidewinder          0       0

Setting using a Series or a DataFrame sets the values matching the index labels, not the index positions.

>>> shuffled_df = df.loc[["viper", "cobra", "sidewinder"]]
>>> df.loc[:] += shuffled_df
>>> df
            max_speed  shield
cobra              60      20
viper               0      10
sidewinder          0       0

Getting values on a DataFrame with an index that has integer labels

Another example using integers for the index

>>> df = pd.DataFrame(
...     [[1, 2], [4, 5], [7, 8]],
...     index=[7, 8, 9],
...     columns=["max_speed", "shield"],
... )
>>> df
   max_speed  shield
7          1       2
8          4       5
9          7       8

Slice with integer labels for rows. As mentioned above, note that both the start and stop of the slice are included.

>>> df.loc[7:9]
   max_speed  shield
7          1       2
8          4       5
9          7       8

Getting values with a MultiIndex

A number of examples using a DataFrame with a MultiIndex

>>> tuples = [
...     ("cobra", "mark i"),
...     ("cobra", "mark ii"),
...     ("sidewinder", "mark i"),
...     ("sidewinder", "mark ii"),
...     ("viper", "mark ii"),
...     ("viper", "mark iii"),
... ]
>>> index = pd.MultiIndex.from_tuples(tuples)
>>> values = [[12, 2], [0, 4], [10, 20], [1, 4], [7, 1], [16, 36]]
>>> df = pd.DataFrame(values, columns=["max_speed", "shield"], index=index)
>>> df
                     max_speed  shield
cobra      mark i           12       2
           mark ii           0       4
sidewinder mark i           10      20
           mark ii           1       4
viper      mark ii           7       1
           mark iii         16      36

Single label. Note this returns a DataFrame with a single index.

>>> df.loc["cobra"]
         max_speed  shield
mark i          12       2
mark ii          0       4

Single index tuple. Note this returns a Series.

>>> df.loc[("cobra", "mark ii")]
max_speed    0
shield       4
Name: (cobra, mark ii), dtype: int64

Single label for row and column. Similar to passing in a tuple, this returns a Series.

>>> df.loc["cobra", "mark i"]
max_speed    12
shield        2
Name: (cobra, mark i), dtype: int64

Single tuple. Note using [[]] returns a DataFrame.

>>> df.loc[[("cobra", "mark ii")]]
               max_speed  shield
cobra mark ii          0       4

Single tuple for the index with a single label for the column

>>> df.loc[("cobra", "mark i"), "shield"]
np.int64(2)

Slice from index tuple to single label

>>> df.loc[("cobra", "mark i") : "viper"]
                     max_speed  shield
cobra      mark i           12       2
           mark ii           0       4
sidewinder mark i           10      20
           mark ii           1       4
viper      mark ii           7       1
           mark iii         16      36

Slice from index tuple to index tuple

>>> df.loc[("cobra", "mark i") : ("viper", "mark ii")]
                    max_speed  shield
cobra      mark i          12       2
           mark ii          0       4
sidewinder mark i          10      20
           mark ii          1       4
viper      mark ii          7       1

Please see the user guide for more details and explanations of advanced indexing.

Assignment with Series

When assigning a Series to .loc[row_indexer, col_indexer], pandas aligns the Series by index labels, not by order or position.

Series assignment with .loc and index alignment:

>>> df = pd.DataFrame({"A": [1, 2, 3]}, index=[0, 1, 2])
>>> s = pd.Series([10, 20], index=[1, 0])  # Note reversed order
>>> df.loc[:, "B"] = s  # Aligns by index, not order
>>> df
   A     B
0  1  20.0
1  2  10.0
2  3   NaN
property iloc

Purely integer-location based indexing for selection by position.

Changed in version 3.0: Callables which return a tuple are deprecated as input.

.iloc[] is primarily integer position based (from 0 to length-1 of the axis), but may also be used with a boolean array.

Allowed inputs are:

  • An integer, e.g. 5.

  • A list or array of integers, e.g. [4, 3, 0].

  • A slice object with ints, e.g. 1:7.

  • A boolean array.

  • A callable function with one argument (the calling Series or DataFrame) and that returns valid output for indexing (one of the above). This is useful in method chains, when you don’t have a reference to the calling object, but would like to base your selection on some value.

  • A tuple of row and column indexes. The tuple elements consist of one of the above inputs, e.g. (0, 1).

.iloc will raise IndexError if a requested indexer is out-of-bounds, except slice indexers which allow out-of-bounds indexing (this conforms with python/numpy slice semantics).

See more at Selection by Position.

See Also

DataFrame.iat : Fast integer location scalar accessor.
DataFrame.loc : Purely label-location based indexer for selection by label.
Series.iloc : Purely integer-location based indexing for selection by position.

Examples

>>> mydict = [
...     {"a": 1, "b": 2, "c": 3, "d": 4},
...     {"a": 100, "b": 200, "c": 300, "d": 400},
...     {"a": 1000, "b": 2000, "c": 3000, "d": 4000},
... ]
>>> df = pd.DataFrame(mydict)
>>> df
      a     b     c     d
0     1     2     3     4
1   100   200   300   400
2  1000  2000  3000  4000

Indexing just the rows

With a scalar integer.

>>> type(df.iloc[0])
<class 'pandas.Series'>
>>> df.iloc[0]
a    1
b    2
c    3
d    4
Name: 0, dtype: int64

With a list of integers.

>>> df.iloc[[0]]
   a  b  c  d
0  1  2  3  4
>>> type(df.iloc[[0]])
<class 'pandas.DataFrame'>
>>> df.iloc[[0, 1]]
     a    b    c    d
0    1    2    3    4
1  100  200  300  400

With a slice object.

>>> df.iloc[:3]
      a     b     c     d
0     1     2     3     4
1   100   200   300   400
2  1000  2000  3000  4000

With a boolean mask the same length as the index.

>>> df.iloc[[True, False, True]]
      a     b     c     d
0     1     2     3     4
2  1000  2000  3000  4000

With a callable, useful in method chains. The x passed to the lambda is the DataFrame being sliced. This selects the rows whose index label is even.

>>> df.iloc[lambda x: x.index % 2 == 0]
      a     b     c     d
0     1     2     3     4
2  1000  2000  3000  4000

Indexing both axes

You can mix the indexer types for the index and columns. Use : to select the entire axis.

With scalar integers.

>>> df.iloc[0, 1]
np.int64(2)

With lists of integers.

>>> df.iloc[[0, 2], [1, 3]]
      b     d
0     2     4
2  2000  4000

With slice objects.

>>> df.iloc[1:3, 0:3]
      a     b     c
1   100   200   300
2  1000  2000  3000

With a boolean array whose length matches the columns.

>>> df.iloc[:, [True, False, True, False]]
      a     c
0     1     3
1   100   300
2  1000  3000

With a callable function that expects the Series or DataFrame.

>>> df.iloc[:, lambda df: [0, 2]]
      a     c
0     1     3
1   100   300
2  1000  3000
property at

Access a single value for a row/column label pair.

Similar to loc, in that both provide label-based lookups. Use at if you only need to get or set a single value in a DataFrame or Series.

Raises

KeyError

If getting a value and ‘label’ does not exist in a DataFrame or Series.

ValueError

If row/column label pair is not a tuple or if any label from the pair is not a scalar for DataFrame. If label is list-like (excluding NamedTuple) for Series.

See Also

DataFrame.at : Access a single value for a row/column pair by label.
DataFrame.iat : Access a single value for a row/column pair by integer position.
DataFrame.loc : Access a group of rows and columns by label(s).
DataFrame.iloc : Access a group of rows and columns by integer position(s).
Series.at : Access a single value by label.
Series.iat : Access a single value by integer position.
Series.loc : Access a group of rows by label(s).
Series.iloc : Access a group of rows by integer position(s).

Notes

See Fast scalar value getting and setting for more details.

Examples

>>> df = pd.DataFrame(
...     [[0, 2, 3], [0, 4, 1], [10, 20, 30]],
...     index=[4, 5, 6],
...     columns=["A", "B", "C"],
... )
>>> df
    A   B   C
4   0   2   3
5   0   4   1
6  10  20  30

Get value at specified row/column pair

>>> df.at[4, "B"]
np.int64(2)

Set value at specified row/column pair

>>> df.at[4, "B"] = 10
>>> df.at[4, "B"]
np.int64(10)

Get value within a Series

>>> df.loc[5].at["B"]
np.int64(4)
property iat

Access a single value for a row/column pair by integer position.

Similar to iloc, in that both provide integer-based lookups. Use iat if you only need to get or set a single value in a DataFrame or Series.

Raises

IndexError

When integer position is out of bounds.

See Also

DataFrame.at : Access a single value for a row/column label pair.
DataFrame.loc : Access a group of rows and columns by label(s).
DataFrame.iloc : Access a group of rows and columns by integer position(s).

Examples

>>> df = pd.DataFrame(
...     [[0, 2, 3], [0, 4, 1], [10, 20, 30]], columns=["A", "B", "C"]
... )
>>> df
    A   B   C
0   0   2   3
1   0   4   1
2  10  20  30

Get value at specified row/column pair

>>> df.iat[1, 2]
np.int64(1)

Set value at specified row/column pair

>>> df.iat[1, 2] = 10
>>> df.iat[1, 2]
np.int64(10)

Get value within a series

>>> df.loc[0].iat[1]
np.int64(2)
insert(loc: int, column, value, allow_duplicates=lib.no_default) → None

Insert column into DataFrame at specified location.

Raises a ValueError if column is already contained in the DataFrame, unless allow_duplicates is set to True.

Parameters

loc : int

Insertion index. Must satisfy 0 <= loc <= len(columns).

column : str, number, or hashable object

Label of the inserted column.

value : Scalar, Series, or array-like

Content of the inserted column.

allow_duplicates : bool, optional, default lib.no_default

Allow duplicate column labels to be created.

See Also

Index.insert : Insert new item by index.

Examples

>>> df = pd.DataFrame({"col1": [1, 2], "col2": [3, 4]})
>>> df
   col1  col2
0     1     3
1     2     4
>>> df.insert(1, "newcol", [99, 99])
>>> df
   col1  newcol  col2
0     1      99     3
1     2      99     4
>>> df.insert(0, "col1", [100, 100], allow_duplicates=True)
>>> df
   col1  col1  newcol  col2
0   100     1      99     3
1   100     2      99     4

Notice that pandas uses index alignment in case of value from type Series:

>>> df.insert(0, "col0", pd.Series([5, 6], index=[1, 2]))
>>> df
   col0  col1  col1  newcol  col2
0   NaN   100     1      99     3
1   5.0   100     2      99     4
pop(item)

Return item and drop it from DataFrame. Raise KeyError if not found.

Parameters

item : label

Label of column to be popped.

Returns

Series

Series representing the item that is dropped.

See Also

DataFrame.drop : Drop specified labels from rows or columns.
DataFrame.drop_duplicates : Return DataFrame with duplicate rows removed.

Examples

>>> df = pd.DataFrame(
...     [
...         ("falcon", "bird", 389.0),
...         ("parrot", "bird", 24.0),
...         ("lion", "mammal", 80.5),
...         ("monkey", "mammal", np.nan),
...     ],
...     columns=("name", "class", "max_speed"),
... )
>>> df
     name   class  max_speed
0  falcon    bird      389.0
1  parrot    bird       24.0
2    lion  mammal       80.5
3  monkey  mammal        NaN
>>> df.pop("class")
0      bird
1      bird
2    mammal
3    mammal
Name: class, dtype: str
>>> df
     name  max_speed
0  falcon      389.0
1  parrot       24.0
2    lion       80.5
3  monkey        NaN
rename(mapper=None, *, index=None, columns=None, axis=None, copy=lib.no_default, inplace: bool = False, level=None, errors: str = 'ignore')

Rename columns or index labels.

Function / dict values must be unique (1-to-1). Labels not contained in a dict / Series will be left as-is. Extra labels listed don’t throw an error.

See the user guide for more.

Parameters

mapper : dict-like or function

Dict-like or function transformations to apply to that axis’ values. Use either mapper and axis to specify the axis to target with mapper, or index and columns.

index : dict-like or function

Alternative to specifying axis (mapper, axis=0 is equivalent to index=mapper).

columns : dict-like or function

Alternative to specifying axis (mapper, axis=1 is equivalent to columns=mapper).

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

Axis to target with mapper. Can be either the axis name (‘index’, ‘columns’) or number (0, 1). The default is ‘index’.

copy : bool, default False

This keyword is now ignored; changing its value will have no impact on the method.

Deprecated since version 3.0.0: This keyword is ignored and will be removed in pandas 4.0. Since pandas 3.0, this method always returns a new object using a lazy copy mechanism that defers copies until necessary (Copy-on-Write). See the user guide on Copy-on-Write for more details.

inplace : bool, default False

Whether to modify the DataFrame rather than creating a new one. If True then value of copy is ignored.

level : int or level name, default None

In case of a MultiIndex, only rename labels in the specified level.

errors : {‘ignore’, ‘raise’}, default ‘ignore’

If ‘raise’, raise a KeyError when a dict-like mapper, index, or columns contains labels that are not present in the Index being transformed. If ‘ignore’, existing keys will be renamed and extra keys will be ignored.

Returns

DataFrame or None

DataFrame with the renamed axis labels or None if inplace=True.

Raises

KeyError

If any of the labels is not found in the selected axis and errors='raise'.

See Also

DataFrame.rename_axis : Set the name of the axis.

Examples

DataFrame.rename supports two calling conventions

  • (index=index_mapper, columns=columns_mapper, ...)

  • (mapper, axis={'index', 'columns'}, ...)

We highly recommend using keyword arguments to clarify your intent.

Rename columns using a mapping:

>>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
>>> df.rename(columns={"A": "a", "B": "c"})
   a  c
0  1  4
1  2  5
2  3  6

Rename index using a mapping:

>>> df.rename(index={0: "x", 1: "y", 2: "z"})
   A  B
x  1  4
y  2  5
z  3  6

Cast index labels to a different type:

>>> df.index
RangeIndex(start=0, stop=3, step=1)
>>> df.rename(index=str).index
Index(['0', '1', '2'], dtype='str')
>>> df.rename(columns={"A": "a", "B": "b", "C": "c"}, errors="raise")
Traceback (most recent call last):
KeyError: "['C'] not found in axis"

Using axis-style parameters:

>>> df.rename(str.lower, axis="columns")
   a  b
0  1  4
1  2  5
2  3  6
>>> df.rename({1: 2, 2: 4}, axis="index")
   A  B
0  1  4
2  2  5
4  3  6
drop(labels=None, *, axis=0, index=None, columns=None, level=None, inplace: bool = False, errors: str = 'raise')

Drop specified labels from rows or columns.

Remove rows or columns by specifying label names and corresponding axis, or by directly specifying index or column names. When using a multi-index, labels on different levels can be removed by specifying the level. See the user guide for more information about the now unused levels.

Parameters

labels : single label or iterable of labels

Index or column labels to drop. A tuple will be used as a single label and not treated as an iterable.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

Whether to drop labels from the index (0 or ‘index’) or columns (1 or ‘columns’).

index : single label or iterable of labels

Alternative to specifying axis (labels, axis=0 is equivalent to index=labels).

columns : single label or iterable of labels

Alternative to specifying axis (labels, axis=1 is equivalent to columns=labels).

level : int or level name, optional

For MultiIndex, level from which the labels will be removed.

inplace : bool, default False

If False, return a copy. Otherwise, do operation in place and return None.

errors : {‘ignore’, ‘raise’}, default ‘raise’

If ‘ignore’, suppress error and only existing labels are dropped.

Returns

DataFrame or None

DataFrame with the specified index or column labels removed, or None if inplace=True.

Raises

KeyError

If any of the labels is not found in the selected axis.

See Also

DataFrame.loc : Label-location based indexer for selection by label.

DataFrame.dropna : Return DataFrame with labels on given axis omitted where (all or any) data are missing.

DataFrame.drop_duplicates : Return DataFrame with duplicate rows removed, optionally only considering certain columns.

Examples

>>> df = pd.DataFrame(np.arange(12).reshape(3, 4), columns=["A", "B", "C", "D"])
>>> df
   A  B   C   D
0  0  1   2   3
1  4  5   6   7
2  8  9  10  11

Drop columns

>>> df.drop(["B", "C"], axis=1)
   A   D
0  0   3
1  4   7
2  8  11
>>> df.drop(columns=["B", "C"])
   A   D
0  0   3
1  4   7
2  8  11

Drop a row by index

>>> df.drop([0, 1])
   A  B   C   D
2  8  9  10  11

Drop columns and/or rows of MultiIndex DataFrame

>>> midx = pd.MultiIndex(
...     levels=[["llama", "cow", "falcon"], ["speed", "weight", "length"]],
...     codes=[[0, 0, 0, 1, 1, 1, 2, 2, 2], [0, 1, 2, 0, 1, 2, 0, 1, 2]],
... )
>>> df = pd.DataFrame(
...     index=midx,
...     columns=["big", "small"],
...     data=[
...         [45, 30],
...         [200, 100],
...         [1.5, 1],
...         [30, 20],
...         [250, 150],
...         [1.5, 0.8],
...         [320, 250],
...         [1, 0.8],
...         [0.3, 0.2],
...     ],
... )
>>> df
                big     small
llama   speed   45.0    30.0
        weight  200.0   100.0
        length  1.5     1.0
cow     speed   30.0    20.0
        weight  250.0   150.0
        length  1.5     0.8
falcon  speed   320.0   250.0
        weight  1.0     0.8
        length  0.3     0.2

Drop a specific index combination from the MultiIndex DataFrame, i.e., drop the combination 'falcon' and 'weight', which deletes only the corresponding row

>>> df.drop(index=("falcon", "weight"))
                big     small
llama   speed   45.0    30.0
        weight  200.0   100.0
        length  1.5     1.0
cow     speed   30.0    20.0
        weight  250.0   150.0
        length  1.5     0.8
falcon  speed   320.0   250.0
        length  0.3     0.2
>>> df.drop(index="cow", columns="small")
                big
llama   speed   45.0
        weight  200.0
        length  1.5
falcon  speed   320.0
        weight  1.0
        length  0.3
>>> df.drop(index="length", level=1)
                big     small
llama   speed   45.0    30.0
        weight  200.0   100.0
cow     speed   30.0    20.0
        weight  250.0   150.0
falcon  speed   320.0   250.0
        weight  1.0     0.8
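The errors parameter is not exercised above. As a supplementary sketch (reusing the small frame from the first drop example), errors="ignore" silently skips labels that do not exist:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(12).reshape(3, 4), columns=["A", "B", "C", "D"])

# "Z" does not exist; with the default errors="raise" this would be a KeyError.
# With errors="ignore", the existing labels are dropped and "Z" is skipped.
result = df.drop(columns=["B", "Z"], errors="ignore")
print(result.columns.tolist())  # ['A', 'C', 'D']
```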
reset_index(level=None, *, drop: bool = False, inplace: bool = False, col_level=0, col_fill='', allow_duplicates=lib.no_default, names=None)

Reset the index, or a level of it.

Reset the index of the DataFrame, and use the default one instead. If the DataFrame has a MultiIndex, this method can remove one or more levels.

Parameters

level : int, str, tuple, or list, default None

Only remove the given levels from the index. Removes all levels by default.

drop : bool, default False

Do not try to insert index into dataframe columns. This resets the index to the default integer index.

inplace : bool, default False

Whether to modify the DataFrame rather than creating a new one.

col_level : int or str, default 0

If the columns have multiple levels, determines which level the labels are inserted into. By default it is inserted into the first level.

col_fill : object, default ‘’

If the columns have multiple levels, determines how the other levels are named. If None then the index name is repeated.

allow_duplicates : bool, optional, default lib.no_default

Allow duplicate column labels to be created.

names : int, str or 1-dimensional list, default None

Using the given string, rename the DataFrame column which contains the index data. If the DataFrame has a MultiIndex, this has to be a list with length equal to the number of levels.

Returns

DataFrame or None

DataFrame with the new index or None if inplace=True.

See Also

DataFrame.set_index : Opposite of reset_index. DataFrame.reindex : Change to new indices or expand indices. DataFrame.reindex_like : Change to same indices as other DataFrame.

Examples

>>> df = pd.DataFrame(
...     [("bird", 389.0), ("bird", 24.0), ("mammal", 80.5), ("mammal", np.nan)],
...     index=["falcon", "parrot", "lion", "monkey"],
...     columns=("class", "max_speed"),
... )
>>> df
         class  max_speed
falcon    bird      389.0
parrot    bird       24.0
lion    mammal       80.5
monkey  mammal        NaN

When we reset the index, the old index is added as a column, and a new sequential index is used:

>>> df.reset_index()
    index   class  max_speed
0  falcon    bird      389.0
1  parrot    bird       24.0
2    lion  mammal       80.5
3  monkey  mammal        NaN

We can use the drop parameter to avoid the old index being added as a column:

>>> df.reset_index(drop=True)
    class  max_speed
0    bird      389.0
1    bird       24.0
2  mammal       80.5
3  mammal        NaN

You can also use reset_index with MultiIndex.

>>> index = pd.MultiIndex.from_tuples(
...     [
...         ("bird", "falcon"),
...         ("bird", "parrot"),
...         ("mammal", "lion"),
...         ("mammal", "monkey"),
...     ],
...     names=["class", "name"],
... )
>>> columns = pd.MultiIndex.from_tuples([("speed", "max"), ("species", "type")])
>>> df = pd.DataFrame(
...     [(389.0, "fly"), (24.0, "fly"), (80.5, "run"), (np.nan, "jump")],
...     index=index,
...     columns=columns,
... )
>>> df
               speed species
                 max    type
class  name
bird   falcon  389.0     fly
       parrot   24.0     fly
mammal lion     80.5     run
       monkey    NaN    jump

Using the names parameter, choose a name for the index column:

>>> df.reset_index(names=["classes", "names"])
  classes   names  speed species
                     max    type
0    bird  falcon  389.0     fly
1    bird  parrot   24.0     fly
2  mammal    lion   80.5     run
3  mammal  monkey    NaN    jump

If the index has multiple levels, we can reset a subset of them:

>>> df.reset_index(level="class")
         class  speed species
                  max    type
name
falcon    bird  389.0     fly
parrot    bird   24.0     fly
lion    mammal   80.5     run
monkey  mammal    NaN    jump

If we are not dropping the index, by default, it is placed in the top level. We can place it in another level:

>>> df.reset_index(level="class", col_level=1)
                speed species
         class    max    type
name
falcon    bird  389.0     fly
parrot    bird   24.0     fly
lion    mammal   80.5     run
monkey  mammal    NaN    jump

When the index is inserted under another level, we can specify under which one with the parameter col_fill:

>>> df.reset_index(level="class", col_level=1, col_fill="species")
              species  speed species
                class    max    type
name
falcon           bird  389.0     fly
parrot           bird   24.0     fly
lion           mammal   80.5     run
monkey         mammal    NaN    jump

If we specify a nonexistent level for col_fill, it is created:

>>> df.reset_index(level="class", col_level=1, col_fill="genus")
                genus  speed species
                class    max    type
name
falcon           bird  389.0     fly
parrot           bird   24.0     fly
lion           mammal   80.5     run
monkey         mammal    NaN    jump
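The allow_duplicates parameter has no example above. As a supplementary sketch (data invented for illustration), it matters when the index name collides with an existing column: by default the insertion is refused, while allow_duplicates=True inserts the index column anyway:

```python
import pandas as pd

# Index named "a" collides with an existing column "a".
df = pd.DataFrame({"a": [1, 2]}, index=pd.Index([3, 4], name="a"))

try:
    df.reset_index()  # default refuses to create a duplicate "a" column
except ValueError as exc:
    print(exc)

# allow_duplicates=True inserts the index anyway, yielding two "a" columns.
result = df.reset_index(allow_duplicates=True)
print(result.columns.tolist())  # ['a', 'a']
```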
set_index(keys, *, drop: bool = True, append: bool = False, inplace: bool = False, verify_integrity=lib.no_default)

Set the DataFrame index using existing columns.

Set the DataFrame index (row labels) using one or more existing columns or arrays (of the correct length). The index can replace the existing index or expand on it.

Parameters

keys : label or array-like or list of labels/arrays

This parameter can be either a single column key, a single array of the same length as the calling DataFrame, or a list containing an arbitrary combination of column keys and arrays. Here, “array” encompasses Series, Index, np.ndarray, and instances of Iterator.

drop : bool, default True

Delete columns to be used as the new index.

append : bool, default False

Whether to append columns to existing index. Setting to True will add the new columns to existing index. When set to False, the current index will be dropped from the DataFrame.

inplace : bool, default False

Whether to modify the DataFrame rather than creating a new one.

verify_integrity : bool, default False

Check the new index for duplicates. Otherwise defer the check until necessary. Setting to False will improve the performance of this method.

Deprecated since version 3.0.0.

Returns

DataFrame or None

Changed row labels or None if inplace=True.

See Also

DataFrame.reset_index : Opposite of set_index. DataFrame.reindex : Change to new indices or expand indices. DataFrame.reindex_like : Change to same indices as other DataFrame.

Examples

>>> df = pd.DataFrame(
...     {
...         "month": [1, 4, 7, 10],
...         "year": [2012, 2014, 2013, 2014],
...         "sale": [55, 40, 84, 31],
...     }
... )
>>> df
   month  year  sale
0      1  2012    55
1      4  2014    40
2      7  2013    84
3     10  2014    31

Set the index to become the ‘month’ column:

>>> df.set_index("month")
       year  sale
month
1      2012    55
4      2014    40
7      2013    84
10     2014    31

Create a MultiIndex using columns ‘year’ and ‘month’:

>>> df.set_index(["year", "month"])
            sale
year  month
2012  1     55
2014  4     40
2013  7     84
2014  10    31

Create a MultiIndex using an Index and a column:

>>> df.set_index([pd.Index([1, 2, 3, 4]), "year"])
         month  sale
   year
1  2012  1      55
2  2014  4      40
3  2013  7      84
4  2014  10     31

Create a MultiIndex using two Series:

>>> s = pd.Series([1, 2, 3, 4])
>>> df.set_index([s, s**2])
      month  year  sale
1 1       1  2012    55
2 4       4  2014    40
3 9       7  2013    84
4 16     10  2014    31

Append a column to the existing index:

>>> df = df.set_index("month")
>>> df.set_index("year", append=True)
              sale
month  year
1      2012    55
4      2014    40
7      2013    84
10     2014    31
>>> df.set_index("year", append=False)
       sale
year
2012    55
2014    40
2013    84
2014    31
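The verify_integrity parameter is not demonstrated above. As a supplementary sketch (reusing the sales frame from the examples, and noting the keyword is deprecated per the parameter description), setting it to True makes the duplicate check eager, so a non-unique candidate index raises immediately:

```python
import pandas as pd

df = pd.DataFrame(
    {
        "month": [1, 4, 7, 10],
        "year": [2012, 2014, 2013, 2014],
        "sale": [55, 40, 84, 31],
    }
)

# "year" contains 2014 twice, so the eager integrity check raises.
try:
    df.set_index("year", verify_integrity=True)
except ValueError as exc:
    print(exc)  # reports the duplicate keys
```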
reindex(labels=None, *, index=None, columns=None, axis=None, method=None, copy=lib.no_default, level=None, fill_value=np.nan, limit=None, tolerance=None) → GeoDataFrame

Conform DataFrame to new index with optional filling logic.

Places NA/NaN in locations having no value in the previous index. A new object is produced unless the new index is equivalent to the current one and copy=False.

Parameters

labels : array-like, optional

New labels / index to conform the axis specified by ‘axis’ to.

index : array-like, optional

New labels for the index. Preferably an Index object to avoid duplicating data.

columns : array-like, optional

New labels for the columns. Preferably an Index object to avoid duplicating data.

axis : int or str, optional

Axis to target. Can be either the axis name (‘index’, ‘columns’) or number (0, 1).

method : {None, ‘backfill’/‘bfill’, ‘pad’/‘ffill’, ‘nearest’}

Method to use for filling holes in reindexed DataFrame. Please note: this is only applicable to DataFrames/Series with a monotonically increasing/decreasing index.

  • None (default): don’t fill gaps

  • pad / ffill: Propagate last valid observation forward to next valid.

  • backfill / bfill: Use next valid observation to fill gap.

  • nearest: Use nearest valid observations to fill gap.

copy : bool, default False

This keyword is now ignored; changing its value will have no impact on the method.

Deprecated since version 3.0.0: This keyword is ignored and will be removed in pandas 4.0. Since pandas 3.0, this method always returns a new object using a lazy copy mechanism that defers copies until necessary (Copy-on-Write). See the user guide on Copy-on-Write for more details.

level : int or name

Broadcast across a level, matching Index values on the passed MultiIndex level.

fill_value : scalar, default np.nan

Value to use for missing values. Defaults to NaN, but can be any “compatible” value.

limit : int, default None

Maximum number of consecutive elements to forward or backward fill.

tolerance : optional

Maximum distance between original and new labels for inexact matches. The values of the index at the matching locations must satisfy the equation abs(index[indexer] - target) <= tolerance.

Tolerance may be a scalar value, which applies the same tolerance to all values, or list-like, which applies variable tolerance per element. List-like includes list, tuple, array, Series, and must be the same size as the index and its dtype must exactly match the index’s type.

Returns

DataFrame

DataFrame with changed index.

See Also

DataFrame.set_index : Set row labels. DataFrame.reset_index : Remove row labels or move them to new columns. DataFrame.reindex_like : Change to same indices as other DataFrame.

Examples

DataFrame.reindex supports two calling conventions

  • (index=index_labels, columns=column_labels, ...)

  • (labels, axis={'index', 'columns'}, ...)

We highly recommend using keyword arguments to clarify your intent.

Create a DataFrame with some fictional data.

>>> index = ["Firefox", "Chrome", "Safari", "IE10", "Konqueror"]
>>> columns = ["http_status", "response_time"]
>>> df = pd.DataFrame(
...     [[200, 0.04], [200, 0.02], [404, 0.07], [404, 0.08], [301, 1.0]],
...     columns=columns,
...     index=index,
... )
>>> df
           http_status  response_time
Firefox            200           0.04
Chrome             200           0.02
Safari             404           0.07
IE10               404           0.08
Konqueror          301           1.00

Create a new index and reindex the DataFrame. By default values in the new index that do not have corresponding records in the DataFrame are assigned NaN.

>>> new_index = ["Safari", "Iceweasel", "Comodo Dragon", "IE10", "Chrome"]
>>> df.reindex(new_index)
               http_status  response_time
Safari               404.0           0.07
Iceweasel              NaN            NaN
Comodo Dragon          NaN            NaN
IE10                 404.0           0.08
Chrome               200.0           0.02

We can fill in the missing values by passing a value to the keyword fill_value. Because the index is not monotonically increasing or decreasing, we cannot use arguments to the keyword method to fill the NaN values.

>>> df.reindex(new_index, fill_value=0)
               http_status  response_time
Safari                 404           0.07
Iceweasel                0           0.00
Comodo Dragon            0           0.00
IE10                   404           0.08
Chrome                 200           0.02
>>> df.reindex(new_index, fill_value="missing")
              http_status response_time
Safari                404          0.07
Iceweasel         missing       missing
Comodo Dragon     missing       missing
IE10                  404          0.08
Chrome                200          0.02

We can also reindex the columns.

>>> df.reindex(columns=["http_status", "user_agent"])
           http_status  user_agent
Firefox            200         NaN
Chrome             200         NaN
Safari             404         NaN
IE10               404         NaN
Konqueror          301         NaN

Or we can use “axis-style” keyword arguments

>>> df.reindex(["http_status", "user_agent"], axis="columns")
           http_status  user_agent
Firefox            200         NaN
Chrome             200         NaN
Safari             404         NaN
IE10               404         NaN
Konqueror          301         NaN

To further illustrate the filling functionality in reindex, we will create a DataFrame with a monotonically increasing index (for example, a sequence of dates).

>>> date_index = pd.date_range("1/1/2010", periods=6, freq="D")
>>> df2 = pd.DataFrame(
...     {"prices": [100, 101, np.nan, 100, 89, 88]}, index=date_index
... )
>>> df2
            prices
2010-01-01   100.0
2010-01-02   101.0
2010-01-03     NaN
2010-01-04   100.0
2010-01-05    89.0
2010-01-06    88.0

Suppose we decide to expand the DataFrame to cover a wider date range.

>>> date_index2 = pd.date_range("12/29/2009", periods=10, freq="D")
>>> df2.reindex(date_index2)
            prices
2009-12-29     NaN
2009-12-30     NaN
2009-12-31     NaN
2010-01-01   100.0
2010-01-02   101.0
2010-01-03     NaN
2010-01-04   100.0
2010-01-05    89.0
2010-01-06    88.0
2010-01-07     NaN

The index entries that did not have a value in the original data frame (for example, ‘2009-12-29’) are by default filled with NaN. If desired, we can fill in the missing values using one of several options.

For example, to back-propagate the last valid value to fill the NaN values, pass bfill as an argument to the method keyword.

>>> df2.reindex(date_index2, method="bfill")
            prices
2009-12-29   100.0
2009-12-30   100.0
2009-12-31   100.0
2010-01-01   100.0
2010-01-02   101.0
2010-01-03     NaN
2010-01-04   100.0
2010-01-05    89.0
2010-01-06    88.0
2010-01-07     NaN

Please note that the NaN value present in the original DataFrame (at index value 2010-01-03) will not be filled by any of the value propagation schemes. This is because filling while reindexing does not look at DataFrame values, but only compares the original and desired indexes. If you do want to fill in the NaN values present in the original DataFrame, use the fillna() method.

See the user guide for more.

reindex_like(other, method=None, copy=lib.no_default, limit=None, tolerance=None) → GeoDataFrame

Return an object with matching indices as other object.

Conform the object to the same index on all axes. Optional filling logic, placing NaN in locations having no value in the previous index. A new object is produced unless the new index is equivalent to the current one and copy=False.

Parameters

other : Object of the same data type

Its row and column indices are used to define the new indices of this object.

method : {None, ‘backfill’/‘bfill’, ‘pad’/‘ffill’, ‘nearest’}

Method to use for filling holes in reindexed DataFrame. Please note: this is only applicable to DataFrames/Series with a monotonically increasing/decreasing index.

Deprecated since version 3.0.0.

  • None (default): don’t fill gaps

  • pad / ffill: propagate last valid observation forward to next valid

  • backfill / bfill: use next valid observation to fill gap

  • nearest: use nearest valid observations to fill gap.

copy : bool, default False

This keyword is now ignored; changing its value will have no impact on the method.

Deprecated since version 3.0.0: This keyword is ignored and will be removed in pandas 4.0. Since pandas 3.0, this method always returns a new object using a lazy copy mechanism that defers copies until necessary (Copy-on-Write). See the user guide on Copy-on-Write for more details.

limit : int, default None

Maximum number of consecutive labels to fill for inexact matches.

tolerance : optional

Maximum distance between original and new labels for inexact matches. The values of the index at the matching locations must satisfy the equation abs(index[indexer] - target) <= tolerance.

Tolerance may be a scalar value, which applies the same tolerance to all values, or list-like, which applies variable tolerance per element. List-like includes list, tuple, array, Series, and must be the same size as the index and its dtype must exactly match the index’s type.

Returns

Series or DataFrame

Same type as caller, but with changed indices on each axis.

See Also

DataFrame.set_index : Set row labels. DataFrame.reset_index : Remove row labels or move them to new columns. DataFrame.reindex : Change to new indices or expand indices.

Notes

Same as calling .reindex(index=other.index, columns=other.columns,...).

Examples

>>> df1 = pd.DataFrame(
...     [
...         [24.3, 75.7, "high"],
...         [31, 87.8, "high"],
...         [22, 71.6, "medium"],
...         [35, 95, "medium"],
...     ],
...     columns=["temp_celsius", "temp_fahrenheit", "windspeed"],
...     index=pd.date_range(start="2014-02-12", end="2014-02-15", freq="D"),
... )
>>> df1
            temp_celsius  temp_fahrenheit windspeed
2014-02-12          24.3             75.7      high
2014-02-13          31.0             87.8      high
2014-02-14          22.0             71.6    medium
2014-02-15          35.0             95.0    medium
>>> df2 = pd.DataFrame(
...     [[28, "low"], [30, "low"], [35.1, "medium"]],
...     columns=["temp_celsius", "windspeed"],
...     index=pd.DatetimeIndex(["2014-02-12", "2014-02-13", "2014-02-15"]),
... )
>>> df2
            temp_celsius windspeed
2014-02-12          28.0       low
2014-02-13          30.0       low
2014-02-15          35.1    medium
>>> df2.reindex_like(df1)
            temp_celsius  temp_fahrenheit windspeed
2014-02-12          28.0              NaN       low
2014-02-13          30.0              NaN       low
2014-02-14           NaN              NaN       NaN
2014-02-15          35.1              NaN    medium
filter(items=None, like: str | None = None, regex: str | None = None, axis=None)

Subset the DataFrame or Series according to the specified index labels.

For DataFrame, filter rows or columns depending on axis argument. Note that this routine does not filter based on content. The filter is applied to the labels of the index.

Parameters

items : list-like

Keep labels from axis which are in items.

like : str

Keep labels from axis for which “like in label == True”.

regex : str (regular expression)

Keep labels from axis for which re.search(regex, label) == True.

axis : {0 or ‘index’, 1 or ‘columns’, None}, default None

The axis to filter on, expressed either as an index (int) or axis name (str). By default this is the info axis, ‘columns’ for DataFrame. For Series this parameter is unused and defaults to None.

Returns

Same type as caller

The filtered subset of the DataFrame or Series.

See Also

DataFrame.loc : Access a group of rows and columns by label(s) or a boolean array.

Notes

The items, like, and regex parameters are enforced to be mutually exclusive.

axis defaults to the info axis that is used when indexing with [].

Examples

>>> df = pd.DataFrame(
...     np.array(([1, 2, 3], [4, 5, 6])),
...     index=["mouse", "rabbit"],
...     columns=["one", "two", "three"],
... )
>>> df
        one  two  three
mouse     1    2      3
rabbit    4    5      6
>>> # select columns by name
>>> df.filter(items=["one", "three"])
         one  three
mouse     1      3
rabbit    4      6
>>> # select columns by regular expression
>>> df.filter(regex="e$", axis=1)
         one  three
mouse     1      3
rabbit    4      6
>>> # select rows containing 'bbi'
>>> df.filter(like="bbi", axis=0)
         one  two  three
rabbit    4    5      6
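The Notes above state that items, like, and regex are mutually exclusive. As a supplementary sketch (reusing the frame from the examples), combining two of them raises a TypeError:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    np.array(([1, 2, 3], [4, 5, 6])),
    index=["mouse", "rabbit"],
    columns=["one", "two", "three"],
)

# Passing both items and like violates the mutual-exclusion rule.
try:
    df.filter(items=["one"], like="e")
except TypeError as exc:
    print(exc)
```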
assign(**kwargs) → GeoDataFrame

Assign new columns to a DataFrame.

Returns a new object with all original columns in addition to new ones. Existing columns that are re-assigned will be overwritten.

Parameters

**kwargs : callable or Series

The column names are keywords. If the values are callable, they are computed on the DataFrame and assigned to the new columns. The callable must not change input DataFrame (though pandas doesn’t check it). If the values are not callable, (e.g. a Series, scalar, or array), they are simply assigned.

Returns

DataFrame

A new DataFrame with the new columns in addition to all the existing columns.

See Also

DataFrame.loc : Select a subset of a DataFrame by labels. DataFrame.iloc : Select a subset of a DataFrame by positions.

Notes

Assigning multiple columns within the same assign is possible. Later items in ‘**kwargs’ may refer to newly created or modified columns in ‘df’; items are computed and assigned into ‘df’ in order.

Examples

>>> df = pd.DataFrame({"temp_c": [17.0, 25.0]}, index=["Portland", "Berkeley"])
>>> df
          temp_c
Portland    17.0
Berkeley    25.0

Where the value is a callable, evaluated on df:

>>> df.assign(temp_f=lambda x: x.temp_c * 9 / 5 + 32)
          temp_c  temp_f
Portland    17.0    62.6
Berkeley    25.0    77.0

Alternatively, the same behavior can be achieved by directly referencing an existing Series or sequence:

>>> df.assign(temp_f=df["temp_c"] * 9 / 5 + 32)
          temp_c  temp_f
Portland    17.0    62.6
Berkeley    25.0    77.0

or by using pandas.col():

>>> df.assign(temp_f=pd.col("temp_c") * 9 / 5 + 32)
          temp_c  temp_f
Portland    17.0    62.6
Berkeley    25.0    77.0

You can create multiple columns within the same assign where one of the columns depends on another one defined within the same assign:

>>> df.assign(
...     temp_f=lambda x: x["temp_c"] * 9 / 5 + 32,
...     temp_k=lambda x: (x["temp_f"] + 459.67) * 5 / 9,
... )
          temp_c  temp_f  temp_k
Portland    17.0    62.6  290.15
Berkeley    25.0    77.0  298.15
take(indices, axis=0, **kwargs) → GeoDataFrame

Return the elements in the given positional indices along an axis.

This means that we are not indexing according to actual values in the index attribute of the object. We are indexing according to the actual position of the element in the object.

Parameters

indices : array-like

An array of ints indicating which positions to take.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

The axis on which to select elements. 0 means that we are selecting rows, 1 means that we are selecting columns. For Series this parameter is unused and defaults to 0.

**kwargs

For compatibility with numpy.take(). Has no effect on the output.

Returns

same type as caller

An array-like containing the elements taken from the object.

See Also

DataFrame.loc : Select a subset of a DataFrame by labels. DataFrame.iloc : Select a subset of a DataFrame by positions. numpy.take : Take elements from an array along an axis.

Examples

>>> df = pd.DataFrame(
...     [
...         ("falcon", "bird", 389.0),
...         ("parrot", "bird", 24.0),
...         ("lion", "mammal", 80.5),
...         ("monkey", "mammal", np.nan),
...     ],
...     columns=["name", "class", "max_speed"],
...     index=[0, 2, 3, 1],
... )
>>> df
     name   class  max_speed
0  falcon    bird      389.0
2  parrot    bird       24.0
3    lion  mammal       80.5
1  monkey  mammal        NaN

Take elements at positions 0 and 3 along the axis 0 (default).

Note how the actual indices selected (0 and 1) do not correspond to our selected indices 0 and 3. That’s because we are selecting the 0th and 3rd rows, not rows whose indices equal 0 and 3.

>>> df.take([0, 3])
     name   class  max_speed
0  falcon    bird      389.0
1  monkey  mammal        NaN

Take elements at indices 1 and 2 along the axis 1 (column selection).

>>> df.take([1, 2], axis=1)
    class  max_speed
0    bird      389.0
2    bird       24.0
3  mammal       80.5
1  mammal        NaN

We may take elements using negative integers for positive indices, starting from the end of the object, just like with Python lists.

>>> df.take([-1, -2])
     name   class  max_speed
1  monkey  mammal        NaN
3    lion  mammal       80.5
copy(deep: bool = True) → GeoDataFrame

Make a copy of this object’s indices and data.

When deep=True (default), a new object will be created with a copy of the calling object’s data and indices. Modifications to the data or indices of the copy will not be reflected in the original object (see notes below).

When deep=False, a new object will be created without copying the calling object’s data or index (only references to the data and index are copied). With Copy-on-Write, changes to the original will not be reflected in the shallow copy (and vice versa). The shallow copy uses a lazy (deferred) copy mechanism that copies the data only when any changes to the original or shallow copy are made, ensuring memory efficiency while maintaining data integrity.

Note

In pandas versions prior to 3.0, the default behavior without Copy-on-Write was different: changes to the original were reflected in the shallow copy (and vice versa). See the Copy-on-Write user guide for more information.

Parameters

deep : bool, default True

Make a deep copy, including a copy of the data and the indices. With deep=False neither the indices nor the data are copied.

Returns

Series or DataFrame

Object type matches caller.

See Also

copy.copy : Return a shallow copy of an object. copy.deepcopy : Return a deep copy of an object.

Notes

When deep=True, data is copied but actual Python objects will not be copied recursively, only the reference to the object. This is in contrast to copy.deepcopy in the Standard Library, which recursively copies object data (see examples below).

While Index objects are copied when deep=True, the underlying numpy array is not copied for performance reasons. Since Index is immutable, the underlying data can be safely shared and a copy is not needed.

Since pandas is not thread safe, see the gotchas when copying in a threading environment.

Copy-on-Write protects shallow copies against accidental modifications. This means that any changes to the copied data would make a new copy of the data upon write (and vice versa). Changes made to either the original or copied variable would not be reflected in the counterpart. See the Copy-on-Write user guide for more information.

Examples

>>> s = pd.Series([1, 2], index=["a", "b"])
>>> s
a    1
b    2
dtype: int64
>>> s_copy = s.copy(deep=True)
>>> s_copy
a    1
b    2
dtype: int64

Due to Copy-on-Write, shallow copies are still protected against data modifications. Note that shallow is not modified below.

>>> s = pd.Series([1, 2], index=["a", "b"])
>>> shallow = s.copy(deep=False)
>>> s.iloc[1] = 200
>>> shallow
a    1
b    2
dtype: int64

When the data has object dtype, even a deep copy does not copy the underlying Python objects. Updating a nested data object will be reflected in the deep copy.

>>> s = pd.Series([[1, 2], [3, 4]])
>>> deep = s.copy()
>>> s[0][0] = 10
>>> s
0    [10, 2]
1     [3, 4]
dtype: object
>>> deep
0    [10, 2]
1     [3, 4]
dtype: object
sort_values(by, *, axis=0, ascending=True, inplace: bool = False, kind: str = 'quicksort', na_position: str = 'last', ignore_index: bool = False, key=None)

Sort by the values along either axis.

Parameters

bystr or list of str

Name or list of names to sort by.

  • if axis is 0 or ‘index’ then by may contain index levels and/or column labels.

  • if axis is 1 or ‘columns’ then by may contain column levels and/or index labels.

axis{0 or ‘index’, 1 or ‘columns’}, default 0

Axis to be sorted.

ascendingbool or list of bool, default True

Sort ascending vs. descending. Specify list for multiple sort orders. If this is a list of bools, it must match the length of by.

inplacebool, default False

If True, perform operation in-place.

kind{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, default ‘quicksort’

Choice of sorting algorithm. See also numpy.sort() for more information. mergesort and stable are the only stable algorithms. For DataFrames, this option is only applied when sorting on a single column or label.

na_position{‘first’, ‘last’}, default ‘last’

Puts NaNs at the beginning if first; last puts NaNs at the end.

ignore_indexbool, default False

If True, the resulting axis will be labeled 0, 1, …, n - 1.

keycallable, optional

Apply the key function to the values before sorting. This is similar to the key argument in the builtin sorted() function, with the notable difference that this key function should be vectorized. It should expect a Series and return a Series with the same shape as the input. It will be applied to each column in by independently. The values in the returned Series will be used as the keys for sorting.

Returns

DataFrame or None

DataFrame with sorted values or None if inplace=True.

See Also

DataFrame.sort_index : Sort a DataFrame by the index. Series.sort_values : Similar method for a Series.

Examples

>>> df = pd.DataFrame(
...     {
...         "col1": ["A", "A", "B", np.nan, "D", "C"],
...         "col2": [2, 1, 9, 8, 7, 4],
...         "col3": [0, 1, 9, 4, 2, 3],
...         "col4": ["a", "B", "c", "D", "e", "F"],
...     }
... )
>>> df
  col1  col2  col3 col4
0    A     2     0    a
1    A     1     1    B
2    B     9     9    c
3  NaN     8     4    D
4    D     7     2    e
5    C     4     3    F

Sort by a single column

In this case, we are sorting the rows according to values in col1:

>>> df.sort_values(by=["col1"])
  col1  col2  col3 col4
0    A     2     0    a
1    A     1     1    B
2    B     9     9    c
5    C     4     3    F
4    D     7     2    e
3  NaN     8     4    D

Sort by multiple columns

You can also provide multiple columns to the by argument, as shown below. In this example, the rows are first sorted according to col1, and then the rows that have an identical value in col1 are sorted according to col2.

>>> df.sort_values(by=["col1", "col2"])
  col1  col2  col3 col4
1    A     1     1    B
0    A     2     0    a
2    B     9     9    c
5    C     4     3    F
4    D     7     2    e
3  NaN     8     4    D

Sort in a descending order

The sort order can be reversed using the ascending argument, as shown below:

>>> df.sort_values(by="col1", ascending=False)
  col1  col2  col3 col4
4    D     7     2    e
5    C     4     3    F
2    B     9     9    c
0    A     2     0    a
1    A     1     1    B
3  NaN     8     4    D

Placing any NA first

Note that in the above example, the rows that contain an NA value in col1 are placed at the end of the dataframe. This behavior can be modified via the na_position argument, as shown below:

>>> df.sort_values(by="col1", ascending=False, na_position="first")
  col1  col2  col3 col4
3  NaN     8     4    D
4    D     7     2    e
5    C     4     3    F
2    B     9     9    c
0    A     2     0    a
1    A     1     1    B

Customized sort order

The key argument allows for further customization of sorting behaviour. For example, you may want to ignore letter case when sorting strings:

>>> df.sort_values(by="col4", key=lambda col: col.str.lower())
   col1  col2  col3 col4
0    A     2     0    a
1    A     1     1    B
2    B     9     9    c
3  NaN     8     4    D
4    D     7     2    e
5    C     4     3    F

Another typical example is natural sorting. This can be done using the natsort package, which provides a function to generate a key to sort data in their natural order:

>>> df = pd.DataFrame(
...     {
...         "hours": ["0hr", "128hr", "0hr", "64hr", "64hr", "128hr"],
...         "mins": [
...             "10mins",
...             "40mins",
...             "40mins",
...             "40mins",
...             "10mins",
...             "10mins",
...         ],
...         "value": [10, 20, 30, 40, 50, 60],
...     }
... )
>>> df
   hours    mins  value
0    0hr  10mins     10
1  128hr  40mins     20
2    0hr  40mins     30
3   64hr  40mins     40
4   64hr  10mins     50
5  128hr  10mins     60
>>> from natsort import natsort_keygen
>>> df.sort_values(
...     by=["hours", "mins"],
...     key=natsort_keygen(),
... )
   hours    mins  value
0    0hr  10mins     10
2    0hr  40mins     30
4   64hr  10mins     50
3   64hr  40mins     40
5  128hr  10mins     60
1  128hr  40mins     20
sort_index(*, axis=0, level=None, ascending=True, inplace: bool = False, kind: str = 'quicksort', na_position: str = 'last', sort_remaining: bool = True, ignore_index: bool = False, key=None)

Sort object by labels (along an axis).

Returns a new DataFrame sorted by label if inplace argument is False, otherwise updates the original DataFrame and returns None.

Parameters

axis{0 or ‘index’, 1 or ‘columns’}, default 0

The axis along which to sort. The value 0 identifies the rows, and 1 identifies the columns.

levelint or level name or list of ints or list of level names

If not None, sort on values in specified index level(s).

ascendingbool or list-like of bools, default True

Sort ascending vs. descending. When the index is a MultiIndex the sort direction can be controlled for each level individually.

inplacebool, default False

Whether to modify the DataFrame rather than creating a new one.

kind{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, default ‘quicksort’

Choice of sorting algorithm. See also numpy.sort() for more information. mergesort and stable are the only stable algorithms. For DataFrames, this option is only applied when sorting on a single column or label.

na_position{‘first’, ‘last’}, default ‘last’

Puts NaNs at the beginning if first; last puts NaNs at the end. Not implemented for MultiIndex.

sort_remainingbool, default True

If True and sorting by level and index is multilevel, sort by other levels too (in order) after sorting by specified level.

ignore_indexbool, default False

If True, the resulting axis will be labeled 0, 1, …, n - 1.

keycallable, optional

If not None, apply the key function to the index values before sorting. This is similar to the key argument in the builtin sorted() function, with the notable difference that this key function should be vectorized. It should expect an Index and return an Index of the same shape. For MultiIndex inputs, the key is applied per level.

Returns

DataFrame or None

The original DataFrame sorted by the labels or None if inplace=True.

See Also

Series.sort_index : Sort Series by the index. DataFrame.sort_values : Sort DataFrame by the value. Series.sort_values : Sort Series by the value.

Examples

>>> df = pd.DataFrame(
...     [1, 2, 3, 4, 5], index=[100, 29, 234, 1, 150], columns=["A"]
... )
>>> df.sort_index()
     A
1    4
29   2
100  1
150  5
234  3

By default, it sorts in ascending order. To sort in descending order, use ascending=False:

>>> df.sort_index(ascending=False)
     A
234  3
150  5
100  1
29   2
1    4

A key function can be specified which is applied to the index before sorting. For a MultiIndex this is applied to each level separately.

>>> df = pd.DataFrame({"a": [1, 2, 3, 4]}, index=["A", "b", "C", "d"])
>>> df.sort_index(key=lambda x: x.str.lower())
   a
A  1
b  2
C  3
d  4
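Because the key is applied to each level of a MultiIndex separately, a per-level transformation such as lowercasing can drive multi-level sorting; a small sketch with string levels:

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples([("b", "Y"), ("a", "x"), ("C", "z")])
df = pd.DataFrame({"v": [1, 2, 3]}, index=idx)

# key receives each level's Index in turn; lowercase both string levels
out = df.sort_index(key=lambda level: level.str.lower())
print(out["v"].tolist())  # -> [2, 1, 3]
```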
apply(func, axis=0, raw: bool = False, result_type=None, args=(), **kwargs)

Apply a function along an axis of the DataFrame.

Objects passed to the function are Series objects whose index is either the DataFrame’s index (axis=0) or the DataFrame’s columns (axis=1). By default (result_type=None), the final return type is inferred from the return type of the applied function. Otherwise, it depends on the result_type argument. The return type of the applied function is inferred based on the first computed result obtained after applying the function to a Series object.

Parameters

funcfunction

Function to apply to each column or row.

axis{0 or ‘index’, 1 or ‘columns’}, default 0

Axis along which the function is applied:

  • 0 or ‘index’: apply function to each column.

  • 1 or ‘columns’: apply function to each row.

rawbool, default False

Determines if row or column is passed as a Series or ndarray object:

  • False : passes each row or column as a Series to the function.

  • True : the passed function will receive ndarray objects instead. If you are just applying a NumPy reduction function this will achieve much better performance.

Note

When raw=True, the result dtype is inferred from the first returned value.

result_type{‘expand’, ‘reduce’, ‘broadcast’, None}, default None

These only act when axis=1 (columns):

  • ‘expand’ : list-like results will be turned into columns.

  • ‘reduce’ : returns a Series if possible rather than expanding list-like results. This is the opposite of ‘expand’.

  • ‘broadcast’ : results will be broadcast to the original shape of the DataFrame, the original index and columns will be retained.

The default behaviour (None) depends on the return value of the applied function: list-like results will be returned as a Series of those. However if the apply function returns a Series these are expanded to columns.

argstuple

Positional arguments to pass to func in addition to the array/series.

by_rowFalse or “compat”, default “compat”

Only has an effect when func is a listlike or dictlike of funcs and the func isn’t a string. If “compat”, will if possible first translate the func into pandas methods (e.g. Series().apply(np.sum) will be translated to Series().sum()). If that doesn’t work, will try to call apply again with by_row=True and, if that fails, will call apply again with by_row=False (backward compatible). If False, the funcs will be passed the whole Series at once.

Added in version 2.1.0.

enginedecorator or {‘python’, ‘numba’}, optional

Choose the execution engine to use. If not provided the function will be executed by the regular Python interpreter.

Other options include JIT compilers such as Numba and Bodo, which in some cases can speed up the execution. To use an executor you can provide the decorators numba.jit, numba.njit or bodo.jit. You can also provide the decorator with parameters, like numba.jit(nogil=True).

Not all functions can be executed with all execution engines. In general, JIT compilers will require type stability in the function (no variable should change data type during the execution), and not all pandas and NumPy APIs are supported. Check the engine documentation for limitations.

Warning

String parameters will stop being supported in a future pandas version.

Added in version 2.2.0.

engine_kwargsdict

Pass keyword arguments to the engine. This is currently only used by the numba engine, see the documentation for the engine argument for more information.

**kwargs

Additional keyword arguments to pass as keyword arguments to func.

Returns

Series or DataFrame

Result of applying func along the given axis of the DataFrame.

See Also

DataFrame.map: For elementwise operations. DataFrame.aggregate: Only perform aggregating type operations. DataFrame.transform: Only perform transforming type operations.

Notes

Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See the pandas gotchas section on mutation with user-defined function (UDF) methods for more details.

Examples

>>> df = pd.DataFrame([[4, 9]] * 3, columns=["A", "B"])
>>> df
   A  B
0  4  9
1  4  9
2  4  9

Using a numpy universal function (in this case the same as np.sqrt(df)):

>>> df.apply(np.sqrt)
     A    B
0  2.0  3.0
1  2.0  3.0
2  2.0  3.0

Using a reducing function on either axis

>>> df.apply(np.sum, axis=0)
A    12
B    27
dtype: int64
>>> df.apply(np.sum, axis=1)
0    13
1    13
2    13
dtype: int64

Returning a list-like will result in a Series

>>> df.apply(lambda x: [1, 2], axis=1)
0    [1, 2]
1    [1, 2]
2    [1, 2]
dtype: object

Passing result_type='expand' will expand list-like results to columns of a DataFrame

>>> df.apply(lambda x: [1, 2], axis=1, result_type="expand")
   0  1
0  1  2
1  1  2
2  1  2

Returning a Series inside the function is similar to passing result_type='expand'. The resulting column names will be the Series index.

>>> df.apply(lambda x: pd.Series([1, 2], index=["foo", "bar"]), axis=1)
   foo  bar
0    1    2
1    1    2
2    1    2

Passing result_type='broadcast' will ensure the same shape result, whether list-like or scalar is returned by the function, and broadcast it along the axis. The resulting column names will be the originals.

>>> df.apply(lambda x: [1, 2], axis=1, result_type="broadcast")
   A  B
0  1  2
1  1  2
2  1  2

Advanced users can speed up their code by using a Just-in-time (JIT) compiler with apply. The main JIT compilers available for pandas are Numba and Bodo. In general, JIT compilation is only possible when the function passed to apply has type stability (variables in the function do not change their type during the execution).

>>> import bodo
>>> df.apply(lambda x: x.A + x.B, axis=1, engine=bodo.jit)

Note that JIT compilation is only recommended for functions that take a significant amount of time to run. Fast functions are unlikely to run faster with JIT compilation.
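Extra positional and keyword arguments are forwarded to the function via args and **kwargs; a minimal sketch (rescale is a hypothetical helper defined here for illustration):

```python
import pandas as pd

df = pd.DataFrame([[4, 9]] * 3, columns=["A", "B"])

def rescale(col, factor, offset=0):
    # col arrives as a Series (one per column, since axis=0 is the default)
    return col * factor + offset

# factor comes from args, offset from **kwargs
out = df.apply(rescale, args=(2,), offset=1)
print(out.loc[0].tolist())  # -> [9, 19]
```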

dissolve(by: str | None = None, aggfunc='first', as_index: bool = True, level=None, sort: bool = True, observed: bool = False, dropna: bool = True, method: Literal['unary', 'coverage', 'disjoint_subset'] = 'unary', grid_size: float | None = None, **kwargs) GeoDataFrame

Dissolve geometries within groupby into a single observation. This is accomplished by applying the union_all method to all geometries within a groupby group.

Observations associated with each groupby group will be aggregated using the aggfunc.

Parameters

bystr or list-like, default None

Column(s) whose values define the groups to be dissolved. If None, the entire GeoDataFrame is considered as a single group. If a list-like object is provided, the values in the list are treated as categorical labels, and polygons will be combined based on the equality of these categorical labels.

aggfuncfunction or string, default “first”

Aggregation function for manipulation of data associated with each group. Passed to pandas groupby.agg method. Accepted combinations are:

  • function

  • string function name

  • list of functions and/or function names, e.g. [np.sum, ‘mean’]

  • dict of axis labels -> functions, function names or list of such.

as_indexboolean, default True

If true, groupby columns become index of result.

levelint or str or sequence of int or sequence of str, default None

If the axis is a MultiIndex (hierarchical), group by a particular level or levels.

sortbool, default True

Sort group keys. Get better performance by turning this off. Note this does not influence the order of observations within each group. Groupby preserves the order of rows within each group.

observedbool, default False

This only applies if any of the groupers are Categoricals. If True: only show observed values for categorical groupers. If False: show all values for categorical groupers.

dropnabool, default True

If True, and if group keys contain NA values, NA values together with row/column will be dropped. If False, NA values will also be treated as the key in groups.

methodstr (default "unary")

The method to use for the union. Options are:

  • "unary": use the unary union algorithm. This option is the most robust but can be slow for large numbers of geometries (default).

  • "coverage": use the coverage union algorithm. This option is optimized for non-overlapping polygons and can be significantly faster than the unary union algorithm. However, it can produce invalid geometries if the polygons overlap.

  • "disjoint_subset": use the disjoint subset union algorithm. This option is optimized for inputs that can be divided into subsets that do not intersect. If there is only one such subset, performance can be expected to be worse than "unary". Requires Shapely >= 2.1.

grid_sizefloat, default None

When grid size is specified, a fixed-precision space is used to perform the union operations. This can be useful when unioning geometries that are not perfectly snapped or to avoid geometries not being unioned because of robustness issues. The inputs are first snapped to a grid of the given size. When a line segment of a geometry is within tolerance of a vertex of another geometry, this vertex will be inserted in the line segment. Finally, the result vertices are computed on the same grid. Is only supported for method "unary". If None, the highest precision of the inputs will be used. Defaults to None.

Added in version 1.1.0.

**kwargs :

Keyword arguments to be passed to the pandas DataFrameGroupby.agg method which is used by dissolve. In particular, numeric_only may be supplied, which will be required in pandas 2.0 for certain aggfuncs.

Added in version 0.13.0.

Returns

GeoDataFrame

Examples

>>> from shapely.geometry import Point
>>> d = {
...     "col1": ["name1", "name2", "name1"],
...     "geometry": [Point(1, 2), Point(2, 1), Point(0, 1)],
... }
>>> gdf = geopandas.GeoDataFrame(d, crs=4326)
>>> gdf
    col1     geometry
0  name1  POINT (1 2)
1  name2  POINT (2 1)
2  name1  POINT (0 1)
>>> dissolved = gdf.dissolve('col1')
>>> dissolved
                        geometry
col1
name1  MULTIPOINT ((0 1), (1 2))
name2                POINT (2 1)

See Also

GeoDataFrame.explode : explode multi-part geometries into single geometries
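The non-geometry columns are aggregated just as pandas groupby.agg would aggregate them. Geometry aside, the tabular side of dissolve can be sketched with pandas alone (the pop column is hypothetical, added for illustration):

```python
import pandas as pd

df = pd.DataFrame({"col1": ["name1", "name2", "name1"], "pop": [10, 20, 30]})

# dissolve(by="col1", aggfunc="sum") aggregates attributes like this groupby
agg = df.groupby("col1").agg({"pop": "sum"})
print(agg.loc["name1", "pop"])  # -> 40
```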

dissolve_lazy(by: str | None = None, aggfunc='first', as_index: bool = True, level=None, sort: bool = True, observed: bool = False, dropna: bool = True, method: Literal['unary', 'coverage', 'disjoint_subset'] = 'unary', grid_size: float | None = None, **kwargs)

Build a predicate-first dissolve view with on-demand materialization.

explode(column: str | None = None, ignore_index: bool = False, index_parts: bool = False, **kwargs) GeoDataFrame | pandas.DataFrame

Explode multi-part geometries into multiple single geometries.

Each row containing a multi-part geometry will be split into multiple rows with single geometries, thereby increasing the vertical size of the GeoDataFrame.

Parameters

columnstring, default None

Column to explode. In the case of a geometry column, multi-part geometries are converted to single-part. If None, the active geometry column is used.

ignore_indexbool, default False

If True, the resulting index will be labelled 0, 1, …, n - 1, ignoring index_parts.

index_partsboolean, default False

If True, the resulting index will be a multi-index (original index with an additional level indicating the multiple geometries: a new zero-based index for each single part geometry per multi-part geometry).

Returns

GeoDataFrame

Exploded geodataframe with each single geometry as a separate entry in the geodataframe.

Examples

>>> from shapely.geometry import MultiPoint
>>> d = {
...     "col1": ["name1", "name2"],
...     "geometry": [
...         MultiPoint([(1, 2), (3, 4)]),
...         MultiPoint([(2, 1), (0, 0)]),
...     ],
... }
>>> gdf = geopandas.GeoDataFrame(d, crs=4326)
>>> gdf
    col1               geometry
0  name1  MULTIPOINT ((1 2), (3 4))
1  name2  MULTIPOINT ((2 1), (0 0))
>>> exploded = gdf.explode(index_parts=True)
>>> exploded
      col1     geometry
0 0  name1  POINT (1 2)
  1  name1  POINT (3 4)
1 0  name2  POINT (2 1)
  1  name2  POINT (0 0)
>>> exploded = gdf.explode(index_parts=False)
>>> exploded
    col1     geometry
0  name1  POINT (1 2)
0  name1  POINT (3 4)
1  name2  POINT (2 1)
1  name2  POINT (0 0)
>>> exploded = gdf.explode(ignore_index=True)
>>> exploded
    col1     geometry
0  name1  POINT (1 2)
1  name1  POINT (3 4)
2  name2  POINT (2 1)
3  name2  POINT (0 0)

See Also

GeoDataFrame.dissolve : dissolve geometries into a single observation.

to_postgis(name: str, con, schema: str | None = None, if_exists: Literal['fail', 'replace', 'append'] = 'fail', index: bool = False, index_label: collections.abc.Iterable[str] | str | None = None, chunksize: int | None = None, dtype=None) None

Upload GeoDataFrame into PostGIS database.

This method requires SQLAlchemy and GeoAlchemy2, and a PostgreSQL Python driver (psycopg or psycopg2) to be installed.

It is also possible to use to_file() to write to a database. Especially for file geodatabases like GeoPackage or SpatiaLite this can be easier.

Parameters

namestr

Name of the target table.

consqlalchemy.engine.Connection or sqlalchemy.engine.Engine

Active connection to the PostGIS database.

if_exists{‘fail’, ‘replace’, ‘append’}, default ‘fail’

How to behave if the table already exists:

  • fail: Raise a ValueError.

  • replace: Drop the table before inserting new values.

  • append: Insert new values to the existing table.

schemastring, optional

Specify the schema. If None, use default schema: ‘public’.

indexbool, default False

Write DataFrame index as a column. Uses index_label as the column name in the table.

index_labelstring or sequence, default None

Column label for index column(s). If None is given (default) and index is True, then the index names are used.

chunksizeint, optional

Rows will be written in batches of this size at a time. By default, all rows will be written at once.

dtypedict of column name to SQL type, default None

Specifying the datatype for columns. The keys should be the column names and the values should be the SQLAlchemy types.

Examples

>>> from sqlalchemy import create_engine
>>> engine = create_engine("postgresql://myusername:mypassword@myhost:5432/mydatabase")
>>> gdf.to_postgis("my_table", engine)

See Also

GeoDataFrame.to_file : write GeoDataFrame to file read_postgis : read PostGIS database to GeoDataFrame

plot
explore(*args, **kwargs) folium.Map
sjoin(df: GeoDataFrame, how: Literal['left', 'right', 'inner', 'outer'] = 'inner', predicate: str = 'intersects', lsuffix: str = 'left', rsuffix: str = 'right', **kwargs) GeoDataFrame

Spatial join of two GeoDataFrames.

See the User Guide page on merging data for details.

Parameters

dfGeoDataFrame

howstring, default ‘inner’

The type of join:

  • ‘left’: use keys from left_df; retain only left_df geometry column

  • ‘right’: use keys from right_df; retain only right_df geometry column

  • ‘inner’: use intersection of keys from both dfs; retain only left_df geometry column

  • ‘outer’: use union of keys from both dfs; retain a single active geometry column by preferring left geometries and filling unmatched right-only rows from the right geometry column

predicatestring, default ‘intersects’

Binary predicate. Valid values are determined by the spatial index used. You can check the valid values in left_df or right_df as left_df.sindex.valid_query_predicates or right_df.sindex.valid_query_predicates

Available predicates include:

  • 'intersects': True if geometries intersect (boundaries and interiors)

  • 'within': True if left geometry is completely within right geometry

  • 'contains': True if left geometry completely contains right geometry

  • 'contains_properly': True if left geometry contains right geometry and their boundaries do not touch

  • 'overlaps': True if geometries overlap but neither contains the other

  • 'crosses': True if geometries cross (interiors intersect but neither contains the other, with intersection dimension less than max dimension)

  • 'touches': True if geometries touch at boundaries but interiors don’t

  • 'covers': True if left geometry covers right geometry (every point of right is a point of left)

  • 'covered_by': True if left geometry is covered by right geometry

  • 'dwithin': True if geometries are within specified distance (requires distance parameter)

lsuffixstring, default ‘left’

Suffix to apply to overlapping column names (left GeoDataFrame).

rsuffixstring, default ‘right’

Suffix to apply to overlapping column names (right GeoDataFrame).

distancenumber or array_like, optional

Distance(s) around each input geometry within which to query the tree for the ‘dwithin’ predicate. If array_like, must be one-dimensional with length equal to the length of the left GeoDataFrame. Required if predicate='dwithin'.

on_attributestring, list or tuple

Column name(s) to join on as an additional join restriction on top of the spatial predicate. These must be found in both DataFrames. If set, observations are joined only if the predicate applies and values in the specified columns match.

Examples

>>> import geodatasets
>>> chicago = geopandas.read_file(
...     geodatasets.get_path("geoda.chicago_commpop")
... )
>>> groceries = geopandas.read_file(
...     geodatasets.get_path("geoda.groceries")
... ).to_crs(chicago.crs)
>>> chicago.head()
         community  ...                                           geometry
0          DOUGLAS  ...  MULTIPOLYGON (((-87.60914 41.84469, -87.60915 ...
1          OAKLAND  ...  MULTIPOLYGON (((-87.59215 41.81693, -87.59231 ...
2      FULLER PARK  ...  MULTIPOLYGON (((-87.62880 41.80189, -87.62879 ...
3  GRAND BOULEVARD  ...  MULTIPOLYGON (((-87.60671 41.81681, -87.60670 ...
4          KENWOOD  ...  MULTIPOLYGON (((-87.59215 41.81693, -87.59215 ...

[5 rows x 9 columns]

>>> groceries.head()
   OBJECTID     Ycoord  ...  Category                           geometry
0        16  41.973266  ...       NaN  MULTIPOINT ((-87.65661 41.97321))
1        18  41.696367  ...       NaN  MULTIPOINT ((-87.68136 41.69713))
2        22  41.868634  ...       NaN  MULTIPOINT ((-87.63918 41.86847))
3        23  41.877590  ...       new  MULTIPOINT ((-87.65495 41.87783))
4        27  41.737696  ...       NaN  MULTIPOINT ((-87.62715 41.73623))
[5 rows x 8 columns]
>>> groceries_w_communities = groceries.sjoin(chicago)
>>> groceries_w_communities[["OBJECTID", "community", "geometry"]].head()
   OBJECTID       community                           geometry
0        16          UPTOWN  MULTIPOINT ((-87.65661 41.97321))
1        18     MORGAN PARK  MULTIPOINT ((-87.68136 41.69713))
2        22  NEAR WEST SIDE  MULTIPOINT ((-87.63918 41.86847))
3        23  NEAR WEST SIDE  MULTIPOINT ((-87.65495 41.87783))
4        27         CHATHAM  MULTIPOINT ((-87.62715 41.73623))

Notes

Every operation in GeoPandas is planar, i.e. the potential third dimension is not taken into account.

See Also

GeoDataFrame.sjoin_nearest : nearest neighbor join sjoin : equivalent top-level function

sjoin_nearest(right: GeoDataFrame, how: Literal['left', 'right', 'inner'] = 'inner', max_distance: float | None = None, lsuffix: str = 'left', rsuffix: str = 'right', distance_col: str | None = None, exclusive: bool = False) GeoDataFrame

Spatial join of two GeoDataFrames based on the distance between their geometries.

Results will include multiple output records for a single input record where there are multiple equidistant nearest or intersected neighbors.

See the User Guide page https://geopandas.readthedocs.io/en/latest/docs/user_guide/mergingdata.html for more details.

Parameters

rightGeoDataFrame

howstring, default ‘inner’

The type of join:

  • ‘left’: use keys from left_df; retain only left_df geometry column

  • ‘right’: use keys from right_df; retain only right_df geometry column

  • ‘inner’: use intersection of keys from both dfs; retain only left_df geometry column

max_distancefloat, default None

Maximum distance within which to query for nearest geometry. Must be greater than 0. The max_distance used to search for nearest items in the tree may have a significant impact on performance by reducing the number of input geometries that are evaluated for nearest items in the tree.

lsuffixstring, default ‘left’

Suffix to apply to overlapping column names (left GeoDataFrame).

rsuffixstring, default ‘right’

Suffix to apply to overlapping column names (right GeoDataFrame).

distance_colstring, default None

If set, save the distances computed between matching geometries under a column of this name in the joined GeoDataFrame.

exclusivebool, optional, default False

If True, nearest geometries that are equal to the input geometry will not be returned.

Examples

>>> import geodatasets
>>> groceries = geopandas.read_file(
...     geodatasets.get_path("geoda.groceries")
... )
>>> chicago = geopandas.read_file(
...     geodatasets.get_path("geoda.chicago_health")
... ).to_crs(groceries.crs)
>>> chicago.head()
   ComAreaID  ...                                           geometry
0         35  ...  POLYGON ((-87.60914 41.84469, -87.60915 41.844...
1         36  ...  POLYGON ((-87.59215 41.81693, -87.59231 41.816...
2         37  ...  POLYGON ((-87.62880 41.80189, -87.62879 41.801...
3         38  ...  POLYGON ((-87.60671 41.81681, -87.60670 41.816...
4         39  ...  POLYGON ((-87.59215 41.81693, -87.59215 41.816...
[5 rows x 87 columns]
>>> groceries.head()
   OBJECTID     Ycoord  ...  Category                           geometry
0        16  41.973266  ...       NaN  MULTIPOINT ((-87.65661 41.97321))
1        18  41.696367  ...       NaN  MULTIPOINT ((-87.68136 41.69713))
2        22  41.868634  ...       NaN  MULTIPOINT ((-87.63918 41.86847))
3        23  41.877590  ...       new  MULTIPOINT ((-87.65495 41.87783))
4        27  41.737696  ...       NaN  MULTIPOINT ((-87.62715 41.73623))
[5 rows x 8 columns]
>>> groceries_w_communities = groceries.sjoin_nearest(chicago)
>>> groceries_w_communities[["Chain", "community", "geometry"]].head(2)
               Chain    community                                geometry
0     VIET HOA PLAZA       UPTOWN   MULTIPOINT ((1168268.672 1933554.35))
1  COUNTY FAIR FOODS  MORGAN PARK  MULTIPOINT ((1162302.618 1832900.224))

To include the distances:

>>> groceries_w_communities = groceries.sjoin_nearest(chicago, distance_col="distances")
>>> groceries_w_communities[["Chain", "community", "distances"]].head(2)
               Chain    community  distances
0     VIET HOA PLAZA       UPTOWN        0.0
1  COUNTY FAIR FOODS  MORGAN PARK        0.0

In the following example, we get multiple groceries for Uptown because all results are equidistant (in this case zero because they intersect). In fact, we get 4 results in total:

>>> chicago_w_groceries = groceries.sjoin_nearest(chicago, distance_col="distances", how="right")
>>> uptown_results = chicago_w_groceries[chicago_w_groceries["community"] == "UPTOWN"]
>>> uptown_results[["Chain", "community"]]
            Chain community
30  VIET HOA PLAZA    UPTOWN
30      JEWEL OSCO    UPTOWN
30          TARGET    UPTOWN
30       Mariano's    UPTOWN

See Also

GeoDataFrame.sjoin : binary predicate joins

sjoin_nearest : equivalent top-level function

Notes

Since this join relies on distances, results will be inaccurate if your geometries are in a geographic CRS.

Every operation in GeoPandas is planar, i.e. the potential third dimension is not taken into account.
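The geographic-CRS caveat above can be pictured with stdlib math alone (illustrative helpers, not part of this library): in EPSG:4326 a planar nearest join effectively measures distances in degrees, but a degree of longitude shrinks with latitude while a degree of latitude does not, so "nearest in degrees" is not "nearest on the ground".

```python
import math

def planar_degrees(lon1, lat1, lon2, lat2):
    # Naive planar distance in degrees -- what a nearest join effectively
    # measures if the geometries are left in a geographic CRS.
    return math.hypot(lon2 - lon1, lat2 - lat1)

def haversine_km(lon1, lat1, lon2, lat2):
    # Great-circle distance on a sphere with mean Earth radius ~6371 km.
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = phi2 - phi1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371.0088 * math.asin(math.sqrt(a))

# Both pairs are exactly 1 degree apart in planar terms...
d_lon = haversine_km(0, 60, 1, 60)   # ~55.6 km at 60°N
d_lat = haversine_km(0, 60, 0, 61)   # ~111.2 km
# ...but the real distances differ by a factor of two.
```

Reprojecting to a suitable projected CRS before calling sjoin_nearest avoids this distortion.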

clip(mask, keep_geom_type: bool = False, sort: bool = False) GeoDataFrame

Clip points, lines, or polygon geometries to the mask extent.

Both layers must be in the same Coordinate Reference System (CRS). The GeoDataFrame will be clipped to the full extent of the mask object.

If there are multiple polygons in mask, data from the GeoDataFrame will be clipped to the total boundary of all polygons in mask.

Parameters

mask : GeoDataFrame, GeoSeries, (Multi)Polygon, list-like

Polygon vector layer used to clip the GeoDataFrame. The mask’s geometry is dissolved into one geometric feature and intersected with GeoDataFrame. If the mask is list-like with four elements (minx, miny, maxx, maxy), clip will use a faster rectangle clipping (clip_by_rect()), possibly leading to slightly different results.

keep_geom_type : boolean, default False

If True, return only geometries of original type in case of intersection resulting in multiple geometry types or GeometryCollections. If False, return all resulting geometries (potentially mixed types).

sort : boolean, default False

If True, the order of rows in the clipped GeoDataFrame will be preserved at a small performance cost. If False, the order of rows in the clipped GeoDataFrame will be random.

Returns

GeoDataFrame

Vector data (points, lines, polygons) from the GeoDataFrame clipped to polygon boundary from mask.

See Also

clip : equivalent top-level function

Examples

Clip points (grocery stores) with polygons (the Near West Side community):

>>> import geodatasets
>>> chicago = geopandas.read_file(
...     geodatasets.get_path("geoda.chicago_health")
... )
>>> near_west_side = chicago[chicago["community"] == "NEAR WEST SIDE"]
>>> groceries = geopandas.read_file(
...     geodatasets.get_path("geoda.groceries")
... ).to_crs(chicago.crs)
>>> groceries.shape
(148, 8)
>>> nws_groceries = groceries.clip(near_west_side)
>>> nws_groceries.shape
(7, 8)
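The faster rectangle path mentioned under mask can be pictured with a pure-Python sketch (a hypothetical helper, not this library's implementation): for point geometries, clipping to a (minx, miny, maxx, maxy) tuple reduces to per-coordinate comparisons against the bounds, with no general intersection machinery needed.

```python
def clip_points_to_rect(points, bounds):
    # Keep only the (x, y) pairs that fall inside the rectangle.
    minx, miny, maxx, maxy = bounds
    return [
        (x, y)
        for x, y in points
        if minx <= x <= maxx and miny <= y <= maxy
    ]

# Hypothetical store coordinates; the third point lies east and south of the box.
stores = [(-87.66, 41.97), (-87.63, 41.87), (-87.55, 41.70)]
clipped = clip_points_to_rect(stores, (-87.70, 41.80, -87.60, 42.00))
```

For lines and polygons the rectangle path still has to split geometries at the box edges, which is why its results can differ slightly from a full polygon-mask clip.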
overlay(right: GeoDataFrame, how: Literal['intersection', 'union', 'identity', 'symmetric_difference', 'difference'] = 'intersection', keep_geom_type: bool | None = None, make_valid: bool = True)

Perform spatial overlay between GeoDataFrames.

Currently only supports data GeoDataFrames with uniform geometry types, i.e. containing only (Multi)Polygons, or only (Multi)Points, or a combination of (Multi)LineString and LinearRing shapes. Implements several methods that are all effectively subsets of the union.

See the User Guide page ../../user_guide/set_operations for details.

Parameters

right : GeoDataFrame

how : string

Method of spatial overlay: ‘intersection’, ‘union’, ‘identity’, ‘symmetric_difference’ or ‘difference’.

keep_geom_type : bool

If True, return only geometries of the same geometry type the GeoDataFrame has, if False, return all resulting geometries. Default is None, which will set keep_geom_type to True but warn upon dropping geometries.

make_valid : bool, default True

If True, any invalid input geometries are corrected with a call to make_valid(), if False, a ValueError is raised if any input geometries are invalid.

Returns

df : GeoDataFrame

GeoDataFrame with new set of polygons and attributes resulting from the overlay

Examples

>>> from shapely.geometry import Polygon
>>> polys1 = geopandas.GeoSeries([Polygon([(0,0), (2,0), (2,2), (0,2)]),
...                               Polygon([(2,2), (4,2), (4,4), (2,4)])])
>>> polys2 = geopandas.GeoSeries([Polygon([(1,1), (3,1), (3,3), (1,3)]),
...                               Polygon([(3,3), (5,3), (5,5), (3,5)])])
>>> df1 = geopandas.GeoDataFrame({'geometry': polys1, 'df1_data':[1,2]})
>>> df2 = geopandas.GeoDataFrame({'geometry': polys2, 'df2_data':[1,2]})
>>> df1.overlay(df2, how='union')
   df1_data  df2_data                                           geometry
0       1.0       1.0                POLYGON ((2 2, 2 1, 1 1, 1 2, 2 2))
1       2.0       1.0                POLYGON ((2 2, 2 3, 3 3, 3 2, 2 2))
2       2.0       2.0                POLYGON ((4 4, 4 3, 3 3, 3 4, 4 4))
3       1.0       NaN      POLYGON ((2 0, 0 0, 0 2, 1 2, 1 1, 2 1, 2 0))
4       2.0       NaN  MULTIPOLYGON (((3 4, 3 3, 2 3, 2 4, 3 4)), ((4...
5       NaN       1.0  MULTIPOLYGON (((2 3, 2 2, 1 2, 1 3, 2 3)), ((3...
6       NaN       2.0      POLYGON ((3 5, 5 5, 5 3, 4 3, 4 4, 3 4, 3 5))
>>> df1.overlay(df2, how='intersection')
   df1_data  df2_data                             geometry
0         1         1  POLYGON ((2 2, 2 1, 1 1, 1 2, 2 2))
1         2         1  POLYGON ((2 2, 2 3, 3 3, 3 2, 2 2))
2         2         2  POLYGON ((4 4, 4 3, 3 3, 3 4, 4 4))
>>> df1.overlay(df2, how='symmetric_difference')
   df1_data  df2_data                                           geometry
0       1.0       NaN      POLYGON ((2 0, 0 0, 0 2, 1 2, 1 1, 2 1, 2 0))
1       2.0       NaN  MULTIPOLYGON (((3 4, 3 3, 2 3, 2 4, 3 4)), ((4...
2       NaN       1.0  MULTIPOLYGON (((2 3, 2 2, 1 2, 1 3, 2 3)), ((3...
3       NaN       2.0      POLYGON ((3 5, 5 5, 5 3, 4 3, 4 4, 3 4, 3 5))
>>> df1.overlay(df2, how='difference')
                                            geometry  df1_data
0      POLYGON ((2 0, 0 0, 0 2, 1 2, 1 1, 2 1, 2 0))         1
1  MULTIPOLYGON (((3 4, 3 3, 2 3, 2 4, 3 4)), ((4...         2
>>> df1.overlay(df2, how='identity')
   df1_data  df2_data                                           geometry
0         1       1.0                POLYGON ((2 2, 2 1, 1 1, 1 2, 2 2))
1         2       1.0                POLYGON ((2 2, 2 3, 3 3, 3 2, 2 2))
2         2       2.0                POLYGON ((4 4, 4 3, 3 3, 3 4, 4 4))
3         1       NaN      POLYGON ((2 0, 0 0, 0 2, 1 2, 1 1, 2 1, 2 0))
4         2       NaN  MULTIPOLYGON (((3 4, 3 3, 2 3, 2 4, 3 4)), ((4...

See Also

GeoDataFrame.sjoin : spatial join

overlay : equivalent top-level function

Notes

Every operation in GeoPandas is planar, i.e. the potential third dimension is not taken into account.

class vibespatial.GeoSeries(data=None, index=None, crs: Any | None = None, **kwargs)

A Series object designed to store shapely geometry objects.

Parameters

data : array-like, dict, scalar value

The geometries to store in the GeoSeries.

index : array-like or Index

The index for the GeoSeries.

crs : value (optional)

Coordinate Reference System of the geometry objects. Can be anything accepted by pyproj.CRS.from_user_input(), such as an authority string (eg “EPSG:4326”) or a WKT string.

kwargs

Additional arguments passed to the Series constructor, e.g. name.

Examples

>>> from shapely.geometry import Point
>>> s = geopandas.GeoSeries([Point(1, 1), Point(2, 2), Point(3, 3)])
>>> s
0    POINT (1 1)
1    POINT (2 2)
2    POINT (3 3)
dtype: geometry
>>> s = geopandas.GeoSeries(
...     [Point(1, 1), Point(2, 2), Point(3, 3)], crs="EPSG:3857"
... )
>>> s.crs
<Projected CRS: EPSG:3857>
Name: WGS 84 / Pseudo-Mercator
Axis Info [cartesian]:
- X[east]: Easting (metre)
- Y[north]: Northing (metre)
Area of Use:
- name: World - 85°S to 85°N
- bounds: (-180.0, -85.06, 180.0, 85.06)
Coordinate Operation:
- name: Popular Visualisation Pseudo-Mercator
- method: Popular Visualisation Pseudo Mercator
Datum: World Geodetic System 1984
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich
>>> s = geopandas.GeoSeries(
...    [Point(1, 1), Point(2, 2), Point(3, 3)], index=["a", "b", "c"], crs=4326
... )
>>> s
a    POINT (1 1)
b    POINT (2 2)
c    POINT (3 3)
dtype: geometry
>>> s.crs
<Geographic 2D CRS: EPSG:4326>
Name: WGS 84
Axis Info [ellipsoidal]:
- Lat[north]: Geodetic latitude (degree)
- Lon[east]: Geodetic longitude (degree)
Area of Use:
- name: World.
- bounds: (-180.0, -90.0, 180.0, 90.0)
Datum: World Geodetic System 1984 ensemble
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich

See Also

GeoDataFrame pandas.Series

property geometry: GeoSeries
property x: pandas.Series

Return the x location of point geometries in a GeoSeries.

Returns

pandas.Series

Examples

>>> from shapely.geometry import Point
>>> s = geopandas.GeoSeries([Point(1, 1), Point(2, 2), Point(3, 3)])
>>> s.x
0    1.0
1    2.0
2    3.0
dtype: float64

See Also

GeoSeries.y GeoSeries.z

property y: pandas.Series

Return the y location of point geometries in a GeoSeries.

Returns

pandas.Series

Examples

>>> from shapely.geometry import Point
>>> s = geopandas.GeoSeries([Point(1, 1), Point(2, 2), Point(3, 3)])
>>> s.y
0    1.0
1    2.0
2    3.0
dtype: float64

See Also

GeoSeries.x GeoSeries.z GeoSeries.m

property z: pandas.Series

Return the z location of point geometries in a GeoSeries.

Returns

pandas.Series

Examples

>>> from shapely.geometry import Point
>>> s = geopandas.GeoSeries([Point(1, 1, 1), Point(2, 2, 2), Point(3, 3, 3)])
>>> s.z
0    1.0
1    2.0
2    3.0
dtype: float64

See Also

GeoSeries.x GeoSeries.y GeoSeries.m

property m: pandas.Series

Return the m coordinate of point geometries in a GeoSeries.

Requires Shapely >= 2.1.

Added in version 1.1.0.

Returns

pandas.Series

Examples

>>> from shapely.geometry import Point
>>> s = geopandas.GeoSeries.from_wkt(
...     [
...         "POINT M (2 3 5)",
...         "POINT M (1 2 3)",
...     ]
... )
>>> s
0    POINT M (2 3 5)
1    POINT M (1 2 3)
dtype: geometry
>>> s.m
0    5.0
1    3.0
dtype: float64

See Also

GeoSeries.x GeoSeries.y GeoSeries.z

classmethod from_file(filename: os.PathLike | IO, **kwargs) GeoSeries

Alternate constructor to create a GeoSeries from a file.

Can load a GeoSeries from a file from any format recognized by pyogrio. See http://pyogrio.readthedocs.io/ for details. From a file with attributes, only the geometry column is loaded. Note that to do that, GeoPandas first loads the whole GeoDataFrame.

Parameters

filename : str

File path or file handle to read from. Depending on which kwargs are included, the content of filename may vary. See pyogrio.read_dataframe() for usage details.

kwargs : keyword arguments

These arguments are passed to pyogrio.read_dataframe(), and can be used to access multi-layer data, data stored within archives (zip files), etc.

Examples

>>> import geodatasets
>>> path = geodatasets.get_path('nybb')
>>> s = geopandas.GeoSeries.from_file(path)
>>> s
0    MULTIPOLYGON (((970217.022 145643.332, 970227....
1    MULTIPOLYGON (((1029606.077 156073.814, 102957...
2    MULTIPOLYGON (((1021176.479 151374.797, 102100...
3    MULTIPOLYGON (((981219.056 188655.316, 980940....
4    MULTIPOLYGON (((1012821.806 229228.265, 101278...
Name: geometry, dtype: geometry

See Also

read_file : read file to GeoDataFrame

classmethod from_wkb(data, index=None, crs: Any | None = None, on_invalid='raise', **kwargs) GeoSeries

Alternate constructor to create a GeoSeries from a list or array of WKB objects.

Parameters

data : array-like or Series

Series, list or array of WKB objects

index : array-like or Index

The index for the GeoSeries.

crs : value, optional

Coordinate Reference System of the geometry objects. Can be anything accepted by pyproj.CRS.from_user_input(), such as an authority string (eg “EPSG:4326”) or a WKT string.

on_invalid : {“raise”, “warn”, “ignore”, “fix”}, default “raise”
  • raise: an exception will be raised if a WKB input geometry is invalid.

  • warn: a warning will be raised and invalid WKB geometries will be returned as None.

  • ignore: invalid WKB geometries will be returned as None without a warning.

  • fix: an effort is made to fix invalid input geometries (e.g. close unclosed rings). If this is not possible, they are returned as None without a warning. Requires GEOS >= 3.11 and shapely >= 2.1.

kwargs

Additional arguments passed to the Series constructor, e.g. name.

Returns

GeoSeries

See Also

GeoSeries.from_wkt

Examples

>>> wkbs = [
... (
...     b"\x01\x01\x00\x00\x00\x00\x00\x00\x00"
...     b"\x00\x00\xf0?\x00\x00\x00\x00\x00\x00\xf0?"
... ),
... (
...     b"\x01\x01\x00\x00\x00\x00\x00\x00\x00"
...     b"\x00\x00\x00@\x00\x00\x00\x00\x00\x00\x00@"
... ),
... (
...    b"\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00"
...    b"\x00\x08@\x00\x00\x00\x00\x00\x00\x08@"
... ),
... ]
>>> s = geopandas.GeoSeries.from_wkb(wkbs)
>>> s
0    POINT (1 1)
1    POINT (2 2)
2    POINT (3 3)
dtype: geometry
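The raw bytes in the example above follow the well-known binary layout: one byte-order flag, a uint32 geometry type code, then the coordinates as float64 values. A stdlib sketch packing POINT (1 1) reproduces the first byte string exactly:

```python
import struct

# WKB for a 2D point: byte-order flag (1 = little-endian),
# uint32 geometry type (1 = Point), then x and y as little-endian float64.
wkb_point = struct.pack("<BIdd", 1, 1, 1.0, 1.0)

# The same 21 bytes as the first entry in the example above.
expected = (
    b"\x01\x01\x00\x00\x00\x00\x00\x00\x00"
    b"\x00\x00\xf0?\x00\x00\x00\x00\x00\x00\xf0?"
)
```

Malformed inputs (for example, a truncated coordinate payload) are what the on_invalid parameter governs.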
classmethod from_wkt(data, index=None, crs: Any | None = None, on_invalid='raise', **kwargs) GeoSeries

Alternate constructor to create a GeoSeries from a list or array of WKT objects.

Parameters

data : array-like, Series

Series, list, or array of WKT objects

index : array-like or Index

The index for the GeoSeries.

crs : value, optional

Coordinate Reference System of the geometry objects. Can be anything accepted by pyproj.CRS.from_user_input(), such as an authority string (eg “EPSG:4326”) or a WKT string.

on_invalid : {“raise”, “warn”, “ignore”, “fix”}, default “raise”
  • raise: an exception will be raised if a WKT input geometry is invalid.

  • warn: a warning will be raised and invalid WKT geometries will be returned as None.

  • ignore: invalid WKT geometries will be returned as None without a warning.

  • fix: an effort is made to fix invalid input geometries (e.g. close unclosed rings). If this is not possible, they are returned as None without a warning. Requires GEOS >= 3.11 and shapely >= 2.1.

kwargs

Additional arguments passed to the Series constructor, e.g. name.

Returns

GeoSeries

See Also

GeoSeries.from_wkb

Examples

>>> wkts = [
... 'POINT (1 1)',
... 'POINT (2 2)',
... 'POINT (3 3)',
... ]
>>> s = geopandas.GeoSeries.from_wkt(wkts)
>>> s
0    POINT (1 1)
1    POINT (2 2)
2    POINT (3 3)
dtype: geometry
classmethod from_xy(x, y, z=None, index=None, crs=None, **kwargs) GeoSeries

Alternate constructor to create a GeoSeries of Point geometries from lists or arrays of x, y(, z) coordinates.

In case of geographic coordinates, it is assumed that longitude is captured by x coordinates and latitude by y.

Parameters

x, y, z : iterable

index : array-like or Index, optional

The index for the GeoSeries. If not given and all coordinate inputs are Series with an equal index, that index is used.

crs : value, optional

Coordinate Reference System of the geometry objects. Can be anything accepted by pyproj.CRS.from_user_input(), such as an authority string (eg “EPSG:4326”) or a WKT string.

**kwargs

Additional arguments passed to the Series constructor, e.g. name.

Returns

GeoSeries

See Also

GeoSeries.from_wkt points_from_xy

Examples

>>> x = [2.5, 5, -3.0]
>>> y = [0.5, 1, 1.5]
>>> s = geopandas.GeoSeries.from_xy(x, y, crs="EPSG:4326")
>>> s
0    POINT (2.5 0.5)
1    POINT (5 1)
2    POINT (-3 1.5)
dtype: geometry
classmethod from_arrow(arr, **kwargs) GeoSeries

Construct a GeoSeries from an Arrow array object with a GeoArrow extension type.

See https://geoarrow.org/ for details on the GeoArrow specification.

This function accepts any Arrow array object implementing the Arrow PyCapsule Protocol (i.e. having an __arrow_c_array__ method).

Added in version 1.0.

Parameters

arr : pyarrow.Array, Arrow array

Any array object implementing the Arrow PyCapsule Protocol (i.e. has an __arrow_c_array__ or __arrow_c_stream__ method). The type of the array should be one of the geoarrow geometry types.

**kwargs

Other parameters passed to the GeoSeries constructor.

Returns

GeoSeries

See Also

GeoSeries.to_arrow

Examples

>>> import geoarrow.pyarrow as ga
>>> array = ga.as_geoarrow(
... [None, "POLYGON ((0 0, 1 1, 0 1, 0 0))", "LINESTRING (0 0, -1 1, 0 -1)"])
>>> geoseries = geopandas.GeoSeries.from_arrow(array)
>>> geoseries
0                              None
1    POLYGON ((0 0, 1 1, 0 1, 0 0))
2      LINESTRING (0 0, -1 1, 0 -1)
dtype: geometry
to_file(filename: os.PathLike | IO, driver: str | None = None, index: bool | None = None, **kwargs)

Write the GeoSeries to a file.

By default, an ESRI shapefile is written, but any OGR data source supported by Pyogrio or Fiona can be written.

Parameters

filename : string

File path or file handle to write to. The path may specify a GDAL VSI scheme.

driver : string, default None

The OGR format driver used to write the vector file. If not specified, it attempts to infer it from the file extension. If no extension is specified, it saves ESRI Shapefile to a folder.

index : bool, default None

If True, write index into one or more columns (for MultiIndex). Default None writes the index into one or more columns only if the index is named, is a MultiIndex, or has a non-integer data type. If False, no index is written.

Added in version 0.7: Previously the index was not written.

mode : string, default ‘w’

The write mode, ‘w’ to overwrite the existing file and ‘a’ to append. Not all drivers support appending. The drivers that support appending are listed in fiona.supported_drivers or https://github.com/Toblerity/Fiona/blob/master/fiona/drvsupport.py

crs : pyproj.CRS, default None

If specified, the CRS is passed to Fiona to better control how the file is written. If None, GeoPandas will determine the crs based on the crs attribute of the object. The value can be anything accepted by pyproj.CRS.from_user_input(), such as an authority string (eg “EPSG:4326”) or a WKT string. The keyword is not supported for the “pyogrio” engine.

engine : str, “pyogrio” or “fiona”

The underlying library that is used to write the file. Currently, the supported options are “pyogrio” and “fiona”. Defaults to “pyogrio” if installed, otherwise tries “fiona”.

**kwargs

Keyword args to be passed to the engine, and can be used to write to multi-layer data, store data within archives (zip files), etc. In case of the “pyogrio” engine, the keyword arguments are passed to pyogrio.write_dataframe(). In case of the “fiona” engine, the keyword arguments are passed to fiona.open(). For more information on possible keywords, type: import pyogrio; help(pyogrio.write_dataframe).

See Also

GeoDataFrame.to_file : write GeoDataFrame to file

read_file : read file to GeoDataFrame

Examples

>>> s.to_file('series.shp')
>>> s.to_file('series.gpkg', driver='GPKG', layer='name1')
>>> s.to_file('series.geojson', driver='GeoJSON')
sort_index(*args, **kwargs)

Sort Series by index labels.

Returns a new Series sorted by label if inplace argument is False, otherwise updates the original series and returns None.

Parameters

axis : {0 or ‘index’}

Unused. Parameter needed for compatibility with DataFrame.

level : int, optional

If not None, sort on values in specified index level(s).

ascending : bool or list-like of bools, default True

Sort ascending vs. descending. When the index is a MultiIndex the sort direction can be controlled for each level individually.

inplace : bool, default False

If True, perform operation in-place.

kind : {‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, default ‘quicksort’

Choice of sorting algorithm. See also numpy.sort() for more information. ‘mergesort’ and ‘stable’ are the only stable algorithms. For DataFrames, this option is only applied when sorting on a single column or label.

na_position : {‘first’, ‘last’}, default ‘last’

If ‘first’ puts NaNs at the beginning, ‘last’ puts NaNs at the end. Not implemented for MultiIndex.

sort_remaining : bool, default True

If True and sorting by level and index is multilevel, sort by other levels too (in order) after sorting by specified level.

ignore_index : bool, default False

If True, the resulting axis will be labeled 0, 1, …, n - 1.

key : callable, optional

If not None, apply the key function to the index values before sorting. This is similar to the key argument in the builtin sorted() function, with the notable difference that this key function should be vectorized. It should expect an Index and return an Index of the same shape.

Returns

Series or None

The original Series sorted by the labels or None if inplace=True.

See Also

DataFrame.sort_index : Sort DataFrame by the index.

DataFrame.sort_values : Sort DataFrame by the value.

Series.sort_values : Sort Series by the value.

Examples

>>> s = pd.Series(["a", "b", "c", "d"], index=[3, 2, 1, 4])
>>> s.sort_index()
1    c
2    b
3    a
4    d
dtype: str

Sort Descending

>>> s.sort_index(ascending=False)
4    d
3    a
2    b
1    c
dtype: str

By default NaNs are put at the end, but use na_position to place them at the beginning

>>> s = pd.Series(["a", "b", "c", "d"], index=[3, 2, 1, np.nan])
>>> s.sort_index(na_position="first")
NaN     d
 1.0    c
 2.0    b
 3.0    a
dtype: str

Specify index level to sort

>>> arrays = [
...     np.array(["qux", "qux", "foo", "foo", "baz", "baz", "bar", "bar"]),
...     np.array(["two", "one", "two", "one", "two", "one", "two", "one"]),
... ]
>>> s = pd.Series([1, 2, 3, 4, 5, 6, 7, 8], index=arrays)
>>> s.sort_index(level=1)
bar  one    8
baz  one    6
foo  one    4
qux  one    2
bar  two    7
baz  two    5
foo  two    3
qux  two    1
dtype: int64

Does not sort by remaining levels when sorting by levels

>>> s.sort_index(level=1, sort_remaining=False)
qux  one    2
foo  one    4
baz  one    6
bar  one    8
qux  two    1
foo  two    3
baz  two    5
bar  two    7
dtype: int64

Apply a key function before sorting

>>> s = pd.Series([1, 2, 3, 4], index=["A", "b", "C", "d"])
>>> s.sort_index(key=lambda x: x.str.lower())
A    1
b    2
C    3
d    4
dtype: int64
take(*args, **kwargs)

Return the elements in the given positional indices along an axis.

This means that we are not indexing according to actual values in the index attribute of the object. We are indexing according to the actual position of the element in the object.

Parameters

indices : array-like

An array of ints indicating which positions to take.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

The axis on which to select elements. 0 means that we are selecting rows, 1 means that we are selecting columns. For Series this parameter is unused and defaults to 0.

**kwargs

For compatibility with numpy.take(). Has no effect on the output.

Returns

same type as caller

An array-like containing the elements taken from the object.

See Also

DataFrame.loc : Select a subset of a DataFrame by labels.

DataFrame.iloc : Select a subset of a DataFrame by positions.

numpy.take : Take elements from an array along an axis.

Examples

>>> df = pd.DataFrame(
...     [
...         ("falcon", "bird", 389.0),
...         ("parrot", "bird", 24.0),
...         ("lion", "mammal", 80.5),
...         ("monkey", "mammal", np.nan),
...     ],
...     columns=["name", "class", "max_speed"],
...     index=[0, 2, 3, 1],
... )
>>> df
     name   class  max_speed
0  falcon    bird      389.0
2  parrot    bird       24.0
3    lion  mammal       80.5
1  monkey  mammal        NaN

Take elements at positions 0 and 3 along the axis 0 (default).

Note how the actual indices selected (0 and 1) do not correspond to our selected indices 0 and 3. That’s because we are selecting the 0th and 3rd rows, not rows whose indices equal 0 and 3.

>>> df.take([0, 3])
     name   class  max_speed
0  falcon    bird      389.0
1  monkey  mammal        NaN

Take elements at indices 1 and 2 along the axis 1 (column selection).

>>> df.take([1, 2], axis=1)
    class  max_speed
0    bird      389.0
2    bird       24.0
3  mammal       80.5
1  mammal        NaN

We may take elements using negative integers for positive indices, starting from the end of the object, just like with Python lists.

>>> df.take([-1, -2])
     name   class  max_speed
1  monkey  mammal        NaN
3    lion  mammal       80.5
apply(func, convert_dtype: bool | None = None, args=(), **kwargs)

Invoke function on values of Series.

Can be ufunc (a NumPy function that applies to the entire Series) or a Python function that only works on single values.

Parameters

func : function

Python function or NumPy ufunc to apply.

args : tuple

Positional arguments passed to func after the series value.

by_row : False or “compat”, default “compat”

If "compat" and func is a callable, func will be passed each element of the Series, like Series.map. If func is a list or dict of callables, will first try to translate each func into pandas methods. If that doesn’t work, will try to call apply again with by_row="compat" and if that fails, will call apply again with by_row=False (backward compatible). If False, the func will be passed the whole Series at once.

by_row has no effect when func is a string.

Added in version 2.1.0.

**kwargs

Additional keyword arguments passed to func.

Returns

Series or DataFrame

If func returns a Series object the result will be a DataFrame.

See Also

Series.map : For element-wise operations.

Series.agg : Only perform aggregating type operations.

Series.transform : Only perform transforming type operations.

Notes

Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See gotchas.udf-mutation for more details.

Examples

Create a series with typical summer temperatures for each city.

>>> s = pd.Series([20, 21, 12], index=["London", "New York", "Helsinki"])
>>> s
London      20
New York    21
Helsinki    12
dtype: int64

Square the values by defining a function and passing it as an argument to apply().

>>> def square(x):
...     return x**2
>>> s.apply(square)
London      400
New York    441
Helsinki    144
dtype: int64

Square the values by passing an anonymous function as an argument to apply().

>>> s.apply(lambda x: x**2)
London      400
New York    441
Helsinki    144
dtype: int64

Define a custom function that needs additional positional arguments and pass these additional arguments using the args keyword.

>>> def subtract_custom_value(x, custom_value):
...     return x - custom_value
>>> s.apply(subtract_custom_value, args=(5,))
London      15
New York    16
Helsinki     7
dtype: int64

Define a custom function that takes keyword arguments and pass these arguments to apply.

>>> def add_custom_values(x, **kwargs):
...     for month in kwargs:
...         x += kwargs[month]
...     return x
>>> s.apply(add_custom_values, june=30, july=20, august=25)
London      95
New York    96
Helsinki    87
dtype: int64

Use a function from the Numpy library.

>>> s.apply(np.log)
London      2.995732
New York    3.044522
Helsinki    2.484907
dtype: float64
isna() pandas.Series

Detect missing values.

Historically, NA values in a GeoSeries could be represented by empty geometric objects, in addition to standard representations such as None and np.nan. This behaviour is changed in version 0.6.0, and now only actual missing values return True. To detect empty geometries, use GeoSeries.is_empty instead.

Returns

A boolean pandas Series of the same size as the GeoSeries, True where a value is NA.

Examples

>>> from shapely.geometry import Polygon
>>> s = geopandas.GeoSeries(
...     [Polygon([(0, 0), (1, 1), (0, 1)]), None, Polygon([])]
... )
>>> s
0    POLYGON ((0 0, 1 1, 0 1, 0 0))
1                              None
2                     POLYGON EMPTY
dtype: geometry
>>> s.isna()
0    False
1     True
2    False
dtype: bool

See Also

GeoSeries.notna : inverse of isna

GeoSeries.is_empty : detect empty geometries

isnull() pandas.Series

Alias for isna method. See isna for more detail.

notna() pandas.Series

Detect non-missing values.

Historically, NA values in a GeoSeries could be represented by empty geometric objects, in addition to standard representations such as None and np.nan. This behaviour is changed in version 0.6.0, and now only actual missing values return False. To detect empty geometries, use ~GeoSeries.is_empty instead.

Returns

A boolean pandas Series of the same size as the GeoSeries, False where a value is NA.

Examples

>>> from shapely.geometry import Polygon
>>> s = geopandas.GeoSeries(
...     [Polygon([(0, 0), (1, 1), (0, 1)]), None, Polygon([])]
... )
>>> s
0    POLYGON ((0 0, 1 1, 0 1, 0 0))
1                              None
2                     POLYGON EMPTY
dtype: geometry
>>> s.notna()
0     True
1    False
2     True
dtype: bool

See Also

GeoSeries.isna : inverse of notna

GeoSeries.is_empty : detect empty geometries

notnull() pandas.Series

Alias for notna method. See notna for more detail.

fillna(value=None, inplace: bool = False, limit=None, **kwargs)

Fill NA values with geometry (or geometries).

Parameters

value : shapely geometry or GeoSeries, default None

If None is passed, NA values will be filled with GEOMETRYCOLLECTION EMPTY. If a shapely geometry object is passed, it will be used to fill all missing values. If a GeoSeries or GeometryArray are passed, missing values will be filled based on the corresponding index locations. If pd.NA or np.nan are passed, values will be filled with None (not GEOMETRYCOLLECTION EMPTY).

limit : int, default None

This is the maximum number of entries along the entire axis where NaNs will be filled. Must be greater than 0 if not None.

Returns

GeoSeries

Examples

>>> from shapely.geometry import Polygon
>>> s = geopandas.GeoSeries(
...     [
...         Polygon([(0, 0), (1, 1), (0, 1)]),
...         None,
...         Polygon([(0, 0), (-1, 1), (0, -1)]),
...     ]
... )
>>> s
0      POLYGON ((0 0, 1 1, 0 1, 0 0))
1                                None
2    POLYGON ((0 0, -1 1, 0 -1, 0 0))
dtype: geometry

Filled with an empty polygon.

>>> s.fillna()
0      POLYGON ((0 0, 1 1, 0 1, 0 0))
1            GEOMETRYCOLLECTION EMPTY
2    POLYGON ((0 0, -1 1, 0 -1, 0 0))
dtype: geometry

Filled with a specific polygon.

>>> s.fillna(Polygon([(0, 1), (2, 1), (1, 2)]))
0      POLYGON ((0 0, 1 1, 0 1, 0 0))
1      POLYGON ((0 1, 2 1, 1 2, 0 1))
2    POLYGON ((0 0, -1 1, 0 -1, 0 0))
dtype: geometry

Filled with another GeoSeries.

>>> from shapely.geometry import Point
>>> s_fill = geopandas.GeoSeries(
...     [
...         Point(0, 0),
...         Point(1, 1),
...         Point(2, 2),
...     ]
... )
>>> s.fillna(s_fill)
0      POLYGON ((0 0, 1 1, 0 1, 0 0))
1                         POINT (1 1)
2    POLYGON ((0 0, -1 1, 0 -1, 0 0))
dtype: geometry

See Also

GeoSeries.isna : detect missing values

plot(*args, **kwargs)
explore(*args, **kwargs)

Explore with an interactive map based on folium/leaflet.js.

explode(ignore_index=False, index_parts=False) GeoSeries

Explode multi-part geometries into multiple single geometries.

Single rows can become multiple rows. This is analogous to PostGIS’s ST_Dump(). The ‘path’ index is the second level of the returned MultiIndex.

Parameters

ignore_index : bool, default False

If True, the resulting index will be labelled 0, 1, …, n - 1, ignoring index_parts.

index_parts : boolean, default False

If True, the resulting index will be a multi-index (original index with an additional level indicating the multiple geometries: a new zero-based index for each single part geometry per multi-part geometry).

Returns

A GeoSeries with a MultiIndex. The levels of the MultiIndex are the original index and a zero-based integer index that counts the number of single geometries within a multi-part geometry.

Examples

>>> from shapely.geometry import MultiPoint
>>> s = geopandas.GeoSeries(
...     [MultiPoint([(0, 0), (1, 1)]), MultiPoint([(2, 2), (3, 3), (4, 4)])]
... )
>>> s
0           MULTIPOINT ((0 0), (1 1))
1    MULTIPOINT ((2 2), (3 3), (4 4))
dtype: geometry
>>> s.explode(index_parts=True)
0  0    POINT (0 0)
   1    POINT (1 1)
1  0    POINT (2 2)
   1    POINT (3 3)
   2    POINT (4 4)
dtype: geometry

See Also

GeoDataFrame.explode
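The two-level index that index_parts=True produces can be sketched in pure Python (hypothetical data, not this library's implementation): each part is paired with its original row label plus a zero-based part counter, exactly as in the doctest output above.

```python
# Each row holds a multi-part geometry, represented here as a list of parts.
rows = {
    0: ["POINT (0 0)", "POINT (1 1)"],
    1: ["POINT (2 2)", "POINT (3 3)", "POINT (4 4)"],
}

# index_parts=True: pair each single part with (original_label, part_number).
exploded = [
    ((label, part), geom)
    for label, parts in rows.items()
    for part, geom in enumerate(parts)
]
```

With ignore_index=True that compound key would instead be discarded and replaced by a fresh 0..n-1 range.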

set_crs(crs: Any | None = None, epsg: int | None = None, inplace: bool = False, allow_override: bool = False)

Set the Coordinate Reference System (CRS) of a GeoSeries.

Pass None to remove CRS from the GeoSeries.

Notes

The underlying geometries are not transformed to this CRS. To transform the geometries to a new CRS, use the to_crs method.

Parameters

crs : pyproj.CRS | None, optional

The value can be anything accepted by pyproj.CRS.from_user_input(), such as an authority string (eg “EPSG:4326”) or a WKT string.

epsg : int, optional if crs is specified

EPSG code specifying the projection.

inplace : bool, default False

If True, the CRS of the GeoSeries will be changed in place (while still returning the result) instead of making a copy of the GeoSeries.

allow_override : bool, default False

If the GeoSeries already has a CRS, allow replacing the existing CRS, even when the two are not equal.

Returns

GeoSeries

Examples

>>> from shapely.geometry import Point
>>> s = geopandas.GeoSeries([Point(1, 1), Point(2, 2), Point(3, 3)])
>>> s
0    POINT (1 1)
1    POINT (2 2)
2    POINT (3 3)
dtype: geometry

Setting CRS to a GeoSeries without one:

>>> s.crs is None
True
>>> s = s.set_crs('epsg:3857')
>>> s.crs
<Projected CRS: EPSG:3857>
Name: WGS 84 / Pseudo-Mercator
Axis Info [cartesian]:
- X[east]: Easting (metre)
- Y[north]: Northing (metre)
Area of Use:
- name: World - 85°S to 85°N
- bounds: (-180.0, -85.06, 180.0, 85.06)
Coordinate Operation:
- name: Popular Visualisation Pseudo-Mercator
- method: Popular Visualisation Pseudo Mercator
Datum: World Geodetic System 1984
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich

Overriding existing CRS:

>>> s = s.set_crs(4326, allow_override=True)

Without allow_override=True, set_crs raises an error if you try to override the CRS.

See Also

GeoSeries.to_crs : re-project to another CRS

to_crs(crs: Any | None = None, epsg: int | None = None) GeoSeries

Return a GeoSeries with all geometries transformed to a new coordinate reference system.

Transform all geometries in a GeoSeries to a different coordinate reference system. The crs attribute on the current GeoSeries must be set. Either crs or epsg may be specified for output.

This method will transform all points in all objects. It has no notion of projecting entire geometries. All segments joining points are assumed to be lines in the current projection, not geodesics. Objects crossing the dateline (or other projection boundary) will have undesirable behavior.

Parameters

crs : pyproj.CRS, optional if epsg is specified

The value can be anything accepted by pyproj.CRS.from_user_input(), such as an authority string (eg “EPSG:4326”) or a WKT string.

epsg : int, optional if crs is specified

EPSG code specifying output projection.

Returns

GeoSeries

Examples

>>> from shapely.geometry import Point
>>> s = geopandas.GeoSeries([Point(1, 1), Point(2, 2), Point(3, 3)], crs=4326)
>>> s
0    POINT (1 1)
1    POINT (2 2)
2    POINT (3 3)
dtype: geometry
>>> s.crs
<Geographic 2D CRS: EPSG:4326>
Name: WGS 84
Axis Info [ellipsoidal]:
- Lat[north]: Geodetic latitude (degree)
- Lon[east]: Geodetic longitude (degree)
Area of Use:
- name: World
- bounds: (-180.0, -90.0, 180.0, 90.0)
Datum: World Geodetic System 1984
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich
>>> s = s.to_crs(3857)
>>> s
0    POINT (111319.491 111325.143)
1    POINT (222638.982 222684.209)
2    POINT (333958.472 334111.171)
dtype: geometry
>>> s.crs
<Projected CRS: EPSG:3857>
Name: WGS 84 / Pseudo-Mercator
Axis Info [cartesian]:
- X[east]: Easting (metre)
- Y[north]: Northing (metre)
Area of Use:
- name: World - 85°S to 85°N
- bounds: (-180.0, -85.06, 180.0, 85.06)
Coordinate Operation:
- name: Popular Visualisation Pseudo-Mercator
- method: Popular Visualisation Pseudo Mercator
Datum: World Geodetic System 1984
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich

See Also

GeoSeries.set_crs : assign CRS

estimate_utm_crs(datum_name: str = 'WGS 84')

Return the estimated UTM CRS based on the bounds of the dataset.

Added in version 0.9.

Parameters

datum_name : str, optional

The name of the datum to use in the query. Default is WGS 84.

Returns

pyproj.CRS

Examples

>>> import geodatasets
>>> df = geopandas.read_file(
...     geodatasets.get_path("geoda.chicago_health")
... )
>>> df.geometry.estimate_utm_crs()
<Derived Projected CRS: EPSG:32616>
Name: WGS 84 / UTM zone 16N
Axis Info [cartesian]:
- E[east]: Easting (metre)
- N[north]: Northing (metre)
Area of Use:
- name: Between 90°W and 84°W, northern hemisphere between equator and 84°N, ...
- bounds: (-90.0, 0.0, -84.0, 84.0)
Coordinate Operation:
- name: UTM zone 16N
- method: Transverse Mercator
Datum: World Geodetic System 1984 ensemble
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich
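The EPSG code returned above can be sanity-checked from the data's location: UTM zones are 6° of longitude wide, and WGS 84 UTM codes are 32600 + zone in the northern hemisphere or 32700 + zone in the southern. A simplified sketch that ignores the special zones around Norway and Svalbard:

```python
def utm_epsg(lon, lat):
    """Estimate the WGS 84 UTM EPSG code for a lon/lat point.

    Simplified: ignores the widened zone 32V and the X-band
    exceptions around Svalbard.
    """
    zone = int((lon + 180) // 6) + 1
    return (32600 if lat >= 0 else 32700) + zone

# Chicago sits near (-87.6, 41.8), which falls in UTM zone 16N.
print(utm_epsg(-87.6, 41.8))  # 32616
```

This matches the EPSG:32616 result in the example above.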

to_json(show_bbox: bool = True, drop_id: bool = False, to_wgs84: bool = False, **kwargs) str

Return a GeoJSON string representation of the GeoSeries.

Parameters

show_bbox : bool, optional, default: True

Include bbox (bounds) in the geojson

drop_id : bool, default: False

If True, the index of the GeoSeries will not be written as the id property of each feature in the generated GeoJSON. Useful when the index is just arbitrary row numbers.

to_wgs84 : bool, optional, default: False

If True and a CRS is set on the GeoSeries, re-project the output to WGS 84 (EPSG:4326) to meet the 2016 GeoJSON specification. If False (the default), coordinates are exported as-is and the CRS is ignored.

**kwargs

Additional keyword args will be passed to json.dumps().

Returns

JSON string

Examples

>>> from shapely.geometry import Point
>>> s = geopandas.GeoSeries([Point(1, 1), Point(2, 2), Point(3, 3)])
>>> s
0    POINT (1 1)
1    POINT (2 2)
2    POINT (3 3)
dtype: geometry
>>> s.to_json()
'{"type": "FeatureCollection", "features": [{"id": "0", "type": "Feature", "properties": {}, "geometry": {"type": "Point", "coordinates": [1.0, 1.0]}, "bbox": [1.0, 1.0, 1.0, 1.0]}, {"id": "1", "type": "Feature", "properties": {}, "geometry": {"type": "Point", "coordinates": [2.0, 2.0]}, "bbox": [2.0, 2.0, 2.0, 2.0]}, {"id": "2", "type": "Feature", "properties": {}, "geometry": {"type": "Point", "coordinates": [3.0, 3.0]}, "bbox": [3.0, 3.0, 3.0, 3.0]}], "bbox": [1.0, 1.0, 3.0, 3.0]}'
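Because the output is plain GeoJSON, it can be inspected with only the standard library; parsing the exact string from the example above shows the feature and bbox structure:

```python
import json

# The string produced by s.to_json() in the example above.
geojson = (
    '{"type": "FeatureCollection", "features": [{"id": "0", "type": "Feature", '
    '"properties": {}, "geometry": {"type": "Point", "coordinates": [1.0, 1.0]}, '
    '"bbox": [1.0, 1.0, 1.0, 1.0]}, {"id": "1", "type": "Feature", '
    '"properties": {}, "geometry": {"type": "Point", "coordinates": [2.0, 2.0]}, '
    '"bbox": [2.0, 2.0, 2.0, 2.0]}, {"id": "2", "type": "Feature", '
    '"properties": {}, "geometry": {"type": "Point", "coordinates": [3.0, 3.0]}, '
    '"bbox": [3.0, 3.0, 3.0, 3.0]}], "bbox": [1.0, 1.0, 3.0, 3.0]}'
)

fc = json.loads(geojson)
print(fc["type"])           # FeatureCollection
print(len(fc["features"]))  # 3
print(fc["bbox"])           # [1.0, 1.0, 3.0, 3.0]
```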

See Also

GeoSeries.to_file : write GeoSeries to file

to_wkb(hex: bool = False, **kwargs) pandas.Series

Convert GeoSeries geometries to WKB.

Parameters

hex : bool, default False

If True, export the WKB as a hexadecimal string. The default is to return a binary bytes object.

kwargs

Additional keyword args will be passed to shapely.to_wkb().

Returns

Series

WKB representations of the geometries

See Also

GeoSeries.to_wkt

Examples

>>> from shapely.geometry import Point, Polygon
>>> s = geopandas.GeoSeries(
...     [
...         Point(0, 0),
...         Polygon(),
...         Polygon([(0, 0), (1, 1), (1, 0)]),
...         None,
...     ]
... )
>>> s.to_wkb()
0    b'\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00...
1              b'\x01\x03\x00\x00\x00\x00\x00\x00\x00'
2    b'\x01\x03\x00\x00\x00\x01\x00\x00\x00\x04\x00...
3                                                 None
dtype: object
>>> s.to_wkb(hex=True)
0           010100000000000000000000000000000000000000
1                                   010300000000000000
2    0103000000010000000400000000000000000000000000...
3                                                  NaN
dtype: str
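The hex output above is standard little-endian WKB: one byte-order byte, a uint32 geometry type code, then coordinate doubles. The first value (POINT (0 0)) can be decoded with only the standard library:

```python
import struct

# Hex WKB for POINT (0 0), from the to_wkb(hex=True) example above.
wkb_hex = "010100000000000000000000000000000000000000"
buf = bytes.fromhex(wkb_hex)

byte_order = buf[0]                              # 1 = little-endian
geom_type = struct.unpack_from("<I", buf, 1)[0]  # 1 = Point
x, y = struct.unpack_from("<2d", buf, 5)         # coordinate doubles

print(byte_order, geom_type, x, y)  # 1 1 0.0 0.0
```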

to_wkt(**kwargs) pandas.Series

Convert GeoSeries geometries to WKT.

Parameters

kwargs

Keyword args will be passed to shapely.to_wkt().

Returns

Series

WKT representations of the geometries

Examples

>>> from shapely.geometry import Point
>>> s = geopandas.GeoSeries([Point(1, 1), Point(2, 2), Point(3, 3)])
>>> s
0    POINT (1 1)
1    POINT (2 2)
2    POINT (3 3)
dtype: geometry
>>> s.to_wkt()
0    POINT (1 1)
1    POINT (2 2)
2    POINT (3 3)
dtype: str

See Also

GeoSeries.to_wkb

to_arrow(geometry_encoding='WKB', interleaved=True, include_z=None)

Encode a GeoSeries to GeoArrow format.

See https://geoarrow.org/ for details on the GeoArrow specification.

This function returns a generic Arrow array object implementing the Arrow PyCapsule Protocol (i.e. having an __arrow_c_array__ method). This object can then be consumed by your Arrow implementation of choice that supports this protocol.

Added in version 1.0.

Parameters

geometry_encoding : {‘WKB’, ‘geoarrow’}, default ‘WKB’

The GeoArrow encoding to use for the data conversion.

interleaved : bool, default True

Only relevant for ‘geoarrow’ encoding. If True, the geometries’ coordinates are interleaved in a single fixed size list array. If False, the coordinates are stored as separate arrays in a struct type.

include_z : bool, default None

Only relevant for ‘geoarrow’ encoding (for WKB, the dimensionality of the individual geometries is preserved). If False, return 2D geometries. If True, include the third dimension in the output (if a geometry has no third dimension, the z-coordinates will be NaN). By default, will infer the dimensionality from the input geometries. Note that this inference can be unreliable with empty geometries (for a guaranteed result, it is recommended to specify the keyword).

Returns

GeoArrowArray

A generic Arrow array object with geometry data encoded to GeoArrow.

Examples

>>> from shapely.geometry import Point
>>> gser = geopandas.GeoSeries([Point(1, 2), Point(2, 1)])
>>> gser
0    POINT (1 2)
1    POINT (2 1)
dtype: geometry
>>> arrow_array = gser.to_arrow()
>>> arrow_array
<geopandas.io._geoarrow.GeoArrowArray object at ...>

The returned array object needs to be consumed by a library implementing the Arrow PyCapsule Protocol. For example, wrapping the data as a pyarrow.Array (requires pyarrow >= 14.0):

>>> import pyarrow as pa
>>> array = pa.array(arrow_array)
>>> array
GeometryExtensionArray:WkbType(geoarrow.wkb)[2]
<POINT (1 2)>
<POINT (2 1)>
clip(mask, keep_geom_type: bool = False, sort=False) GeoSeries

Clip points, lines, or polygon geometries to the mask extent.

Both layers must be in the same Coordinate Reference System (CRS). The GeoSeries will be clipped to the full extent of the mask object.

If there are multiple polygons in mask, data from the GeoSeries will be clipped to the total boundary of all polygons in mask.

Parameters

mask : GeoDataFrame, GeoSeries, (Multi)Polygon, list-like

Polygon vector layer used to clip gdf. The mask’s geometry is dissolved into one geometric feature and intersected with GeoSeries. If the mask is list-like with four elements (minx, miny, maxx, maxy), clip will use a faster rectangle clipping (clip_by_rect()), possibly leading to slightly different results.

keep_geom_type : bool, default False

If True, return only geometries of original type in case of intersection resulting in multiple geometry types or GeometryCollections. If False, return all resulting geometries (potentially mixed-types).

sort : bool, default False

If True, the order of rows in the clipped GeoSeries will be preserved at a small performance cost. If False, the order of rows in the clipped GeoSeries will be random.

Returns

GeoSeries

Vector data (points, lines, polygons) from gdf clipped to polygon boundary from mask.

See Also

clip : top-level function for clip

Examples

Clip points (grocery stores) with polygons (the Near West Side community):

>>> import geodatasets
>>> chicago = geopandas.read_file(
...     geodatasets.get_path("geoda.chicago_health")
... )
>>> near_west_side = chicago[chicago["community"] == "NEAR WEST SIDE"]
>>> groceries = geopandas.read_file(
...     geodatasets.get_path("geoda.groceries")
... ).to_crs(chicago.crs)
>>> groceries.shape
(148, 8)
>>> nws_groceries = groceries.geometry.clip(near_west_side)
>>> nws_groceries.shape
(7,)
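When mask is list-like with four elements, clip reduces to a rectangle test against (minx, miny, maxx, maxy); for point geometries that is just a bounds check. A conceptual sketch of that fast path:

```python
def in_rect(pt, rect):
    """True if point (x, y) falls within rect = (minx, miny, maxx, maxy)."""
    x, y = pt
    minx, miny, maxx, maxy = rect
    return minx <= x <= maxx and miny <= y <= maxy

points = [(0.5, 0.5), (2.0, 2.0), (-1.0, 0.0)]
rect = (0.0, 0.0, 1.0, 1.0)

clipped = [p for p in points if in_rect(p, rect)]
print(clipped)  # [(0.5, 0.5)]
```

Lines and polygons additionally need their segments cut at the rectangle boundary, which is what clip_by_rect() implements.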

vibespatial.list_layers(filename) pandas.DataFrame

List layers available in a file.

Provides an overview of layers available in a file or URL together with their geometry types. When supported by the data source, this includes both spatial and non-spatial layers. Non-spatial layers are indicated by the "geometry_type" column being None. GeoPandas will not read such layers but they can be read into a pd.DataFrame using pyogrio.read_dataframe().

Parameters

filename : str, path object or file-like object

Either the absolute or relative path to the file or URL to be opened, or any object with a read() method (such as an open file or StringIO)

Returns

pandas.DataFrame

A DataFrame with columns “name” and “geometry_type” and one row per layer.

vibespatial.points_from_xy(x: numpy.typing.ArrayLike, y: numpy.typing.ArrayLike, z: numpy.typing.ArrayLike = None, crs: Any | None = None) GeometryArray

Generate GeometryArray of shapely Point geometries from x, y(, z) coordinates.

In case of geographic coordinates, it is assumed that longitude is captured by x coordinates and latitude by y.

Parameters

x, y, z : iterable

crs : value, optional

Coordinate Reference System of the geometry objects. Can be anything accepted by pyproj.CRS.from_user_input(), such as an authority string (eg “EPSG:4326”) or a WKT string.

Examples

>>> import pandas as pd
>>> df = pd.DataFrame({'x': [0, 1, 2], 'y': [0, 1, 2], 'z': [0, 1, 2]})
>>> df
   x  y  z
0  0  0  0
1  1  1  1
2  2  2  2
>>> geometry = geopandas.points_from_xy(x=[1, 0], y=[0, 1])
>>> geometry = geopandas.points_from_xy(df['x'], df['y'], df['z'])
>>> gdf = geopandas.GeoDataFrame(
...     df, geometry=geopandas.points_from_xy(df['x'], df['y']))

Having geographic coordinates:

>>> df = pd.DataFrame({'longitude': [-140, 0, 123], 'latitude': [-65, 1, 48]})
>>> df
   longitude  latitude
0       -140       -65
1          0         1
2        123        48
>>> geometry = geopandas.points_from_xy(df.longitude, df.latitude, crs="EPSG:4326")

Returns

output : GeometryArray

vibespatial.read_feather(path, columns=None, to_pandas_kwargs=None, **kwargs)

Load a Feather object from the file path, returning a GeoDataFrame.

You can read a subset of columns in the file using the columns parameter. However, the structure of the returned GeoDataFrame will depend on which columns you read:

  • if no geometry columns are read, this will raise a ValueError - you should use the pandas read_feather method instead.

  • if the primary geometry column saved to this file is not included in columns, the first available geometry column will be set as the geometry column of the returned GeoDataFrame.

Supports versions 0.1.0, 0.4.0, 1.0.0 and 1.1.0 of the GeoParquet specification at: https://github.com/opengeospatial/geoparquet

If the ‘crs’ key is not present in the Feather metadata, the CRS will default to “OGC:CRS84” according to the specification.

Requires ‘pyarrow’ >= 0.17.

Added in version 0.8.

Parameters

path : str, path object or file-like object

String, path object (implementing os.PathLike[str]) or file-like object implementing a binary read() function.

columns : list-like of strings, default None

If not None, only these columns will be read from the file. If the primary geometry column is not included, the first secondary geometry read from the file will be set as the geometry column of the returned GeoDataFrame. If no geometry columns are present, a ValueError will be raised.

to_pandas_kwargs : dict, optional

Arguments passed to the pa.Table.to_pandas method for non-geometry columns. This can be used to control the behavior of the conversion of the non-geometry columns to a pandas DataFrame. For example, you can use this to control the dtype conversion of the columns. By default, the to_pandas method is called with no additional arguments.

**kwargs

Any additional kwargs passed to pyarrow.feather.read_table().

Returns

GeoDataFrame

Examples

>>> df = geopandas.read_feather("data.feather")

Specifying columns to read:

>>> df = geopandas.read_feather(
...     "data.feather",
...     columns=["geometry", "pop_est"]
... )

See the read_parquet docs for examples of reading and writing to/from bytes objects.

vibespatial.read_file(filename, bbox=None, mask=None, columns=None, rows=None, engine=None, *, target_crs: str | None = None, build_index: bool = False, **kwargs)

Read a spatial file into a GeoDataFrame.

Supports GeoParquet, Feather/Arrow, Shapefile, GeoPackage, File Geodatabase, FlatGeobuf, GeoJSON, GeoJSON-Seq, GML, GPX, TopoJSON, WKT, CSV, KML, OSM PBF, and any format readable by pyogrio/fiona.

GPU acceleration is automatic for GeoJSON, Shapefile, FlatGeobuf, WKT, CSV, KML, and OSM PBF formats. GeoJSON and Shapefile auto-routing now optimize for pipeline shape rather than isolated read latency: eligible unfiltered reads prefer the repo-owned native ingest path so downstream GPU work does not immediately pay a host-to-device promotion. FlatGeobuf now follows the same policy for eligible local unfiltered reads, using the repo-owned direct FlatBuffer decoder by default. CSV and KML now try the repo-owned GPU parser for eligible local unfiltered reads instead of demoting solely because of a static file-size gate. WKT and full-data OSM PBF reads use the native GPU path. Standard OSM layers (points, lines, multipolygons) may use the pyogrio compatibility path when the native all-data parser is not required.

mask now also stays on the shared native Arrow/WKB boundary for the promoted pyogrio-backed vector containers when the request shape stays compatible. bbox, columns, and rows continue to work on that same boundary. Explicit engine="pyogrio" stays on the repo-owned native boundary for GeoJSON, Shapefile, and the promoted vector containers whose public semantics already match that boundary. Public automatic Shapefile reads prefer the direct SHP pipeline, while explicit engine="pyogrio" Shapefile reads stay on the shared Arrow/WKB bridge.

Aliased as vibespatial.read_file().

Parameters

filename : str or Path

Path to the vector file.

bbox : tuple of (minx, miny, maxx, maxy), optional

Spatial filter bounding box. Disables the GPU fast path.

mask : Geometry or GeoDataFrame, optional

Spatial filter mask geometry. Promoted pyogrio-backed vector containers keep this on the shared native Arrow/WKB boundary when the request shape is compatible; other formats still use the compatibility path.

columns : list of str, optional

Subset of columns to read. Disables the GPU fast path.

rows : int or slice, optional

Subset of rows to read. Disables the GPU fast path.

engine : str, optional

Force a specific I/O engine ("pyogrio" or "fiona"). Disables GPU auto-routing.

target_crs : str, optional

Target CRS to reproject coordinates into (e.g. "EPSG:3857"). When the GPU path is used, the reprojection is fused with ingest via vibeProj GPU transform (no separate pass required). When the CPU path is used, the result is reprojected via gdf.to_crs() as a post-read step. For formats without an embedded CRS (WKT, CSV, KML, OSM PBF), the target CRS is set as a label without reprojection.

build_index : bool, default False

When True and the GPU path is used, build a GPU-resident packed Hilbert R-tree spatial index fused with ingest. The index is accessible via the GeoDataFrame.gpu_spatial_index property.

**kwargs

Passed through to the underlying engine. For OSM PBF GPU reads, the repo-owned path also accepts:

  • tags: True, False, or "ways" to control tag decode

  • geometry_only: skip tag and ID export for geometry-only reads

  • layer: "points", "lines", "multipolygons", "ways", "relations", "multilinestrings", "other_relations", or "all"

Returns

GeoDataFrame

vibespatial.read_parquet(path, *, columns=None, storage_options=None, bbox=None, to_pandas_kwargs=None, **kwargs)

Read a GeoParquet file into a GeoDataFrame.

When PyArrow is available the reader plans row-group selection from spatial metadata, keeps the table columnar through scan/decode, and only materializes a GeoDataFrame at the terminal public read boundary.

Aliased as vibespatial.read_parquet().

Parameters

path : str or Path

Path to the GeoParquet file.

columns : list of str, optional

Subset of columns to read.

storage_options : dict, optional

Storage options for fsspec-compatible filesystems.

bbox : tuple of (minx, miny, maxx, maxy), optional

Spatial filter bounding box for row-group pruning.

to_pandas_kwargs : dict, optional

Extra keyword arguments passed to pyarrow.Table.to_pandas().

**kwargs

Passed through to the underlying Parquet reader.

Returns

GeoDataFrame
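The bbox-driven row-group pruning can be pictured as a simple intersection test over per-row-group bounds recorded in the file's metadata. A sketch with made-up bounds (the real planner reads them from the GeoParquet column statistics/covering metadata):

```python
def bbox_intersects(a, b):
    """Axis-aligned intersection test for (minx, miny, maxx, maxy) boxes."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

# Hypothetical per-row-group geometry bounds.
row_group_bounds = [
    (0.0, 0.0, 10.0, 10.0),
    (10.0, 0.0, 20.0, 10.0),
    (0.0, 10.0, 10.0, 20.0),
]
query = (12.0, 2.0, 18.0, 8.0)

# Only row groups whose bounds intersect the query bbox are scanned.
selected = [i for i, rg in enumerate(row_group_bounds) if bbox_intersects(rg, query)]
print(selected)  # [1]
```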

vibespatial.options
vibespatial.sjoin_nearest(left_df: vibespatial.api.GeoDataFrame, right_df: vibespatial.api.GeoDataFrame, how: str = 'inner', max_distance: float | None = None, lsuffix: str = 'left', rsuffix: str = 'right', distance_col: str | None = None, exclusive: bool = False) vibespatial.api.GeoDataFrame

Spatial join of two GeoDataFrames based on the distance between their geometries.

Results will include multiple output records for a single input record where there are multiple equidistant nearest or intersected neighbors.

Distance is calculated in CRS units and can be returned using the distance_col parameter.

See the User Guide page https://geopandas.readthedocs.io/en/latest/docs/user_guide/mergingdata.html for more details.

Parameters

left_df, right_df : GeoDataFrames

how : string, default ‘inner’

The type of join:

  • ‘left’: use keys from left_df; retain only left_df geometry column

  • ‘right’: use keys from right_df; retain only right_df geometry column

  • ‘inner’: use intersection of keys from both dfs; retain only left_df geometry column

max_distance : float, default None

Maximum distance within which to query for nearest geometry. Must be greater than 0. The max_distance used to search for nearest items in the tree may have a significant impact on performance by reducing the number of input geometries that are evaluated for nearest items in the tree.

lsuffix : string, default ‘left’

Suffix to apply to overlapping column names (left GeoDataFrame).

rsuffix : string, default ‘right’

Suffix to apply to overlapping column names (right GeoDataFrame).

distance_col : string, default None

If set, save the distances computed between matching geometries under a column of this name in the joined GeoDataFrame.

exclusive : bool, default False

If True, nearest geometries that are equal to the input geometry will not be returned.

Examples

>>> import geodatasets
>>> groceries = geopandas.read_file(
...     geodatasets.get_path("geoda.groceries")
... )
>>> chicago = geopandas.read_file(
...     geodatasets.get_path("geoda.chicago_health")
... ).to_crs(groceries.crs)
>>> chicago.head()
   ComAreaID  ...                                           geometry
0         35  ...  POLYGON ((-87.60914 41.84469, -87.60915 41.844...
1         36  ...  POLYGON ((-87.59215 41.81693, -87.59231 41.816...
2         37  ...  POLYGON ((-87.62880 41.80189, -87.62879 41.801...
3         38  ...  POLYGON ((-87.60671 41.81681, -87.60670 41.816...
4         39  ...  POLYGON ((-87.59215 41.81693, -87.59215 41.816...
[5 rows x 87 columns]
>>> groceries.head()
   OBJECTID     Ycoord  ...  Category                           geometry
0        16  41.973266  ...       NaN  MULTIPOINT ((-87.65661 41.97321))
1        18  41.696367  ...       NaN  MULTIPOINT ((-87.68136 41.69713))
2        22  41.868634  ...       NaN  MULTIPOINT ((-87.63918 41.86847))
3        23  41.877590  ...       new  MULTIPOINT ((-87.65495 41.87783))
4        27  41.737696  ...       NaN  MULTIPOINT ((-87.62715 41.73623))
[5 rows x 8 columns]
>>> groceries_w_communities = geopandas.sjoin_nearest(groceries, chicago)
>>> groceries_w_communities[["Chain", "community", "geometry"]].head(2)
               Chain    community                                geometry
0     VIET HOA PLAZA       UPTOWN   MULTIPOINT ((1168268.672 1933554.35))
1  COUNTY FAIR FOODS  MORGAN PARK  MULTIPOINT ((1162302.618 1832900.224))

To include the distances:

>>> groceries_w_communities = geopandas.sjoin_nearest(groceries, chicago, distance_col="distances")
>>> groceries_w_communities[["Chain", "community", "distances"]].head(2)
               Chain    community  distances
0     VIET HOA PLAZA       UPTOWN        0.0
1  COUNTY FAIR FOODS  MORGAN PARK        0.0

In the following example, we get multiple groceries for Uptown because all results are equidistant (in this case zero because they intersect). In fact, we get 4 results in total:

>>> chicago_w_groceries = geopandas.sjoin_nearest(groceries, chicago, distance_col="distances", how="right")
>>> uptown_results = chicago_w_groceries[chicago_w_groceries["community"] == "UPTOWN"]
>>> uptown_results[["Chain", "community"]]
            Chain community
30  VIET HOA PLAZA    UPTOWN
30      JEWEL OSCO    UPTOWN
30          TARGET    UPTOWN
30       Mariano's    UPTOWN

See Also

sjoin : binary predicate joins

GeoDataFrame.sjoin_nearest : equivalent method

Notes

Since this join relies on distances, results will be inaccurate if your geometries are in a geographic CRS.

Every operation in GeoPandas is planar, i.e. the potential third dimension is not taken into account.
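The equidistant-ties behaviour described above can be illustrated with a brute-force nearest search that reports all neighbours at the minimum distance, optionally bounded by max_distance. A conceptual sketch, not the library's tree-based implementation:

```python
import math

def nearest_all(pt, candidates, max_distance=None):
    """Return (indices of all candidates at the minimum distance, distance)."""
    dists = [math.dist(pt, c) for c in candidates]
    eligible = dists if max_distance is None else [d for d in dists if d <= max_distance]
    if not eligible:
        return [], None
    best = min(eligible)
    return [i for i, d in enumerate(dists) if d == best], best

# Two candidates tie at distance 1.0 -> both records are emitted,
# just as sjoin_nearest returns multiple rows for equidistant neighbors.
matches, dist = nearest_all((0.0, 0.0), [(1.0, 0.0), (0.0, 1.0), (3.0, 0.0)])
print(matches, dist)  # [0, 1] 1.0
```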

class vibespatial.RectClipBenchmark
dataset: str
rows: int
candidate_rows: int
fast_rows: int
fallback_rows: int
owned_elapsed_seconds: float
shapely_elapsed_seconds: float
property speedup_vs_shapely: float
class vibespatial.RectClipResult(*, geometries: numpy.ndarray | None = None, geometries_factory: object | None = None, row_count: int, candidate_rows: numpy.ndarray | None = None, candidate_rows_factory: object | None = None, fast_rows: numpy.ndarray | None = None, fast_rows_factory: object | None = None, fallback_rows: numpy.ndarray | None = None, fallback_rows_factory: object | None = None, runtime_selection: vibespatial.runtime.RuntimeSelection, precision_plan: vibespatial.runtime.precision.PrecisionPlan, robustness_plan: vibespatial.runtime.robustness.RobustnessPlan, owned_result: vibespatial.geometry.owned.OwnedGeometryArray | None = None, owned_result_rows: numpy.ndarray | None = None, owned_result_rows_factory: object | None = None)

Result of a rectangle clip operation.

geometries is lazily materialized from owned_result when accessed for the first time on the GPU point path, avoiding D->H->Shapely overhead unless a caller actually needs Shapely objects.

row_count
runtime_selection
precision_plan
robustness_plan
owned_result = None
property geometries: numpy.ndarray
property candidate_rows: numpy.ndarray
property fast_rows: numpy.ndarray
property fallback_rows: numpy.ndarray
property owned_result_rows: numpy.ndarray | None
vibespatial.benchmark_clip_by_rect(values: collections.abc.Sequence[object | None] | numpy.ndarray | vibespatial.geometry.owned.OwnedGeometryArray, xmin: float, ymin: float, xmax: float, ymax: float, *, dataset: str) RectClipBenchmark
vibespatial.clip_by_rect_owned(values: collections.abc.Sequence[object | None] | numpy.ndarray | vibespatial.geometry.owned.OwnedGeometryArray, xmin: float, ymin: float, xmax: float, ymax: float, *, dispatch_mode: vibespatial.runtime.ExecutionMode | str = ExecutionMode.AUTO, precision: vibespatial.runtime.precision.PrecisionMode | str = PrecisionMode.AUTO) RectClipResult
vibespatial.evaluate_geopandas_clip_by_rect(values: numpy.ndarray, xmin: float, ymin: float, xmax: float, ymax: float, *, prebuilt_owned: vibespatial.geometry.owned.OwnedGeometryArray | None = None) tuple[vibespatial.geometry.owned.OwnedGeometryArray | numpy.ndarray | None, vibespatial.runtime.ExecutionMode]
class vibespatial.GPURepairResult

Result of GPU make_valid repair.

repaired_owned: vibespatial.geometry.owned.OwnedGeometryArray | None
repaired_count: int
gpu_phases_used: tuple[str, ...]
still_invalid_rows: numpy.ndarray
vibespatial.gpu_repair_invalid_polygons(owned: vibespatial.geometry.owned.OwnedGeometryArray, invalid_rows: numpy.ndarray, geometries: numpy.ndarray | None = None, *, method: str = 'linework', keep_collapsed: bool = True) GPURepairResult | None

GPU-resident batch repair of invalid polygon geometries (Phase 16).

Implements the full make_valid pipeline on GPU with batch processing:

1. Collect all invalid polygon coordinates into one contiguous batch
2. Phase B: Close rings, remove duplicates, fix orientation (batched)
3. Phase A+C: Detect and split self-intersections (batched)
4. Phase D: Re-polygonize via overlay half-edge/face-walk pipeline (batched)
5. Merge repaired rows back into original owned array on device

When owned.device_state is available, the entire pipeline stays device-resident — no D->H coordinate transfers. The result carries repaired_owned so callers can stay on device (ADR-0005).

Returns None if GPU repair is not applicable (no GPU, no polygon families, or CuPy not available).

Parameters

owned : OwnedGeometryArray with device_state

invalid_rows : indices of invalid rows to repair

geometries : optional shapely geometry array (unused in device path)

method : repair method (only “linework” supported on GPU)

keep_collapsed : whether to keep collapsed geometries

class vibespatial.MakeValidBenchmark
dataset: str
rows: int
repaired_rows: int
compact_elapsed_seconds: float
baseline_elapsed_seconds: float
property speedup_vs_baseline: float
class vibespatial.MakeValidPlan
method: str
keep_collapsed: bool
stages: tuple[MakeValidStage, ...]
fusion_steps: tuple[vibespatial.runtime.fusion.PipelineStep, ...]
reason: str
class vibespatial.MakeValidPrimitive

Enum where members are also (and must be) strings

VALIDITY_MASK = 'validity_mask'
COMPACT_INVALID = 'compact_invalid'
SEGMENTIZE_INVALID = 'segmentize_invalid'
POLYGONIZE_REPAIR = 'polygonize_repair'
SCATTER_REPAIRED = 'scatter_repaired'
EMIT_GEOMETRY = 'emit_geometry'
class vibespatial.MakeValidResult
row_count: int
valid_rows: numpy.ndarray
repaired_rows: numpy.ndarray
null_rows: numpy.ndarray
method: str
keep_collapsed: bool
owned: object | None = None
selected: vibespatial.runtime.ExecutionMode
property geometries: numpy.ndarray

Lazily materialize Shapely geometries from owned array (ADR-0005).

When the GPU repair path produces a device-resident result, Shapely objects are only created when a caller actually accesses this property, avoiding a D->H transfer for callers that consume .owned directly.

class vibespatial.MakeValidStage
name: str
primitive: MakeValidPrimitive
purpose: str
inputs: tuple[str, ...]
outputs: tuple[str, ...]
cccl_mapping: tuple[str, ...]
disposition: vibespatial.runtime.fusion.IntermediateDisposition
geometry_producing: bool = False
vibespatial.benchmark_make_valid(values, *, method: str = 'linework', keep_collapsed: bool = True, dataset: str = 'make-valid', owned=None)
vibespatial.evaluate_geopandas_make_valid(values, *, method: str = 'linework', keep_collapsed: bool = True, prebuilt_owned=None) MakeValidResult

Run make_valid and return the full MakeValidResult.

Returns MakeValidResult so callers can access .owned for device-resident fast paths and .selected for dispatch event accuracy.

vibespatial.fusion_plan_for_make_valid(*, method: str = 'linework', keep_collapsed: bool = True)
vibespatial.make_valid_owned(values=None, *, method: str = 'linework', keep_collapsed: bool = True, owned=None, dispatch_mode: vibespatial.runtime.ExecutionMode | str = ExecutionMode.AUTO) MakeValidResult

Validate and repair geometries using compact-invalid-row pattern (ADR-0019).

Parameters

valuesarray-like of shapely geometries, optional

When owned is provided, values may be None – Shapely objects will only be materialized if GPU validity checks find invalid rows that require repair (lazy materialization per ADR-0005).

method : repair method (“linework” or “structure”)

keep_collapsed : whether to keep collapsed geometries

owned : optional pre-built OwnedGeometryArray (avoids shapely->owned conversion when data is already device-resident, eliminating D->H transfer for the validity check per ADR-0005)

dispatch_mode : requested execution mode (AUTO/GPU/CPU)

vibespatial.plan_make_valid_pipeline(*, method: str = 'linework', keep_collapsed: bool = True) MakeValidPlan
class vibespatial.BufferKernelResult(*, geometries: numpy.ndarray | None = None, row_count: int, fast_rows: numpy.ndarray, fallback_rows: numpy.ndarray, owned_result: vibespatial.geometry.owned.OwnedGeometryArray | None = None)

Result of a buffer kernel invocation.

When owned_result is set, geometries is materialized lazily on first access so that callers that stay on the device-resident path never pay for a D->H transfer.

row_count
fast_rows
fallback_rows
owned_result = None
property geometries: numpy.ndarray
class vibespatial.OffsetCurveKernelResult
geometries: numpy.ndarray
row_count: int
fast_rows: numpy.ndarray
fallback_rows: numpy.ndarray
owned_result: vibespatial.geometry.owned.OwnedGeometryArray | None = None
class vibespatial.StrokeBenchmark
dataset: str
rows: int
fast_rows: int
fallback_rows: int
owned_elapsed_seconds: float
shapely_elapsed_seconds: float
property speedup_vs_shapely: float
class vibespatial.StrokeKernelPlan
operation: StrokeOperation
stages: tuple[StrokeKernelStage, Ellipsis]
fusion_steps: tuple[vibespatial.runtime.fusion.PipelineStep, Ellipsis]
reason: str
class vibespatial.StrokeKernelStage
name: str
primitive: StrokePrimitive
purpose: str
inputs: tuple[str, Ellipsis]
outputs: tuple[str, Ellipsis]
cccl_mapping: tuple[str, Ellipsis]
disposition: vibespatial.runtime.fusion.IntermediateDisposition
geometry_producing: bool = False
class vibespatial.StrokeOperation

Enum where members are also (and must be) strings

BUFFER = 'buffer'
OFFSET_CURVE = 'offset_curve'
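The "members are also (and must be) strings" behavior comes from mixing str into Enum; a minimal standalone illustration (class name hypothetical):

```python
from enum import Enum

class StrokeOp(str, Enum):        # members are also strings
    BUFFER = "buffer"
    OFFSET_CURVE = "offset_curve"

# A member compares equal to its string value, so APIs can accept either
# the enum member or the raw string and normalize via the constructor.
```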
class vibespatial.StrokePrimitive

Enum where members are also (and must be) strings

EXPAND_DISTANCES = 'expand_distances'
EMIT_EDGE_FRAMES = 'emit_edge_frames'
CLASSIFY_VERTICES = 'classify_vertices'
EMIT_ARCS = 'emit_arcs'
PREFIX_SUM = 'prefix_sum'
SCATTER = 'scatter'
EMIT_GEOMETRY = 'emit_geometry'
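The PREFIX_SUM and SCATTER primitives follow the standard pattern for variable-sized parallel output: compute per-row output counts, take an exclusive prefix sum to get each row's write offset, then let every row write into its own contiguous slot. A serial NumPy sketch (a stand-in for the parallel kernels, with made-up counts):

```python
import numpy as np

# Per-row output counts (e.g. arcs emitted per buffered vertex).
counts = np.array([3, 0, 2, 4])

# Exclusive prefix sum -> write offset per row, plus the total output size.
offsets = np.concatenate(([0], np.cumsum(counts)))
out = np.empty(offsets[-1], dtype=np.int64)

# Scatter: each row writes its values into its own contiguous slot.
for row, (start, stop) in enumerate(zip(offsets[:-1], offsets[1:])):
    out[start:stop] = row
```

Because offsets are known before any row writes, the scatter step has no write conflicts, which is what makes it GPU-friendly.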
vibespatial.benchmark_offset_curve(values, *, distance: float, join_style: str = 'mitre', dataset: str = 'offset-curve') StrokeBenchmark
vibespatial.benchmark_point_buffer(values, *, distance: float, quad_segs: int = 16, dataset: str = 'point-buffer') StrokeBenchmark
vibespatial.evaluate_geopandas_buffer(values, distance, *, quad_segs: int, cap_style, join_style, mitre_limit: float, single_sided: bool, prebuilt_owned=None)
vibespatial.evaluate_geopandas_offset_curve(values, distance, *, quad_segs: int, join_style, mitre_limit: float)
vibespatial.fusion_plan_for_stroke(operation: StrokeOperation | str)
vibespatial.offset_curve_owned(values: collections.abc.Sequence[object | None] | numpy.ndarray, distance, *, quad_segs: int = 8, join_style: str = 'round', mitre_limit: float = 5.0) OffsetCurveKernelResult
vibespatial.plan_stroke_kernel(operation: StrokeOperation | str) StrokeKernelPlan
vibespatial.point_buffer_owned(values: collections.abc.Sequence[object | None] | numpy.ndarray, distance, *, quad_segs: int = 16) BufferKernelResult
vibespatial.GEOMETRY_BUFFER_SCHEMAS: dict[GeometryFamily, GeometryBufferSchema]
class vibespatial.BufferKind

Enum where members are also (and must be) strings

VALIDITY = 'validity'
TAG = 'tag'
OFFSET = 'offset'
COORDINATE = 'coordinate'
BOUNDS = 'bounds'
class vibespatial.BufferSpec
name: str
kind: BufferKind
dtype: str
level: str
required: bool = True
description: str = ''
class vibespatial.GeometryBufferSchema
family: GeometryFamily
coord_precision: vibespatial.runtime.precision.PrecisionMode
coord_layout: str
validity: BufferSpec
x: BufferSpec
y: BufferSpec
geometry_offsets: BufferSpec | None = None
part_offsets: BufferSpec | None = None
ring_offsets: BufferSpec | None = None
bounds: BufferSpec | None = None
supports_mixed_parent: bool = True
empty_via_zero_span: bool = True
notes: tuple[str, Ellipsis] = ()
property coordinate_buffers: tuple[BufferSpec, BufferSpec]
property offset_buffers: tuple[BufferSpec, Ellipsis]
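The offset-buffer layout these specs describe can be illustrated with plain NumPy arrays: coordinates for all geometries live in shared x/y buffers, and `geometry_offsets[i]..geometry_offsets[i+1]` spans geometry i's vertices. The buffers below are toy data:

```python
import numpy as np

# Two linestrings flattened into shared coordinate buffers.
x = np.array([0.0, 1.0, 2.0, 5.0, 6.0])
y = np.array([0.0, 1.0, 0.0, 5.0, 5.0])
geometry_offsets = np.array([0, 3, 5])   # row 0 -> 3 vertices, row 1 -> 2

def vertices(row):
    s, e = geometry_offsets[row], geometry_offsets[row + 1]
    return list(zip(x[s:e].tolist(), y[s:e].tolist()))

# An empty geometry is simply a zero-length span (empty_via_zero_span):
# consecutive equal offsets, no sentinel values in the coordinate buffers.
```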
class vibespatial.GeometryFamily

Enum where members are also (and must be) strings

POINT = 'point'
LINESTRING = 'linestring'
POLYGON = 'polygon'
MULTIPOINT = 'multipoint'
MULTILINESTRING = 'multilinestring'
MULTIPOLYGON = 'multipolygon'
vibespatial.get_geometry_buffer_schema(family: GeometryFamily | str) GeometryBufferSchema
class vibespatial.BufferSharingMode

Enum where members are also (and must be) strings

COPY = 'copy'
SHARE = 'share'
AUTO = 'auto'
class vibespatial.DiagnosticEvent
kind: DiagnosticKind
detail: str
residency: vibespatial.runtime.residency.Residency
visible_to_user: bool = False
elapsed_seconds: float = 0.0
bytes_transferred: int = 0
class vibespatial.DiagnosticKind

Enum where members are also (and must be) strings

CREATED = 'created'
TRANSFER = 'transfer'
MATERIALIZATION = 'materialization'
RUNTIME = 'runtime'
CACHE = 'cache'
class vibespatial.FamilyGeometryBuffer
family: vibespatial.geometry.buffers.GeometryFamily
schema: vibespatial.geometry.buffers.GeometryBufferSchema
row_count: int
x: numpy.ndarray
y: numpy.ndarray
geometry_offsets: numpy.ndarray
empty_mask: numpy.ndarray
part_offsets: numpy.ndarray | None = None
ring_offsets: numpy.ndarray | None = None
bounds: numpy.ndarray | None = None
host_materialized: bool = True
class vibespatial.GeoArrowBufferView
family: vibespatial.geometry.buffers.GeometryFamily
x: numpy.ndarray
y: numpy.ndarray
geometry_offsets: numpy.ndarray
empty_mask: numpy.ndarray
part_offsets: numpy.ndarray | None = None
ring_offsets: numpy.ndarray | None = None
bounds: numpy.ndarray | None = None
shares_memory: bool = False
class vibespatial.MixedGeoArrowView
validity: numpy.ndarray
tags: numpy.ndarray
family_row_offsets: numpy.ndarray
families: dict[vibespatial.geometry.buffers.GeometryFamily, GeoArrowBufferView]
shares_memory: bool = False
class vibespatial.OwnedGeometryArray(validity: numpy.ndarray | None, tags: numpy.ndarray | None, family_row_offsets: numpy.ndarray | None, families: dict[vibespatial.geometry.buffers.GeometryFamily, FamilyGeometryBuffer], residency: vibespatial.runtime.residency.Residency = Residency.HOST, diagnostics: list[DiagnosticEvent] | None = None, runtime_history: list[vibespatial.runtime.RuntimeSelection] | None = None, geoarrow_backed: bool = False, shares_geoarrow_memory: bool = False, device_adopted: bool = False, device_state: OwnedGeometryDeviceState | None = None, _row_count: int | None = None)

Columnar geometry storage with optional device-resident metadata.

The three routing metadata arrays – validity, tags, and family_row_offsets – are exposed as properties. When the array is device-resident, the host numpy copies may be None internally; accessing any property lazily transfers from GPU to CPU, preserving full backward compatibility for host consumers while allowing GPU-only pipelines to avoid the D->H transfer entirely.

families
residency
diagnostics: list[DiagnosticEvent] = None
runtime_history: list[vibespatial.runtime.RuntimeSelection] = None
geoarrow_backed = False
shares_geoarrow_memory = False
device_adopted = False
device_state = None
property validity: numpy.ndarray
property tags: numpy.ndarray
property family_row_offsets: numpy.ndarray
property row_count: int
property is_indexed_view: bool

True when this array is a virtual indexed view over a compact base.

family_has_rows(family: vibespatial.geometry.buffers.GeometryFamily) bool

Check whether family has at least one geometry row to process.

Reads from whichever side is authoritative: device_state when populated, host FamilyGeometryBuffer otherwise. This avoids the bug where host stubs with host_materialized=False report empty offsets even when device buffers have real data.

move_to(target: vibespatial.runtime.residency.Residency | str, *, trigger: vibespatial.runtime.residency.TransferTrigger | str, reason: str | None = None) OwnedGeometryArray
record_runtime_selection(selection: vibespatial.runtime.RuntimeSelection) None
cache_bounds(bounds: numpy.ndarray) None
cache_device_bounds(family: vibespatial.geometry.buffers.GeometryFamily, bounds: vibespatial.cuda._runtime.DeviceArray) None
classmethod concat(arrays: list[OwnedGeometryArray]) OwnedGeometryArray

Concatenate multiple OwnedGeometryArrays at the buffer level.

When ALL inputs are device-resident (residency == DEVICE) and have device state populated, concatenation is performed entirely on GPU using CuPy – no D->H transfer occurs. The result is a device-resident OGA with lazy host stubs.

When ANY input is host-resident (or lacks device state), falls back to the existing host-side concatenation path.

diagnostics_report() dict[str, Any]
take(indices: numpy.ndarray) OwnedGeometryArray

Return a new OwnedGeometryArray containing only the rows at indices.

Operates entirely at the buffer level – no Shapely round-trip. When the array is DEVICE-resident or indices are already on device (CuPy / __cuda_array_interface__), dispatches to device_take() to keep all gathering on GPU. Otherwise returns a HOST-resident array.

When the indices have high repetition (many output rows mapping to few unique source rows), returns a virtual indexed view that stores only the unique rows and an index map, avoiding the physical coordinate copy. This is transparent to consumers: kernel dispatch triggers _resolve(), and to_shapely() expands via cheap Python object references.

Memory pressure is handled by the ADR-0040 tiered allocator: Tier B (default) retries with gc.collect on OOM; Tier C (opt-in) uses CUDA managed memory for datasets exceeding VRAM.
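The high-repetition indexed view can be sketched with `np.unique(..., return_inverse=True)`: store only the unique source rows plus a small index map, and expand on demand. This is an illustration of the idea, not the library's internals:

```python
import numpy as np

indices = np.array([2, 2, 0, 2, 0, 2])       # high repetition: 2 unique rows
unique_rows, index_map = np.unique(indices, return_inverse=True)

# A "virtual indexed view" stores only the unique rows and the index map,
# avoiding a physical copy of every output row.
base = np.array([10.0, 11.0, 12.0])
view = (base[unique_rows], index_map)

def resolve(view):
    compact, index_map = view
    return compact[index_map]                 # expand on demand
```

For 6 output rows drawn from 2 unique sources, the view stores 2 rows instead of 6; the saving grows with the repetition factor.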

device_take(indices) OwnedGeometryArray

Device-side take — all gathering stays on GPU.

Accepts numpy or CuPy indices/mask. Returns a DEVICE-resident OwnedGeometryArray with host buffers marked host_materialized=False. The host side is lazily populated by _ensure_host_state() on demand.

When indices have high repetition, returns a virtual indexed view instead of performing a full device gather. See take() for the design rationale.

to_shapely() list[object | None]
to_wkb(*, hex: bool = False) list[bytes | str | None]
to_geoarrow(*, sharing: BufferSharingMode | str = BufferSharingMode.COPY) MixedGeoArrowView
vibespatial.from_geoarrow(view: MixedGeoArrowView, *, residency: vibespatial.runtime.residency.Residency = Residency.HOST, sharing: BufferSharingMode | str = BufferSharingMode.COPY) OwnedGeometryArray
vibespatial.from_shapely_geometries(geometries: list[object | None] | tuple[object | None, Ellipsis], *, residency: vibespatial.runtime.residency.Residency = Residency.HOST) OwnedGeometryArray
vibespatial.from_wkb(values: list[bytes | str | None] | tuple[bytes | str | None, Ellipsis], *, on_invalid: str = 'raise', residency: vibespatial.runtime.residency.Residency = Residency.HOST) OwnedGeometryArray
class vibespatial.GeoArrowBridgeBenchmark
operation: str
sharing: str
geometry_type: str
rows: int
elapsed_seconds: float
shares_memory: bool
class vibespatial.GeoArrowCodecPlan
operation: vibespatial.io.support.IOOperation
selected_path: vibespatial.io.support.IOPathKind
canonical_gpu: bool
device_codec_available: bool
zero_copy_adoption: bool
lazy_materialization: bool
reason: str
class vibespatial.GeoParquetChunkPlan
chunk_index: int
row_groups: tuple[int, Ellipsis] | None
estimated_rows: int
class vibespatial.GeoParquetEngineBenchmark
backend: str
geometry_encoding: str
rows: int
chunk_rows: int | None
chunk_count: int
elapsed_seconds: float
rows_per_second: float
planning_elapsed_seconds: float = 0.0
scan_elapsed_seconds: float = 0.0
decode_elapsed_seconds: float = 0.0
concat_elapsed_seconds: float = 0.0
class vibespatial.GeoParquetEnginePlan
selected_path: vibespatial.io.support.IOPathKind
backend: str
geometry_encoding: str | None
chunk_count: int
target_chunk_rows: int | None
uses_row_group_pruning: bool
reason: str
class vibespatial.GeoParquetScanPlan
selected_path: vibespatial.io.support.IOPathKind
canonical_gpu: bool
uses_pylibcudf: bool
bbox_requested: bool
metadata_summary_available: bool
metadata_source: str | None
uses_covering_bbox: bool
uses_point_encoding_pushdown: bool
row_group_pushdown: bool
planner_strategy: str
available_row_groups: int | None
selected_row_groups: tuple[int, Ellipsis] | None
decoded_row_fraction_estimate: float | None
pruned_row_group_fraction: float | None
reason: str
class vibespatial.NativeGeometryBenchmark
operation: str
geometry_type: str
implementation: str
rows: int
elapsed_seconds: float
rows_per_second: float
class vibespatial.WKBBridgeBenchmark
operation: str
geometry_type: str
implementation: str
rows: int
fallback_rows: int
elapsed_seconds: float
rows_per_second: float
class vibespatial.WKBBridgePlan
operation: vibespatial.io.support.IOOperation
selected_path: vibespatial.io.support.IOPathKind
canonical_gpu: bool
device_codec_available: bool
reason: str
vibespatial.benchmark_geoarrow_bridge(*, operation: str, geometry_type: str = 'point', rows: int = 100000, repeat: int = 20, seed: int = 0) list[GeoArrowBridgeBenchmark]
vibespatial.benchmark_geoparquet_scan_engine(*, geometry_type: str = 'point', rows: int = 100000, geometry_encoding: str = 'geoarrow', chunk_rows: int | None = None, compression: str | None = None, backend: str = 'cpu', repeat: int = 5, seed: int = 0) GeoParquetEngineBenchmark
vibespatial.benchmark_native_geometry_codec(*, operation: str, geometry_type: str = 'point', rows: int = 100000, repeat: int = 5, seed: int = 0) list[NativeGeometryBenchmark]
vibespatial.benchmark_wkb_bridge(*, operation: str, geometry_type: str = 'point', rows: int = 100000, repeat: int = 5, seed: int = 0) list[WKBBridgeBenchmark]
vibespatial.decode_owned_geoarrow(view: vibespatial.geometry.owned.MixedGeoArrowView) vibespatial.geometry.owned.OwnedGeometryArray
vibespatial.decode_wkb_owned(values: list[bytes | str | None] | tuple[bytes | str | None, Ellipsis], *, on_invalid: str = 'raise') vibespatial.geometry.owned.OwnedGeometryArray
vibespatial.encode_owned_geoarrow(array: vibespatial.geometry.owned.OwnedGeometryArray) vibespatial.geometry.owned.MixedGeoArrowView
vibespatial.encode_owned_geoarrow_array(array: vibespatial.geometry.owned.OwnedGeometryArray, *, field_name: str = 'geometry', crs: Any | None = None, interleaved: bool = True)
vibespatial.encode_wkb_owned(array: vibespatial.geometry.owned.OwnedGeometryArray, *, hex: bool = False) list[bytes | str | None]
vibespatial.geodataframe_from_arrow(table, *, geometry: str | None = None, to_pandas_kwargs: dict | None = None)
vibespatial.geodataframe_to_arrow(df, *, index: bool | None = None, geometry_encoding: str = 'WKB', interleaved: bool = True, include_z: bool | None = None)
vibespatial.geoseries_from_arrow(arr, **kwargs)
vibespatial.geoseries_from_owned(array: vibespatial.geometry.owned.OwnedGeometryArray, *, name: str = 'geometry', crs: Any | None = None, interleaved: bool = True, use_device_array: bool = True, **kwargs)
vibespatial.geoseries_to_arrow(series, *, geometry_encoding: str = 'WKB', interleaved: bool = True, include_z: bool | None = None)
vibespatial.has_pyarrow_support() bool
vibespatial.has_pylibcudf_support() bool
vibespatial.plan_geoarrow_codec(operation: vibespatial.io.support.IOOperation | str) GeoArrowCodecPlan
vibespatial.plan_geoparquet_engine(*, geo_metadata: dict[str, Any] | None, scan_plan: GeoParquetScanPlan, chunk_plans: tuple[GeoParquetChunkPlan, Ellipsis], target_chunk_rows: int | None, read_plan: GeoParquetReadBackendPlan) GeoParquetEnginePlan
vibespatial.plan_geoparquet_scan(*, bbox: tuple[float, float, float, float] | None = None, geo_metadata: dict[str, Any] | None = None, metadata_summary: vibespatial.io.geoparquet_planner.GeoParquetMetadataSummary | None = None, planner_strategy: str = 'auto') GeoParquetScanPlan
vibespatial.plan_wkb_bridge(operation: vibespatial.io.support.IOOperation | str) WKBBridgePlan
vibespatial.plan_wkb_partition(values: list[bytes | str | None] | tuple[bytes | str | None, Ellipsis]) WKBPartitionPlan
vibespatial.read_geoparquet(path, *, columns=None, storage_options=None, bbox=None, to_pandas_kwargs=None, **kwargs)

Read a GeoParquet file into a GeoDataFrame.

When PyArrow is available the reader plans row-group selection from spatial metadata, keeps the table columnar through scan/decode, and only materializes a GeoDataFrame at the terminal public read boundary.

Aliased as vibespatial.read_parquet().

Parameters

path : str or Path

Path to the GeoParquet file.

columns : list of str, optional

Subset of columns to read.

storage_options : dict, optional

Storage options for fsspec-compatible filesystems.

bbox : tuple of (minx, miny, maxx, maxy), optional

Spatial filter bounding box for row-group pruning.

to_pandas_kwargs : dict, optional

Extra keyword arguments passed to pyarrow.Table.to_pandas().

**kwargs

Passed through to the underlying Parquet reader.

Returns

GeoDataFrame

vibespatial.read_geoparquet_native(path, *, columns=None, storage_options=None, bbox=None, chunk_rows: int | None = None, backend: str = 'auto', to_pandas_kwargs=None, **kwargs) vibespatial.api._native_results.NativeTabularResult

Read a GeoParquet file into the shared native tabular result boundary.

vibespatial.read_geoparquet_owned(path, *, columns=None, storage_options=None, bbox=None, chunk_rows: int | None = None, backend: str = 'auto', **kwargs) vibespatial.geometry.owned.OwnedGeometryArray
vibespatial.write_geoparquet(df, path, *, index: bool | None = None, compression: str | None = 'snappy', geometry_encoding: str = 'WKB', schema_version: str | None = None, write_covering_bbox: bool = False, **kwargs) None
class vibespatial.ShapefileIngestBenchmark
implementation: str
geometry_type: str
rows: int
elapsed_seconds: float
rows_per_second: float
class vibespatial.ShapefileIngestPlan
implementation: str
selected_strategy: str
uses_pyogrio_container: bool
uses_arrow_batch: bool
uses_native_wkb_decode: bool
reason: str
class vibespatial.ShapefileOwnedBatch
geometry: vibespatial.geometry.owned.OwnedGeometryArray
attributes_table: object
metadata: dict[str, object]
class vibespatial.VectorFilePlan
format: vibespatial.io.support.IOFormat
operation: vibespatial.io.support.IOOperation
selected_path: vibespatial.io.support.IOPathKind
driver: str
implementation: str
reason: str
vibespatial.benchmark_shapefile_ingest(*, geometry_type: str = 'point', rows: int = 100000, repeat: int = 5, seed: int = 0) list[ShapefileIngestBenchmark]
vibespatial.plan_shapefile_ingest(*, prefer: str = 'auto') ShapefileIngestPlan
vibespatial.plan_vector_file_io(filename, *, operation: vibespatial.io.support.IOOperation | str, driver: str | None = None) VectorFilePlan
vibespatial.read_geojson_native(source: str | pathlib.Path, *, prefer: str = 'auto', objective: str = 'pipeline', track_properties: bool = True, target_crs: str | None = None)
vibespatial.read_shapefile_native(source: str | pathlib.Path, *, bbox=None, columns=None, rows=None, target_crs: str | None = None, **kwargs)
vibespatial.read_shapefile_owned(source: str | pathlib.Path, *, bbox=None, columns=None, rows=None, **kwargs) ShapefileOwnedBatch
vibespatial.read_vector_file(filename, bbox=None, mask=None, columns=None, rows=None, engine=None, *, target_crs: str | None = None, build_index: bool = False, **kwargs)

Read a spatial file into a GeoDataFrame.

Supports GeoParquet, Feather/Arrow, Shapefile, GeoPackage, File Geodatabase, FlatGeobuf, GeoJSON, GeoJSON-Seq, GML, GPX, TopoJSON, WKT, CSV, KML, OSM PBF, and any format readable by pyogrio/fiona.

GPU acceleration is automatic for GeoJSON, Shapefile, FlatGeobuf, WKT, CSV, KML, and OSM PBF formats. GeoJSON and Shapefile auto-routing optimizes for pipeline shape rather than isolated read latency: eligible unfiltered reads prefer the repo-owned native ingest path so downstream GPU work does not immediately pay a host-to-device promotion. FlatGeobuf follows the same policy for eligible local unfiltered reads, using the repo-owned direct FlatBuffer decoder by default. CSV and KML try the repo-owned GPU parser for eligible local unfiltered reads instead of demoting solely because of a static file-size gate. WKT and full-data OSM PBF reads use the native GPU path. Standard OSM layers (points, lines, multipolygons) may use the pyogrio compatibility path when the native all-data parser is not required.

mask also stays on the shared native Arrow/WKB boundary for the promoted pyogrio-backed vector containers when the request shape stays compatible. bbox, columns, and rows continue to work on that same boundary. Explicit engine="pyogrio" stays on the repo-owned native boundary for GeoJSON, Shapefile, and the promoted vector containers whose public semantics already match that boundary. Public automatic Shapefile reads prefer the direct SHP pipeline, while explicit engine="pyogrio" Shapefile reads stay on the shared Arrow/WKB bridge.

Aliased as vibespatial.read_file().

Parameters

filename : str or Path

Path to the vector file.

bbox : tuple of (minx, miny, maxx, maxy), optional

Spatial filter bounding box. Disables the GPU fast path.

mask : Geometry or GeoDataFrame, optional

Spatial filter mask geometry. Promoted pyogrio-backed vector containers keep this on the shared native Arrow/WKB boundary when the request shape is compatible; other formats still use the compatibility path.

columns : list of str, optional

Subset of columns to read. Disables the GPU fast path.

rows : int or slice, optional

Subset of rows to read. Disables the GPU fast path.

engine : str, optional

Force a specific I/O engine ("pyogrio" or "fiona"). Disables GPU auto-routing.

target_crs : str, optional

Target CRS to reproject coordinates into (e.g. "EPSG:3857"). When the GPU path is used, the reprojection is fused with ingest via vibeProj GPU transform (no separate pass required). When the CPU path is used, the result is reprojected via gdf.to_crs() as a post-read step. For formats without an embedded CRS (WKT, CSV, KML, OSM PBF), the target CRS is set as a label without reprojection.

build_index : bool, default False

When True and the GPU path is used, build a GPU-resident packed Hilbert R-tree spatial index fused with ingest. The index is accessible via the GeoDataFrame.gpu_spatial_index property.

**kwargs

Passed through to the underlying engine. For OSM PBF GPU reads, the repo-owned path also accepts:

  • tags: True, False, or "ways" to control tag decode

  • geometry_only: skip tag and ID export for geometry-only reads

  • layer: "points", "lines", "multipolygons", "ways", "relations", "multilinestrings", "other_relations", or "all"

Returns

GeoDataFrame

vibespatial.read_vector_file_native(filename, bbox=None, mask=None, columns=None, rows=None, engine=None, *, target_crs: str | None = None, **kwargs)

Read a spatial file into the shared native tabular boundary.

vibespatial.write_vector_file(df, filename, driver=None, schema=None, index=None, **kwargs)
class vibespatial.GeoJSONIngestBenchmark
implementation: str
geometry_type: str
rows: int
elapsed_seconds: float
rows_per_second: float
class vibespatial.GeoJSONIngestPlan
implementation: str
selected_strategy: str
objective: str
uses_stream_tokenizer: bool
uses_pylibcudf: bool
uses_native_geometry_assembly: bool
reason: str
class vibespatial.GeoJSONOwnedBatch
geometry: vibespatial.geometry.owned.OwnedGeometryArray
property properties: list[dict[str, object]]
without_properties() GeoJSONOwnedBatch
vibespatial.benchmark_geojson_ingest(*, geometry_type: str = 'point', rows: int = 100000, repeat: int = 5, seed: int = 0) list[GeoJSONIngestBenchmark]
vibespatial.plan_geojson_ingest(*, prefer: str = 'auto', objective: str = 'pipeline') GeoJSONIngestPlan
vibespatial.read_geojson_owned(source: str | bytes | bytearray | memoryview | pathlib.Path, *, prefer: str = 'auto', objective: str = 'pipeline', track_properties: bool = True) GeoJSONOwnedBatch
class vibespatial.GeoParquetMetadataSummary
source: str
row_group_rows: numpy.ndarray
xmin: numpy.ndarray
ymin: numpy.ndarray
xmax: numpy.ndarray
ymax: numpy.ndarray
source_paths: tuple[str, Ellipsis] | None = None
row_group_source_indices: numpy.ndarray | None = None
row_group_source_row_groups: numpy.ndarray | None = None
property row_group_count: int
property total_rows: int
class vibespatial.GeoParquetPlannerBenchmark
strategy: str
elapsed_seconds: float
selected_row_groups: int
decoded_row_fraction: float
pruned_row_group_fraction: float
class vibespatial.GeoParquetPruneResult
strategy: str
selected_row_groups: tuple[int, Ellipsis]
decoded_row_count: int
decoded_row_fraction: float
pruned_row_group_fraction: float
total_row_groups: int
total_rows: int
metadata_source: str
vibespatial.benchmark_geoparquet_planner(summary: GeoParquetMetadataSummary, bbox: BBox, *, repeat: int = 5) tuple[GeoParquetPlannerBenchmark, Ellipsis]
vibespatial.build_geoparquet_metadata_summary(*, source: str, row_group_rows: list[int] | tuple[int, Ellipsis] | numpy.ndarray, xmin: list[float] | tuple[float, Ellipsis] | numpy.ndarray, ymin: list[float] | tuple[float, Ellipsis] | numpy.ndarray, xmax: list[float] | tuple[float, Ellipsis] | numpy.ndarray, ymax: list[float] | tuple[float, Ellipsis] | numpy.ndarray, source_paths: list[str] | tuple[str, Ellipsis] | None = None, row_group_source_indices: list[int] | tuple[int, Ellipsis] | numpy.ndarray | None = None, row_group_source_row_groups: list[int] | tuple[int, Ellipsis] | numpy.ndarray | None = None) GeoParquetMetadataSummary
vibespatial.select_row_groups(summary: GeoParquetMetadataSummary, bbox: BBox, *, strategy: str = 'auto') GeoParquetPruneResult
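Pruning against the summary's per-row-group bounds reduces to a vectorized bbox-intersection test: a row group survives only if its bounds overlap the query box. A minimal sketch with hypothetical function name and toy statistics:

```python
import numpy as np

def select_row_groups_bbox(xmin, ymin, xmax, ymax, bbox):
    """Keep row groups whose bounds intersect the query bbox."""
    qxmin, qymin, qxmax, qymax = bbox
    hit = (xmin <= qxmax) & (xmax >= qxmin) & (ymin <= qymax) & (ymax >= qymin)
    return np.flatnonzero(hit)

# Three row groups with per-group bounds (as recorded in Parquet metadata).
xmin = np.array([0.0, 10.0, 20.0]); xmax = np.array([5.0, 15.0, 25.0])
ymin = np.array([0.0,  0.0,  0.0]); ymax = np.array([5.0,  5.0,  5.0])
selected = select_row_groups_bbox(xmin, ymin, xmax, ymax, (12.0, 1.0, 22.0, 4.0))
```

Row group 0 lies entirely left of the query box and is pruned; only groups 1 and 2 are decoded.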
vibespatial.IO_SUPPORT_MATRIX: dict[IOFormat, IOSupportEntry]
class vibespatial.IOFormat

Enum where members are also (and must be) strings

GEOARROW = 'geoarrow'
GEOPARQUET = 'geoparquet'
WKB = 'wkb'
WKT = 'wkt'
CSV = 'csv'
GEOJSON = 'geojson'
KML = 'kml'
SHAPEFILE = 'shapefile'
OSM_PBF = 'osm-pbf'
GEOPACKAGE = 'geopackage'
FILE_GEODATABASE = 'file-geodatabase'
FLATGEOBUF = 'flatgeobuf'
GML = 'gml'
GPX = 'gpx'
TOPOJSON = 'topojson'
GEOJSONSEQ = 'geojsonseq'
GDAL_LEGACY = 'gdal-legacy'
class vibespatial.IOOperation

Enum where members are also (and must be) strings

READ = 'read'
WRITE = 'write'
SCAN = 'scan'
DECODE = 'decode'
ENCODE = 'encode'
class vibespatial.IOPathKind

Enum where members are also (and must be) strings

GPU_NATIVE = 'gpu_native'
HYBRID = 'hybrid'
FALLBACK = 'fallback'
class vibespatial.IOPlan
format: IOFormat
operation: IOOperation
selected_path: IOPathKind
canonical_gpu: bool
reason: str
class vibespatial.IOSupportEntry
format: IOFormat
default_path: IOPathKind
read_path: IOPathKind
write_path: IOPathKind
canonical_gpu: bool
reason: str
vibespatial.plan_io_support(format: IOFormat | str, operation: IOOperation | str) IOPlan
vibespatial.compute_geometry_bounds(geometry_array: vibespatial.geometry.owned.OwnedGeometryArray, *, dispatch_mode: vibespatial.runtime.ExecutionMode | str = ExecutionMode.AUTO, precision: vibespatial.runtime.precision.PrecisionMode | str = PrecisionMode.AUTO)
vibespatial.compute_morton_keys(geometry_array: vibespatial.geometry.owned.OwnedGeometryArray, *, dispatch_mode: vibespatial.runtime.ExecutionMode = ExecutionMode.CPU, precision: vibespatial.runtime.precision.PrecisionMode | str = PrecisionMode.AUTO, bits: int = 16)
vibespatial.compute_offset_spans(geometry_array: vibespatial.geometry.owned.OwnedGeometryArray, *, level: str = 'geometry', dispatch_mode: vibespatial.runtime.ExecutionMode = ExecutionMode.CPU) dict[vibespatial.geometry.buffers.GeometryFamily, object]
vibespatial.compute_total_bounds(geometry_array: vibespatial.geometry.owned.OwnedGeometryArray, *, dispatch_mode: vibespatial.runtime.ExecutionMode | str = ExecutionMode.AUTO, precision: vibespatial.runtime.precision.PrecisionMode | str = PrecisionMode.AUTO) tuple[float, float, float, float]
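A Morton (Z-order) key interleaves the bits of two quantized coordinates, so nearby cells tend to get nearby keys; this is why spatial sorts and index builds use them. A pure-Python sketch of the bit interleaving (not the library's kernel):

```python
def morton_key(ix, iy, bits=16):
    """Interleave the bits of two quantized coordinates into a Z-order key."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b)       # x bits in even positions
        key |= ((iy >> b) & 1) << (2 * b + 1)   # y bits in odd positions
    return key
```

The `bits` parameter bounds the quantization resolution: with 16 bits per axis the key fits in 32 bits.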
class vibespatial.BinaryPredicateResult
predicate: str
values: numpy.ndarray
row_count: int
candidate_rows: numpy.ndarray
coarse_true_rows: numpy.ndarray
coarse_false_rows: numpy.ndarray
runtime_selection: vibespatial.runtime.RuntimeSelection
precision_plan: vibespatial.runtime.precision.PrecisionPlan
robustness_plan: vibespatial.runtime.robustness.RobustnessPlan
class vibespatial.NullBehavior

Enum where members are also (and must be) strings

PROPAGATE = 'propagate'
FALSE = 'false'
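The two null behaviors can be illustrated with a small NumPy helper (hypothetical name): PROPAGATE keeps a null result where either input was null, while FALSE coerces those rows to False.

```python
import numpy as np

def apply_null_behavior(values, null_mask, behavior="propagate"):
    """PROPAGATE keeps None where inputs were null; FALSE coerces to False."""
    if behavior == "false":
        return np.where(null_mask, False, values)
    out = values.astype(object)   # object dtype so rows can hold None
    out[null_mask] = None
    return out

vals = np.array([True, False, True])
nulls = np.array([False, False, True])
```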
vibespatial.benchmark_binary_predicate(predicate: str, left: PredicateInput, right: object | PredicateInput, **kwargs: Any) dict[str, int]
vibespatial.evaluate_binary_predicate(predicate: str, left: PredicateInput, right: object | PredicateInput, *, dispatch_mode: vibespatial.runtime.ExecutionMode | str = ExecutionMode.AUTO, precision: vibespatial.runtime.precision.PrecisionMode | str = PrecisionMode.AUTO, null_behavior: NullBehavior | str = NullBehavior.PROPAGATE, **kwargs: Any) BinaryPredicateResult
vibespatial.evaluate_geopandas_binary_predicate(predicate: str, left: numpy.ndarray | vibespatial.geometry.owned.OwnedGeometryArray, right: object | numpy.ndarray | vibespatial.geometry.owned.OwnedGeometryArray, **kwargs: Any) numpy.ndarray | None
vibespatial.supports_binary_predicate(name: str) bool
vibespatial.EXECUTION_MODE_ENV_VAR = 'VIBESPATIAL_EXECUTION_MODE'
class vibespatial.ExecutionMode

Enum where members are also (and must be) strings

AUTO = 'auto'
GPU = 'gpu'
CPU = 'cpu'
class vibespatial.RuntimeSelection
requested: ExecutionMode
selected: ExecutionMode
reason: str
vibespatial.get_requested_mode() ExecutionMode

Return the session-wide requested execution mode.

Priority: explicit set_execution_mode() > env var > AUTO.
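The resolution order can be sketched as a small pure function; the helper below is illustrative, not the library's internal code, though it reads the documented VIBESPATIAL_EXECUTION_MODE variable:

```python
import os

MODES = {"auto", "gpu", "cpu"}

def requested_mode(session_mode, env=os.environ):
    """Explicit session override > environment variable > 'auto'."""
    if session_mode is not None:          # set_execution_mode() wins
        return session_mode
    env_value = env.get("VIBESPATIAL_EXECUTION_MODE", "").lower()
    return env_value if env_value in MODES else "auto"
```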

vibespatial.has_gpu_runtime() bool
vibespatial.select_runtime(requested: ExecutionMode | str = ExecutionMode.AUTO) RuntimeSelection
vibespatial.set_execution_mode(mode: ExecutionMode | str | None) None

Override the session execution mode. Pass None to clear.

Also invalidates the adaptive runtime snapshot cache so the planner re-evaluates on the next dispatch.

class vibespatial.AdaptivePlan
runtime_selection: vibespatial.runtime._runtime.RuntimeSelection
dispatch_decision: vibespatial.runtime.crossover.DispatchDecision
crossover_policy: vibespatial.runtime.crossover.CrossoverPolicy
device_profile: vibespatial.runtime.precision.DevicePrecisionProfile
precision_plan: vibespatial.runtime.precision.PrecisionPlan
variant: vibespatial.runtime.kernel_registry.KernelVariantSpec | None
chunk_rows: int
replan_after_chunk: bool
diagnostics: tuple[str, Ellipsis]
property requested: vibespatial.runtime._runtime.ExecutionMode
property selected: vibespatial.runtime._runtime.ExecutionMode
property reason: str
class vibespatial.DeviceSnapshot
backend: MonitoringBackend
gpu_available: bool
device_profile: vibespatial.runtime.precision.DevicePrecisionProfile
sm_utilization_pct: float | None = None
memory_utilization_pct: float | None = None
device_name: str = 'unknown'
reason: str = ''
property underutilized: bool
property under_memory_pressure: bool
class vibespatial.MonitoringBackend

Enum where members are also (and must be) strings

UNAVAILABLE = 'unavailable'
NVML = 'nvml'
class vibespatial.MonitoringSample
sm_utilization_pct: float
memory_utilization_pct: float
device_name: str = 'unknown'
class vibespatial.WorkloadProfile
row_count: int
geometry_families: tuple[str, Ellipsis] = ()
mixed_geometry: bool = False
current_residency: vibespatial.runtime.residency.Residency
coordinate_stats: vibespatial.runtime.precision.CoordinateStats | None = None
is_streaming: bool = False
chunk_index: int = 0
avg_vertices_per_geometry: float = 0.0
workload_shape: vibespatial.runtime.crossover.WorkloadShape | None = None
vibespatial.capture_device_snapshot(*, probe: MonitoringProbe | None = None, gpu_available: bool | None = None, device_profile: vibespatial.runtime.precision.DevicePrecisionProfile | None = None) DeviceSnapshot
vibespatial.get_cached_snapshot() DeviceSnapshot

Return a session-scoped DeviceSnapshot, creating it on first call.

vibespatial.invalidate_snapshot_cache() None

Clear the cached snapshot so the next call to get_cached_snapshot() re-probes.

vibespatial.plan_adaptive_execution(*, kernel_name: str, kernel_class: vibespatial.runtime.precision.KernelClass | str, workload: WorkloadProfile, requested_mode: vibespatial.runtime._runtime.ExecutionMode | str = ExecutionMode.AUTO, requested_precision: vibespatial.runtime.precision.PrecisionMode | str = PrecisionMode.AUTO, device_snapshot: DeviceSnapshot | None = None, variants: tuple[vibespatial.runtime.kernel_registry.KernelVariantSpec, Ellipsis] | None = None) AdaptivePlan
vibespatial.plan_dispatch_selection(*, kernel_name: str, kernel_class: vibespatial.runtime.precision.KernelClass | str, row_count: int, requested_mode: vibespatial.runtime._runtime.ExecutionMode | str = ExecutionMode.AUTO, requested_precision: vibespatial.runtime.precision.PrecisionMode | str = PrecisionMode.AUTO, precision_kernel_class: vibespatial.runtime.precision.KernelClass | str | None = None, geometry_families: tuple[str, Ellipsis] = (), mixed_geometry: bool = False, current_residency: vibespatial.runtime.residency.Residency = Residency.HOST, coordinate_stats: vibespatial.runtime.precision.CoordinateStats | None = None, is_streaming: bool = False, chunk_index: int = 0, gpu_available: bool | None = None, workload_shape: vibespatial.runtime.crossover.WorkloadShape | None = None) AdaptivePlan

Plan dispatch while preserving compatibility with RuntimeSelection-style access.

vibespatial.plan_kernel_dispatch(*, kernel_name: str, kernel_class: vibespatial.runtime.precision.KernelClass | str, row_count: int, requested_mode: vibespatial.runtime._runtime.ExecutionMode | str = ExecutionMode.AUTO, requested_precision: vibespatial.runtime.precision.PrecisionMode | str = PrecisionMode.AUTO, geometry_families: tuple[str, ...] = (), mixed_geometry: bool = False, current_residency: vibespatial.runtime.residency.Residency = Residency.HOST, coordinate_stats: vibespatial.runtime.precision.CoordinateStats | None = None, is_streaming: bool = False, chunk_index: int = 0, gpu_available: bool | None = None, workload_shape: vibespatial.runtime.crossover.WorkloadShape | None = None) AdaptivePlan

Plan kernel dispatch with a cached device snapshot.

This is the recommended entry point for all GPU dispatch decisions. It retrieves (or creates) the session-scoped DeviceSnapshot, builds a WorkloadProfile, and delegates to plan_adaptive_execution().
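The three-step flow (cached snapshot, then workload profile, then adaptive plan) can be sketched as follows. All names are illustrative stand-ins, not the library's actual signatures:

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    gpu_available: bool

@dataclass
class Workload:
    row_count: int

def get_cached_snapshot_stub():
    # Stand-in: the real call probes the device once per session.
    return Snapshot(gpu_available=False)

def plan_adaptive_stub(workload, snapshot):
    # Stand-in planner: GPU only when present and the workload is large.
    if snapshot.gpu_available and workload.row_count >= 100_000:
        return "gpu"
    return "cpu"

def plan_kernel_dispatch_stub(row_count):
    snapshot = get_cached_snapshot_stub()        # 1. get (or create) snapshot
    workload = Workload(row_count=row_count)     # 2. build workload profile
    return plan_adaptive_stub(workload, snapshot)  # 3. delegate to planner
```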

vibespatial.DEFAULT_BROADCAST_CROSSOVER_POLICIES: dict[vibespatial.runtime.precision.KernelClass, int]
vibespatial.DEFAULT_CROSSOVER_POLICIES: dict[vibespatial.runtime.precision.KernelClass, int]
class vibespatial.CrossoverPolicy

Per-kernel crossover thresholds for AUTO dispatch.

auto_min_rows is the pairwise threshold (left and right have the same length). broadcast_min_rows is an optional lower threshold for broadcast workload shapes (BROADCAST_RIGHT / SCALAR_RIGHT) where the right-side geometry fits in L1 cache and is reused N times, making GPU profitable at much smaller N.

kernel_name: str
kernel_class: vibespatial.runtime.precision.KernelClass
auto_min_rows: int
reason: str
broadcast_min_rows: int | None = None
class vibespatial.DispatchDecision

Enum where members are also (and must be) strings

CPU = 'cpu'
GPU = 'gpu'
vibespatial.default_crossover_policy(kernel_name: str, kernel_class: vibespatial.runtime.precision.KernelClass | str) CrossoverPolicy
vibespatial.select_dispatch_for_rows(*, requested_mode: vibespatial.runtime._runtime.ExecutionMode | str, row_count: int, policy: CrossoverPolicy, gpu_available: bool, workload_shape: WorkloadShape | None = None) DispatchDecision

Select CPU or GPU execution based on row count and crossover policy.

When workload_shape is BROADCAST_RIGHT or SCALAR_RIGHT, the effective threshold is policy.broadcast_min_rows (or policy.auto_min_rows // 10 if the policy does not set a broadcast threshold). This reflects the fact that broadcast workloads have perfect right-side data locality and benefit from GPU execution at much smaller N than pairwise workloads.
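The threshold selection described above reduces to a few lines. This is a sketch with illustrative names; the auto_min_rows // 10 fallback divisor is taken from the text:

```python
def effective_min_rows(auto_min_rows, broadcast_min_rows, workload_shape):
    """Pick the AUTO-dispatch row threshold for a crossover policy."""
    if workload_shape in ("broadcast_right", "scalar_right"):
        # Broadcast shapes reuse the right side N times, so the GPU becomes
        # profitable at a much smaller N.
        if broadcast_min_rows is not None:
            return broadcast_min_rows
        return auto_min_rows // 10
    return auto_min_rows

def select_dispatch(row_count, threshold, gpu_available):
    """CPU unless a GPU is present and the workload clears the threshold."""
    if gpu_available and row_count >= threshold:
        return "gpu"
    return "cpu"
```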

class vibespatial.DispatchEvent
surface: str
operation: str
requested: vibespatial.runtime._runtime.ExecutionMode
selected: vibespatial.runtime._runtime.ExecutionMode
implementation: str
reason: str
detail: str = ''
to_dict() dict[str, Any]
vibespatial.clear_dispatch_events() None
vibespatial.get_dispatch_events(*, clear: bool = False) list[DispatchEvent]
vibespatial.record_dispatch_event(*, surface: str, operation: str, implementation: str, reason: str, detail: str = '', requested: vibespatial.runtime._runtime.ExecutionMode | str = ExecutionMode.AUTO, selected: vibespatial.runtime._runtime.ExecutionMode | str = ExecutionMode.CPU) DispatchEvent
vibespatial.TRACE_WARNINGS_ENV_VAR = 'VIBESPATIAL_TRACE_WARNINGS'
class vibespatial.ExecutionTraceContext
pipeline: str
steps: list[TraceStep] = []
transfers: list[TraceTransfer] = []
record_step(step: TraceStep) None
record_transfer(transfer: TraceTransfer) None
summary() dict[str, Any]
exception vibespatial.VibeTraceWarning

Warning category for execution-trace diagnostics.

vibespatial.execution_trace(pipeline: str)
vibespatial.get_active_trace() ExecutionTraceContext | None
vibespatial.STRICT_NATIVE_ENV_VAR = 'VIBESPATIAL_STRICT_NATIVE'
class vibespatial.FallbackEvent
surface: str
requested: vibespatial.runtime._runtime.ExecutionMode
selected: vibespatial.runtime._runtime.ExecutionMode
reason: str
detail: str = ''
pipeline: str = ''
d2h_transfer: bool = False
to_dict() dict[str, Any]
exception vibespatial.StrictNativeFallbackError

Raised when a CPU fallback occurs while strict native mode is enabled.

vibespatial.clear_fallback_events() None
vibespatial.get_fallback_events(*, clear: bool = False) list[FallbackEvent]
vibespatial.record_fallback_event(*, surface: str, reason: str, detail: str = '', requested: vibespatial.runtime._runtime.ExecutionMode | str = ExecutionMode.AUTO, selected: vibespatial.runtime._runtime.ExecutionMode | str = ExecutionMode.CPU, pipeline: str = '', d2h_transfer: bool = False) FallbackEvent
vibespatial.strict_native_mode_enabled() bool
class vibespatial.FusionPlan
stages: tuple[FusionStage, ...]
peak_memory_target_ratio: float
reason: str
class vibespatial.FusionStage
steps: tuple[PipelineStep, ...]
disposition: IntermediateDisposition
reason: str
class vibespatial.IntermediateDisposition

Enum where members are also (and must be) strings

EPHEMERAL = 'ephemeral'
PERSIST = 'persist'
BOUNDARY = 'boundary'
class vibespatial.PipelineStep
name: str
kind: StepKind
output_name: str
output_rows_follow_input: bool = True
reusable_output: bool = False
materializes_host_output: bool = False
requires_stable_row_order: bool = False
class vibespatial.StepKind

Enum where members are also (and must be) strings

GEOMETRY = 'geometry'
DERIVED = 'derived'
FILTER = 'filter'
ORDERING = 'ordering'
INDEX = 'index'
MATERIALIZATION = 'materialization'
RASTER = 'raster'
vibespatial.plan_fusion(steps: tuple[PipelineStep, ...] | list[PipelineStep]) FusionPlan
class vibespatial.MaterializationBoundary

Enum where members are also (and must be) strings

USER_EXPORT = 'user-export'
INTERNAL_HOST_CONVERSION = 'internal-host-conversion'
DEBUG = 'debug'
class vibespatial.MaterializationEvent
surface: str
boundary: MaterializationBoundary
reason: str
operation: str = ''
detail: str = ''
pipeline: str = ''
dataset: str = ''
stage: str = ''
stage_category: str = ''
d2h_transfer: bool = False
strict_disallowed: bool = False
to_dict() dict[str, Any]
exception vibespatial.StrictNativeMaterializationError

Raised when a disallowed host materialization occurs while strict native mode is enabled.

vibespatial.clear_materialization_events() None
vibespatial.get_materialization_events(*, clear: bool = False) list[MaterializationEvent]
vibespatial.record_materialization_event(*, surface: str, boundary: MaterializationBoundary | str, reason: str, operation: str = '', detail: str = '', pipeline: str = '', dataset: str = '', stage: str = '', stage_category: str = '', d2h_transfer: bool = False, strict_disallowed: bool = False) MaterializationEvent
vibespatial.NULL_BOUNDS
class vibespatial.GeometryPresence

Enum where members are also (and must be) strings

NULL = 'null'
EMPTY = 'empty'
VALUE = 'value'
class vibespatial.GeometrySemantics
presence: GeometryPresence
geom_type: str | None = None
vibespatial.classify_geometry(value: Any) GeometrySemantics
vibespatial.is_null_like(value: Any) bool
vibespatial.measurement_result_for_geometry(value: Any, *, kind: str) float | tuple[float, float, float, float]
vibespatial.predicate_result_for_pair(left: Any, right: Any) bool | None
vibespatial.unary_result_for_missing_input(value: Any) None
vibespatial.DEFAULT_CONSUMER_PROFILE
vibespatial.DEFAULT_DATACENTER_PROFILE
class vibespatial.CompensationMode

Enum where members are also (and must be) strings

NONE = 'none'
CENTERED = 'centered'
KAHAN = 'kahan'
DOUBLE_SINGLE = 'double-single'
class vibespatial.CoordinateStats
max_abs_coord: float = 0.0
span: float = 0.0
property needs_centering: bool
class vibespatial.DevicePrecisionProfile
name: str
fp64_to_fp32_ratio: float
property favors_native_fp64: bool
class vibespatial.KernelClass

Enum where members are also (and must be) strings

COARSE = 'coarse'
METRIC = 'metric'
PREDICATE = 'predicate'
CONSTRUCTIVE = 'constructive'
class vibespatial.PrecisionMode

Enum where members are also (and must be) strings

AUTO = 'auto'
FP32 = 'fp32'
FP64 = 'fp64'
class vibespatial.PrecisionPlan
storage_precision: PrecisionMode
compute_precision: PrecisionMode
kernel_class: KernelClass
compensation: CompensationMode
refinement: RefinementMode
center_coordinates: bool
reason: str
class vibespatial.RefinementMode

Enum where members are also (and must be) strings

NONE = 'none'
SELECTIVE_FP64 = 'selective-fp64'
EXACT = 'exact'
vibespatial.normalize_precision_mode(value: PrecisionMode | str) PrecisionMode
vibespatial.select_precision_plan(*, runtime_selection: vibespatial.runtime._runtime.RuntimeSelection, kernel_class: KernelClass, requested: PrecisionMode | str = PrecisionMode.AUTO, coordinate_stats: CoordinateStats | None = None, device_profile: DevicePrecisionProfile | None = None) PrecisionPlan
class vibespatial.Residency

Enum where members are also (and must be) strings

HOST = 'host'
DEVICE = 'device'
class vibespatial.ResidencyPlan
current: Residency
target: Residency
trigger: TransferTrigger
transfer_required: bool
visible_to_user: bool
zero_copy_eligible: bool
reason: str
class vibespatial.TransferTrigger

Enum where members are also (and must be) strings

USER_MATERIALIZATION = 'user-materialization'
EXPLICIT_RUNTIME_REQUEST = 'explicit-runtime-request'
UNSUPPORTED_GPU_PATH = 'unsupported-gpu-path'
INTEROP_VIEW = 'interop-view'
vibespatial.select_residency_plan(*, current: Residency | str, target: Residency | str, trigger: TransferTrigger | str) ResidencyPlan
class vibespatial.PredicateFallback

Enum where members are also (and must be) strings

NONE = 'none'
SELECTIVE_FP64 = 'selective-fp64'
EXPANSION_ARITHMETIC = 'expansion-arithmetic'
RATIONAL_RECONSTRUCTION = 'rational-reconstruction'
class vibespatial.RobustnessGuarantee

Enum where members are also (and must be) strings

EXACT = 'exact'
BOUNDED_ERROR = 'bounded-error'
BEST_EFFORT = 'best-effort'
class vibespatial.RobustnessPlan
kernel_class: vibespatial.runtime.precision.KernelClass
guarantee: RobustnessGuarantee
predicate_fallback: PredicateFallback
topology_policy: TopologyPolicy
handles_nulls: bool
handles_empties: bool
reason: str
class vibespatial.TopologyPolicy

Enum where members are also (and must be) strings

PRESERVE = 'preserve'
SNAP_GRID = 'snap-grid'
BEST_EFFORT = 'best-effort'
vibespatial.select_robustness_plan(*, kernel_class: vibespatial.runtime.precision.KernelClass, precision_plan: vibespatial.runtime.precision.PrecisionPlan, null_state: vibespatial.runtime.nulls.GeometryPresence | None = None, empty_state: vibespatial.runtime.nulls.GeometryPresence | None = None) RobustnessPlan
class vibespatial.BoundsPairBenchmark
dataset: str
rows: int
tile_size: int
elapsed_seconds: float
pairs_examined: int
candidate_pairs: int
class vibespatial.CandidatePairs

MBR candidate pair result with optional device-resident arrays.

When produced by the GPU path, _device_left_indices and _device_right_indices hold CuPy device arrays. The public left_indices and right_indices properties lazily materialise host (NumPy) arrays on first access, following the same pattern as FlatSpatialIndex.

left_bounds: numpy.ndarray
right_bounds: numpy.ndarray
pairs_examined: int
tile_size: int
same_input: bool
property left_indices: numpy.ndarray

Lazily materialise host left_indices from device (ADR-0005).

property right_indices: numpy.ndarray

Lazily materialise host right_indices from device (ADR-0005).

property device_left_indices

CuPy device array of left indices, or None if CPU-produced.

property device_right_indices

CuPy device array of right indices, or None if CPU-produced.

property count: int
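The lazy device-to-host materialisation pattern shared by CandidatePairs and FlatSpatialIndex can be sketched with NumPy standing in for the device side (the real code calls cp.asnumpy on a CuPy array; class and field names here are illustrative):

```python
import numpy as np

class LazyPairs:
    """Sketch of the device-first result pattern (ADR-0005 style)."""

    def __init__(self, device_left=None, host_left=None):
        self._device_left = device_left   # CuPy array on the GPU path
        self._host_left = host_left       # NumPy array on the CPU path

    @property
    def left_indices(self):
        """Materialise the host copy on first access, then cache it."""
        if self._host_left is None:
            # Real code: self._host_left = cp.asnumpy(self._device_left)
            self._host_left = np.asarray(self._device_left)
        return self._host_left

    @property
    def device_left_indices(self):
        """Device array, or None when the result was CPU-produced."""
        return self._device_left
```

GPU-only consumers read device_left_indices directly and never trigger the copy; host consumers pay for the transfer exactly once.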
class vibespatial.FlatSpatialIndex
geometry_array: vibespatial.geometry.owned.OwnedGeometryArray
total_bounds: tuple[float, float, float, float]
regular_grid: RegularGridRectIndex | None = None
device_morton_keys: object = None
device_order: object = None
device_bounds: object = None
property bounds: numpy.ndarray

Lazily materialise host bounds for CPU/public compatibility paths.

property order: numpy.ndarray

Lazily materialise host order array from device (ADR-0005).

property morton_keys: numpy.ndarray

Lazily materialise host morton_keys array from device (ADR-0005).

property size: int
query_bounds(bounds: tuple[float, float, float, float]) numpy.ndarray
query(other: vibespatial.geometry.owned.OwnedGeometryArray, *, tile_size: int = COARSE_BOUNDS_TILE_SIZE) CandidatePairs
class vibespatial.SegmentCandidatePairs

Segment candidate pairs with lazy device-to-host materialization.

When produced by the GPU path, _device_* fields hold CuPy device arrays and _host_* fields are None. The public properties lazily call cp.asnumpy() on first host access, following the CandidatePairs pattern (ADR-0005).

pairs_examined: int
property left_rows: numpy.ndarray

Lazily materialise host left_rows from device (ADR-0005).

property left_segments: numpy.ndarray

Lazily materialise host left_segments from device (ADR-0005).

property right_rows: numpy.ndarray

Lazily materialise host right_rows from device (ADR-0005).

property right_segments: numpy.ndarray

Lazily materialise host right_segments from device (ADR-0005).

property device_left_rows

CuPy device array of left row indices, or None if CPU-produced.

property device_left_segments

CuPy device array of left segment indices, or None if CPU-produced.

property device_right_rows

CuPy device array of right row indices, or None if CPU-produced.

property device_right_segments

CuPy device array of right segment indices, or None if CPU-produced.

property count: int
class vibespatial.SegmentFilterBenchmark
rows_left: int
rows_right: int
naive_segment_pairs: int
filtered_segment_pairs: int
elapsed_seconds: float
class vibespatial.SegmentMBRTable

Segment MBR table with optional device-resident arrays.

When produced by the GPU path, arrays are CuPy device arrays and residency is Residency.DEVICE. The public properties row_indices, segment_indices, and bounds return the underlying arrays as-is (device or host). Use to_host() to get a copy with NumPy arrays on the host side.

row_indices: object
segment_indices: object
bounds: object
residency: vibespatial.runtime.residency.Residency
property count: int
to_host() SegmentMBRTable

Return a host-resident copy (NumPy arrays).

If already host-resident, returns self.
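The to_host() contract (copy when device-resident, return self when already on the host) can be sketched as follows, with a string standing in for the Residency enum and np.asarray standing in for cp.asnumpy:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MBRTable:
    bounds: object
    residency: str  # "host" or "device" (stand-in for Residency)

    def to_host(self):
        """Return a host-resident copy; return self if already host."""
        if self.residency == "host":
            return self
        # Real code calls cp.asnumpy() on each device array.
        return MBRTable(bounds=np.asarray(self.bounds), residency="host")
```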

vibespatial.benchmark_bounds_pairs(geometry_array: vibespatial.geometry.owned.OwnedGeometryArray, *, dataset: str, tile_size: int = COARSE_BOUNDS_TILE_SIZE) BoundsPairBenchmark
vibespatial.benchmark_segment_filter(left: vibespatial.geometry.owned.OwnedGeometryArray, right: vibespatial.geometry.owned.OwnedGeometryArray, *, tile_size: int = SEGMENT_TILE_SIZE) SegmentFilterBenchmark
vibespatial.build_flat_spatial_index(geometry_array: vibespatial.geometry.owned.OwnedGeometryArray, *, runtime_selection: vibespatial.runtime.RuntimeSelection | None = None) FlatSpatialIndex
vibespatial.extract_segment_mbrs(geometry_array: vibespatial.geometry.owned.OwnedGeometryArray) SegmentMBRTable

Extract per-segment MBRs from all line/polygon geometries.

Dispatches to GPU when available, falling back to CPU otherwise. The GPU path returns device-resident CuPy arrays (no D->H transfer).

vibespatial.generate_bounds_pairs(left: vibespatial.geometry.owned.OwnedGeometryArray, right: vibespatial.geometry.owned.OwnedGeometryArray | None = None, *, tile_size: int = COARSE_BOUNDS_TILE_SIZE, include_self: bool = False) CandidatePairs
vibespatial.generate_segment_mbr_pairs(left: vibespatial.geometry.owned.OwnedGeometryArray, right: vibespatial.geometry.owned.OwnedGeometryArray, *, tile_size: int = SEGMENT_TILE_SIZE) SegmentCandidatePairs

Generate candidate segment pairs by MBR overlap filtering.

Dispatches to GPU when available. The GPU path uses the existing sweep-sort overlap kernel (_generate_bounds_pairs_gpu) on segment bounds, returning device-resident CuPy arrays (no eager D->H transfer).
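The MBR-overlap filtering idea behind this function can be sketched on the host with a simple sort-and-sweep over x-intervals. This is illustrative only; the library's kernel is a GPU sweep-sort:

```python
def mbr_overlap_pairs(left_bounds, right_bounds):
    """Return (i, j) pairs whose rectangles (xmin, ymin, xmax, ymax) overlap.

    Sorts right rectangles by xmin, then sweeps each left rectangle only
    across candidates whose x-interval can still intersect it.
    """
    order = sorted(range(len(right_bounds)), key=lambda j: right_bounds[j][0])
    pairs = []
    for i, (lx0, ly0, lx1, ly1) in enumerate(left_bounds):
        for j in order:
            rx0, ry0, rx1, ry1 = right_bounds[j]
            if rx0 > lx1:
                break  # sorted by xmin: nothing further can overlap
            if rx1 >= lx0 and ry0 <= ly1 and ry1 >= ly0:
                pairs.append((i, j))
    return pairs
```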

class vibespatial.SegmentIntersectionBenchmark
rows_left: int
rows_right: int
candidate_pairs: int
disjoint_pairs: int
proper_pairs: int
touch_pairs: int
overlap_pairs: int
ambiguous_pairs: int
elapsed_seconds: float
class vibespatial.SegmentIntersectionCandidates
left_rows: numpy.ndarray
left_segments: numpy.ndarray
left_lookup: numpy.ndarray
right_rows: numpy.ndarray
right_segments: numpy.ndarray
right_lookup: numpy.ndarray
pairs_examined: int
tile_size: int
property count: int
class vibespatial.SegmentIntersectionKind

Enum where members are also (and must be) ints

DISJOINT = 0
PROPER = 1
TOUCH = 2
OVERLAP = 3
class vibespatial.SegmentIntersectionResult

Segment intersection results with lazy host materialization.

When produced by the GPU pipeline, all 14 result arrays live in device_state and host NumPy arrays are lazily copied on first property access. GPU-only consumers (e.g. build_gpu_split_events) that read only device_state, candidate_pairs, count, runtime_selection, precision_plan, and robustness_plan never trigger device-to-host copies.

candidate_pairs: int
runtime_selection: vibespatial.runtime.RuntimeSelection
precision_plan: vibespatial.runtime.precision.PrecisionPlan
robustness_plan: vibespatial.runtime.robustness.RobustnessPlan
device_state: SegmentIntersectionDeviceState | None = None
property left_rows: numpy.ndarray
property left_segments: numpy.ndarray
property left_lookup: numpy.ndarray
property right_rows: numpy.ndarray
property right_segments: numpy.ndarray
property right_lookup: numpy.ndarray
property kinds: numpy.ndarray
property point_x: numpy.ndarray
property point_y: numpy.ndarray
property overlap_x0: numpy.ndarray
property overlap_y0: numpy.ndarray
property overlap_x1: numpy.ndarray
property overlap_y1: numpy.ndarray
property ambiguous_rows: numpy.ndarray
property count: int
kind_names() list[str]
class vibespatial.SegmentLocalEventSummary

Per-row exact local-event summary derived from segment intersections.

runtime_selection: vibespatial.runtime.RuntimeSelection
precision_plan: vibespatial.runtime.precision.PrecisionPlan
robustness_plan: vibespatial.runtime.robustness.RobustnessPlan
candidate_pairs: int
point_intersection_count: int
parallel_or_colinear_candidate_count: int
row_point_intersection_counts: numpy.ndarray
exact_event_counts: numpy.ndarray
exact_interval_upper_bounds: numpy.ndarray
property max_exact_events: int
class vibespatial.SegmentTable
row_indices: numpy.ndarray
part_indices: numpy.ndarray
ring_indices: numpy.ndarray
segment_indices: numpy.ndarray
x0: numpy.ndarray
y0: numpy.ndarray
x1: numpy.ndarray
y1: numpy.ndarray
bounds: numpy.ndarray
property count: int
vibespatial.benchmark_segment_intersections(left: vibespatial.geometry.owned.OwnedGeometryArray, right: vibespatial.geometry.owned.OwnedGeometryArray, *, tile_size: int = SEGMENT_TILE_SIZE, dispatch_mode: vibespatial.runtime.ExecutionMode | str = ExecutionMode.AUTO) SegmentIntersectionBenchmark
vibespatial.classify_segment_intersections(left: vibespatial.geometry.owned.OwnedGeometryArray, right: vibespatial.geometry.owned.OwnedGeometryArray, *, candidate_pairs: SegmentIntersectionCandidates | None = None, tile_size: int = SEGMENT_TILE_SIZE, dispatch_mode: vibespatial.runtime.ExecutionMode | str = ExecutionMode.AUTO, precision: vibespatial.runtime.precision.PrecisionMode | str = PrecisionMode.AUTO, _cached_left_device_segments: DeviceSegmentTable | None = None, _cached_right_device_segments: DeviceSegmentTable | None = None, _require_same_row: bool = False, _use_same_row_fast_path: bool = True, _collect_ambiguous_rows: bool = True) SegmentIntersectionResult

Classify all segment-segment intersections between two geometry arrays.

Parameters

left, right : OwnedGeometryArray

Input geometry arrays (linestring, polygon, or multi-variants).

candidate_pairs : SegmentIntersectionCandidates, optional

Pre-computed candidate pairs. If None, candidates are generated internally (a GPU-native O(n log n) path in GPU mode, a tiled CPU path otherwise).

tile_size : int

Tile size for CPU candidate generation (ignored in GPU mode).

dispatch_mode : ExecutionMode

Force GPU, CPU, or AUTO dispatch.

precision : PrecisionMode

Force fp32, fp64, or AUTO precision.

_cached_left_device_segments : DeviceSegmentTable, optional

Pre-extracted left-side device segments for reuse.

_cached_right_device_segments : DeviceSegmentTable, optional

Pre-extracted right-side device segments for reuse (lyy.15).

Returns

SegmentIntersectionResult

Classification of all candidate segment pairs.
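The DISJOINT / PROPER / TOUCH classification of a candidate segment pair can be sketched with orientation (cross-product sign) tests. This is an illustration of the geometric idea, not the library's kernel, and the colinear-OVERLAP cases are omitted for brevity:

```python
def orient(ax, ay, bx, by, cx, cy):
    """Sign of the cross product (b - a) x (c - a)."""
    v = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (v > 0) - (v < 0)

def classify(p, q, r, s):
    """Classify segment pq against rs: 'disjoint', 'proper', or 'touch'."""
    o1 = orient(*p, *q, *r)
    o2 = orient(*p, *q, *s)
    o3 = orient(*r, *s, *p)
    o4 = orient(*r, *s, *q)
    if o1 != o2 and o3 != o4:
        if 0 in (o1, o2, o3, o4):
            return "touch"   # an endpoint lies on the other segment
        return "proper"      # interiors cross at a single point
    return "disjoint"        # (colinear overlap handling omitted)
```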

vibespatial.extract_segments(geometry_array: vibespatial.geometry.owned.OwnedGeometryArray) SegmentTable

Extract segments from a geometry array on the CPU (legacy path).

vibespatial.generate_segment_candidates(left: vibespatial.geometry.owned.OwnedGeometryArray, right: vibespatial.geometry.owned.OwnedGeometryArray, *, tile_size: int = SEGMENT_TILE_SIZE) SegmentIntersectionCandidates
vibespatial.summarize_exact_local_events(left: vibespatial.geometry.owned.OwnedGeometryArray, right: vibespatial.geometry.owned.OwnedGeometryArray, *, candidate_pairs: SegmentIntersectionCandidates | None = None, tile_size: int = SEGMENT_TILE_SIZE, dispatch_mode: vibespatial.runtime.ExecutionMode | str = ExecutionMode.AUTO, precision: vibespatial.runtime.precision.PrecisionMode | str = PrecisionMode.AUTO, _cached_right_device_segments: DeviceSegmentTable | None = None, _require_same_row: bool = False) SegmentLocalEventSummary

Summarize per-row exact local-event counts for overlay-style workloads.

This is a reusable bridge between segment intersection classification and later topology stages. It combines segment endpoints with exact point-intersection outputs to produce stable row-local exact-event counts and interval upper bounds without duplicating that logic in each overlay consumer.