vibespatial¶
Submodules¶
Attributes¶
Exceptions¶
- Base class for warnings generated by user code.
- Unspecified run-time error.
Classes¶
- A GeoDataFrame object is a pandas.DataFrame that has one or more columns containing geometry.
- A Series object designed to store shapely geometry objects.
- Result of a rectangle clip operation.
- Result of GPU make_valid repair.
- Result of a buffer kernel invocation.
- Columnar geometry storage with optional device-resident metadata.
- MBR candidate pair result with optional device-resident arrays.
- Segment candidate pairs with lazy device-to-host materialization.
- Segment MBR table with optional device-resident arrays.
- Segment intersection results with lazy host materialization.
- Several enums where members are also (and must be) strings, and one enum where members are also (and must be) ints.
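Several of the classes above share the stock description "Enum where members are also (and must be) strings", i.e. they are str-backed enums. A minimal stdlib sketch of how such an enum behaves (the class and member names here are hypothetical, not vibespatial's):

```python
from enum import Enum

class GeometryKind(str, Enum):  # hypothetical name, for illustration only
    POINT = "point"
    POLYGON = "polygon"

# Members compare equal to, and are instances of, plain strings,
# so they can be passed anywhere a string is expected.
assert GeometryKind.POINT == "point"
assert isinstance(GeometryKind.POLYGON, str)
# Lookup by value also works:
assert GeometryKind("point") is GeometryKind.POINT
```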
Functions¶
- List layers available in a file.
- Generate GeometryArray of shapely Point geometries from x, y(, z) coordinates.
- Load a Feather object from the file path, returning a GeoDataFrame.
- Read a vector file into a GeoDataFrame.
- Read a GeoParquet file into a GeoDataFrame.
- Spatial join of two GeoDataFrames based on the distance between their geometries.
- GPU-resident batch repair of invalid polygon geometries (Phase 16).
- Run make_valid and return the full MakeValidResult.
- Validate and repair geometries using the compact-invalid-row pattern (ADR-0019).
- Return the session-wide requested execution mode.
- Override the session execution mode. Pass None to clear.
- Return a session-scoped DeviceSnapshot, creating it on first call.
- Clear the cached snapshot so the next call to get_cached_snapshot() re-probes.
- Thin wrapper: plan dispatch and return just the RuntimeSelection.
- Plan kernel dispatch with a cached device snapshot.
- Extract per-segment MBRs from all line/polygon geometries.
- Generate candidate segment pairs by MBR overlap filtering.
- Classify all segment-segment intersections between two geometry arrays.
- Extract segments from a geometry array on CPU (legacy path).
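The session-scoped snapshot caching described above (a cached DeviceSnapshot, created on first call and re-probed after a clear) can be sketched in plain Python. This is illustrative only; `DeviceSnapshot`, `get_cached_snapshot`, and `clear_cached_snapshot` are modeled on the summary lines and vibespatial's real signatures may differ:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeviceSnapshot:
    probe_id: int  # stand-in for real device capability fields

_snapshot = None
_probe_count = 0

def _probe_device() -> DeviceSnapshot:
    """Simulate an expensive device probe (hypothetical helper)."""
    global _probe_count
    _probe_count += 1
    return DeviceSnapshot(probe_id=_probe_count)

def get_cached_snapshot() -> DeviceSnapshot:
    """Return a session-scoped DeviceSnapshot, creating it on first call."""
    global _snapshot
    if _snapshot is None:
        _snapshot = _probe_device()
    return _snapshot

def clear_cached_snapshot() -> None:
    """Clear the cache so the next get_cached_snapshot() re-probes."""
    global _snapshot
    _snapshot = None

first = get_cached_snapshot()
assert get_cached_snapshot() is first      # cached: same object, no re-probe
clear_cached_snapshot()
assert get_cached_snapshot() is not first  # cleared: fresh probe
```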
Package Contents¶
- class vibespatial.GeoDataFrame(data=None, *args, geometry: Any | None = None, crs: Any | None = None, **kwargs)¶
A GeoDataFrame object is a pandas.DataFrame that has one or more columns containing geometry.
In addition to the standard DataFrame constructor arguments, GeoDataFrame also accepts the following keyword arguments:
Parameters¶
- crs : value (optional)
Coordinate Reference System of the geometry objects. Can be anything accepted by pyproj.CRS.from_user_input(), such as an authority string (eg “EPSG:4326”) or a WKT string.
- geometry : str or array-like (optional)
Value to use as the active geometry column. If str, treated as column name to use. If array-like, it will be added as new column named ‘geometry’ on the GeoDataFrame and set as the active geometry column.
Note that if geometry is a (Geo)Series with a name, the name will not be used; a column named “geometry” will still be added. To preserve the name, you can use rename_geometry() to update the geometry column name.
Examples¶
Constructing GeoDataFrame from a dictionary.
>>> from shapely.geometry import Point
>>> d = {'col1': ['name1', 'name2'], 'geometry': [Point(1, 2), Point(2, 1)]}
>>> gdf = geopandas.GeoDataFrame(d, crs="EPSG:4326")
>>> gdf
    col1     geometry
0  name1  POINT (1 2)
1  name2  POINT (2 1)
Notice that the inferred dtype of ‘geometry’ columns is geometry.
>>> gdf.dtypes
col1             str
geometry    geometry
dtype: object
Constructing GeoDataFrame from a pandas DataFrame with a column of WKT geometries:
>>> import pandas as pd
>>> d = {'col1': ['name1', 'name2'], 'wkt': ['POINT (1 2)', 'POINT (2 1)']}
>>> df = pd.DataFrame(d)
>>> gs = geopandas.GeoSeries.from_wkt(df['wkt'])
>>> gdf = geopandas.GeoDataFrame(df, geometry=gs, crs="EPSG:4326")
>>> gdf
    col1          wkt     geometry
0  name1  POINT (1 2)  POINT (1 2)
1  name2  POINT (2 1)  POINT (2 1)
See Also¶
GeoSeries : Series object designed to store shapely geometry objects
- geometry¶
- set_geometry(col, drop: bool | None = ..., inplace: Literal[True] = ..., crs: Any | None = ...) None¶
- set_geometry(col, drop: bool | None = ..., inplace: Literal[False] = ..., crs: Any | None = ...) GeoDataFrame
Set the GeoDataFrame geometry using either an existing column or the specified input. By default yields a new object.
The original geometry column is replaced with the input.
Parameters¶
- col : column label or array-like
An existing column name or values to set as the new geometry column. If values (array-like, (Geo)Series) are passed, then if they are named (Series) the new geometry column will have the corresponding name, otherwise the existing geometry column will be replaced. If there is no existing geometry column, the new geometry column will use the default name “geometry”.
- drop : boolean, default False
When specifying a named Series or an existing column name for col, controls if the previous geometry column should be dropped from the result. The default of False keeps both the old and new geometry column.
Deprecated since version 1.0.0.
- inplace : boolean, default False
Modify the GeoDataFrame in place (do not create a new object)
- crs : pyproj.CRS, optional
Coordinate system to use. The value can be anything accepted by pyproj.CRS.from_user_input(), such as an authority string (eg “EPSG:4326”) or a WKT string. If passed, overrides both DataFrame and col’s crs. Otherwise, tries to get crs from passed col values or DataFrame.
Examples¶
>>> from shapely.geometry import Point
>>> d = {'col1': ['name1', 'name2'], 'geometry': [Point(1, 2), Point(2, 1)]}
>>> gdf = geopandas.GeoDataFrame(d, crs="EPSG:4326")
>>> gdf
    col1     geometry
0  name1  POINT (1 2)
1  name2  POINT (2 1)
Passing an array:
>>> df1 = gdf.set_geometry([Point(0,0), Point(1,1)])
>>> df1
    col1     geometry
0  name1  POINT (0 0)
1  name2  POINT (1 1)
Using existing column:
>>> gdf["buffered"] = gdf.buffer(2)
>>> df2 = gdf.set_geometry("buffered")
>>> df2.geometry
0    POLYGON ((3 2, 2.99037 1.80397, 2.96157 1.6098...
1    POLYGON ((4 1, 3.99037 0.80397, 3.96157 0.6098...
Name: buffered, dtype: geometry
Returns¶
GeoDataFrame
See Also¶
GeoDataFrame.rename_geometry : rename an active geometry column
- rename_geometry(col: str, inplace: Literal[True] = ...) None¶
- rename_geometry(col: str, inplace: Literal[False] = ...) GeoDataFrame
Rename the GeoDataFrame geometry column to the specified name.
By default yields a new object.
The original geometry column is replaced with the input.
Parameters¶
- col : new geometry column label
- inplace : boolean, default False
Modify the GeoDataFrame in place (do not create a new object)
Examples¶
>>> from shapely.geometry import Point
>>> d = {'col1': ['name1', 'name2'], 'geometry': [Point(1, 2), Point(2, 1)]}
>>> df = geopandas.GeoDataFrame(d, crs="EPSG:4326")
>>> df1 = df.rename_geometry('geom1')
>>> df1.geometry.name
'geom1'
>>> df.rename_geometry('geom1', inplace=True)
>>> df.geometry.name
'geom1'
See Also¶
GeoDataFrame.set_geometry : set the active geometry
- property active_geometry_name: Any¶
Return the name of the active geometry column.
Returns a name if a GeoDataFrame has an active geometry column set, otherwise returns None. The return type is usually a string, but may be an integer, tuple or other hashable, depending on the contents of the dataframe columns.
You can also access the active geometry column using the .geometry property. You can set a GeoSeries to be an active geometry using the set_geometry() method.
Returns¶
- str or other index label supported by pandas
name of an active geometry column or None
See Also¶
GeoDataFrame.set_geometry : set the active geometry
- property crs: pyproj.CRS¶
The Coordinate Reference System (CRS) represented as a pyproj.CRS object.
Returns¶
- pyproj.CRS | None
CRS assigned to an active geometry column
Examples¶
>>> gdf.crs
<Geographic 2D CRS: EPSG:4326>
Name: WGS 84
Axis Info [ellipsoidal]:
- Lat[north]: Geodetic latitude (degree)
- Lon[east]: Geodetic longitude (degree)
Area of Use:
- name: World
- bounds: (-180.0, -90.0, 180.0, 90.0)
Datum: World Geodetic System 1984
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich
See Also¶
GeoDataFrame.set_crs : assign CRS
GeoDataFrame.to_crs : re-project to another CRS
- classmethod from_dict(data: dict, geometry=None, crs: Any | None = None, **kwargs) GeoDataFrame¶
Construct GeoDataFrame from dict of array-like or dicts by overriding DataFrame.from_dict method with geometry and crs.
Parameters¶
- data : dict
Of the form {field : array-like} or {field : dict}.
- geometry : str or array (optional)
If str, column to use as geometry. If array, will be set as ‘geometry’ column on GeoDataFrame.
- crs : str or dict (optional)
Coordinate reference system to set on the resulting frame.
- kwargs : keyword arguments
These arguments are passed to DataFrame.from_dict
Returns¶
GeoDataFrame
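A minimal usage sketch, shown with geopandas, whose GeoDataFrame.from_dict these docstrings mirror (illustrative, not vibespatial's own code):

```python
import geopandas
from shapely.geometry import Point

# Dict of the form {field : array-like}, including a geometry column.
data = {"col1": ["name1", "name2"],
        "geometry": [Point(1, 2), Point(2, 1)]}
gdf = geopandas.GeoDataFrame.from_dict(data, crs="EPSG:4326")
assert list(gdf.columns) == ["col1", "geometry"]
assert gdf.crs.to_epsg() == 4326  # CRS set on the resulting frame
```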
- classmethod from_file(filename: os.PathLike | IO, **kwargs) GeoDataFrame¶
Alternate constructor to create a GeoDataFrame from a file.
It is recommended to use geopandas.read_file() instead.
Can load a GeoDataFrame from a file in any format recognized by pyogrio. See http://pyogrio.readthedocs.io/ for details.
Parameters¶
- filename : str
File path or file handle to read from. Depending on which kwargs are included, the content of filename may vary. See pyogrio.read_dataframe() for usage details.
- kwargs : keyword arguments
These arguments are passed to pyogrio.read_dataframe(), and can be used to access multi-layer data, data stored within archives (zip files), etc.
Examples¶
>>> import geodatasets
>>> path = geodatasets.get_path('nybb')
>>> gdf = geopandas.GeoDataFrame.from_file(path)
>>> gdf
   BoroCode       BoroName     Shape_Leng    Shape_Area                                           geometry
0         5  Staten Island  330470.010332  1.623820e+09  MULTIPOLYGON (((970217.022 145643.332, 970227....
1         4         Queens  896344.047763  3.045213e+09  MULTIPOLYGON (((1029606.077 156073.814, 102957...
2         3       Brooklyn  741080.523166  1.937479e+09  MULTIPOLYGON (((1021176.479 151374.797, 102100...
3         1      Manhattan  359299.096471  6.364715e+08  MULTIPOLYGON (((981219.056 188655.316, 980940....
4         2          Bronx  464392.991824  1.186925e+09  MULTIPOLYGON (((1012821.806 229228.265, 101278...
The recommended method of reading files is geopandas.read_file():
>>> gdf = geopandas.read_file(path)
See Also¶
read_file : read file to GeoDataFrame
GeoDataFrame.to_file : write GeoDataFrame to file
- classmethod from_features(features, crs: Any | None = None, columns: collections.abc.Iterable[str] | None = None) GeoDataFrame¶
Alternate constructor to create GeoDataFrame from an iterable of features or a feature collection.
Parameters¶
- features
Iterable of features, where each element must be a feature dictionary or implement the __geo_interface__; a feature collection, where the ‘features’ key contains an iterable of features; or an object holding a feature collection that implements the __geo_interface__.
- crs : str or dict (optional)
Coordinate reference system to set on the resulting frame.
- columns : list of column names, optional
Optionally specify the column names to include in the output frame. This does not overwrite the property names of the input, but can ensure a consistent output format.
Returns¶
GeoDataFrame
Notes¶
For more information about the __geo_interface__, see https://gist.github.com/sgillies/2217756
Examples¶
>>> feature_coll = {
...     "type": "FeatureCollection",
...     "features": [
...         {
...             "id": "0",
...             "type": "Feature",
...             "properties": {"col1": "name1"},
...             "geometry": {"type": "Point", "coordinates": (1.0, 2.0)},
...             "bbox": (1.0, 2.0, 1.0, 2.0),
...         },
...         {
...             "id": "1",
...             "type": "Feature",
...             "properties": {"col1": "name2"},
...             "geometry": {"type": "Point", "coordinates": (2.0, 1.0)},
...             "bbox": (2.0, 1.0, 2.0, 1.0),
...         },
...     ],
...     "bbox": (1.0, 1.0, 2.0, 2.0),
... }
>>> df = geopandas.GeoDataFrame.from_features(feature_coll)
>>> df
      geometry   col1
0  POINT (1 2)  name1
1  POINT (2 1)  name2
- classmethod from_postgis(sql: str | sqlalchemy.text, con, geom_col: str = 'geom', crs: Any | None = None, index_col: str | list[str] | None = None, coerce_float: bool = True, parse_dates: list | dict | None = None, params: list | tuple | dict | None = None, chunksize: int | None = None) GeoDataFrame¶
Alternate constructor to create a GeoDataFrame from a SQL query containing a geometry column in WKB representation.
Parameters¶
- sql : string
- con : sqlalchemy.engine.Connection or sqlalchemy.engine.Engine
- geom_col : string, default ‘geom’
column name to convert to shapely geometries
- crs : optional
Coordinate reference system to use for the returned GeoDataFrame
- index_col : string or list of strings, optional, default: None
Column(s) to set as index (MultiIndex)
- coerce_float : boolean, default True
Attempt to convert values of non-string, non-numeric objects (like decimal.Decimal) to floating point, useful for SQL result sets
- parse_dates : list or dict, default None
List of column names to parse as dates.
Dict of {column_name: format string} where format string is strftime compatible in case of parsing string times, or is one of (D, s, ns, ms, us) in case of parsing integer timestamps.
Dict of {column_name: arg dict}, where the arg dict corresponds to the keyword arguments of pandas.to_datetime(). Especially useful with databases without native Datetime support, such as SQLite.
- params : list, tuple or dict, optional, default None
List of parameters to pass to execute method.
- chunksize : int, default None
If specified, return an iterator where chunksize is the number of rows to include in each chunk.
Examples¶
PostGIS
>>> from sqlalchemy import create_engine
>>> db_connection_url = "postgresql://myusername:mypassword@myhost:5432/mydb"
>>> con = create_engine(db_connection_url)
>>> sql = "SELECT geom, highway FROM roads"
>>> df = geopandas.GeoDataFrame.from_postgis(sql, con)
SpatiaLite
>>> sql = "SELECT ST_Binary(geom) AS geom, highway FROM roads"
>>> df = geopandas.GeoDataFrame.from_postgis(sql, con)
The recommended method of reading from PostGIS is geopandas.read_postgis():
>>> df = geopandas.read_postgis(sql, con)
See Also¶
geopandas.read_postgis : read PostGIS database to GeoDataFrame
- classmethod from_arrow(table, geometry: str | None = None, to_pandas_kwargs: dict | None = None)¶
Construct a GeoDataFrame from an Arrow table object based on GeoArrow extension types.
See https://geoarrow.org/ for details on the GeoArrow specification.
This function accepts any tabular Arrow object implementing the Arrow PyCapsule Protocol (i.e. having an __arrow_c_array__ or __arrow_c_stream__ method).
Added in version 1.0.
Parameters¶
- table : pyarrow.Table or Arrow-compatible table
Any tabular object implementing the Arrow PyCapsule Protocol (i.e. has an __arrow_c_array__ or __arrow_c_stream__ method). This table should have at least one column with a geoarrow geometry type.
- geometry : str, default None
The name of the geometry column to set as the active geometry column. If None, the first geometry column found will be used.
- to_pandas_kwargs : dict, optional
Arguments passed to the pa.Table.to_pandas method for non-geometry columns. This can be used to control the behavior of the conversion of the non-geometry columns to a pandas DataFrame. For example, you can use this to control the dtype conversion of the columns. By default, the to_pandas method is called with no additional arguments.
Returns¶
GeoDataFrame
See Also¶
GeoDataFrame.to_arrow GeoSeries.from_arrow
Examples¶
>>> import geoarrow.pyarrow as ga
>>> import pyarrow as pa
>>> table = pa.Table.from_arrays([
...     ga.as_geoarrow(
...         [None, "POLYGON ((0 0, 1 1, 0 1, 0 0))", "LINESTRING (0 0, -1 1, 0 -1)"]
...     ),
...     pa.array([1, 2, 3]),
...     pa.array(["a", "b", "c"]),
... ], names=["geometry", "id", "value"])
>>> gdf = geopandas.GeoDataFrame.from_arrow(table)
>>> gdf
                         geometry  id value
0                            None   1     a
1  POLYGON ((0 0, 1 1, 0 1, 0 0))   2     b
2    LINESTRING (0 0, -1 1, 0 -1)   3     c
- to_json(na: Literal['null', 'drop', 'keep'] = 'null', show_bbox: bool = False, drop_id: bool = False, to_wgs84: bool = False, **kwargs) str¶
Return a GeoJSON representation of the GeoDataFrame as a string.
Parameters¶
- na : {‘null’, ‘drop’, ‘keep’}, default ‘null’
Indicates how to output missing (NaN) values in the GeoDataFrame. See below.
- show_bbox : bool, optional, default: False
Include bbox (bounds) in the geojson
- drop_id : bool, default: False
Whether to retain the index of the GeoDataFrame as the id property in the generated GeoJSON. Default is False, but you may want True if the index is just arbitrary row numbers.
- to_wgs84 : bool, optional, default: False
If the CRS is set on the active geometry column it is exported as WGS84 (EPSG:4326) to meet the 2016 GeoJSON specification. Set to True to force re-projection and set to False to ignore CRS. False by default.
Notes¶
The remaining kwargs are passed to json.dumps().
Missing (NaN) values in the GeoDataFrame can be represented as follows:
- null: output the missing entries as JSON null.
- drop: remove the property from the feature. This applies to each feature individually so that features may have different properties.
- keep: output the missing entries as NaN.
If the GeoDataFrame has a defined CRS, its definition will be included in the output unless it is equal to WGS84 (default GeoJSON CRS) or not possible to represent in the URN OGC format, or unless to_wgs84=True is specified.
Examples¶
>>> from shapely.geometry import Point
>>> d = {'col1': ['name1', 'name2'], 'geometry': [Point(1, 2), Point(2, 1)]}
>>> gdf = geopandas.GeoDataFrame(d, crs="EPSG:3857")
>>> gdf
    col1     geometry
0  name1  POINT (1 2)
1  name2  POINT (2 1)
>>> gdf.to_json()
'{"type": "FeatureCollection", "features": [{"id": "0", "type": "Feature", "properties": {"col1": "name1"}, "geometry": {"type": "Point", "coordinates": [1.0, 2.0]}}, {"id": "1", "type": "Feature", "properties": {"col1": "name2"}, "geometry": {"type": "Point", "coordinates": [2.0, 1.0]}}], "crs": {"type": "name", "properties": {"name": "urn:ogc:def:crs:EPSG::3857"}}}'
Alternatively, you can write GeoJSON to file:
>>> gdf.to_file(path, driver="GeoJSON")
See Also¶
GeoDataFrame.to_file : write GeoDataFrame to file
- iterfeatures(na: str = 'null', show_bbox: bool = False, drop_id: bool = False) Generator[dict]¶
Return an iterator that yields feature dictionaries that comply with __geo_interface__.
Parameters¶
- na : str, optional
Options are {‘null’, ‘drop’, ‘keep’}, default ‘null’. Indicates how to output missing (NaN) values in the GeoDataFrame
- null: output the missing entries as JSON null
- drop: remove the property from the feature. This applies to each feature individually so that features may have different properties
- keep: output the missing entries as NaN
- show_bbox : bool, optional
Include bbox (bounds) in the geojson. Default False.
- drop_id : bool, default: False
Whether to retain the index of the GeoDataFrame as the id property in the generated GeoJSON. Default is False, but you may want True if the index is just arbitrary row numbers.
Examples¶
>>> from shapely.geometry import Point
>>> d = {'col1': ['name1', 'name2'], 'geometry': [Point(1, 2), Point(2, 1)]}
>>> gdf = geopandas.GeoDataFrame(d, crs="EPSG:4326")
>>> gdf
    col1     geometry
0  name1  POINT (1 2)
1  name2  POINT (2 1)
>>> feature = next(gdf.iterfeatures())
>>> feature
{'id': '0', 'type': 'Feature', 'properties': {'col1': 'name1'}, 'geometry': {'type': 'Point', 'coordinates': (1.0, 2.0)}}
- to_geo_dict(na: str | None = 'null', show_bbox: bool = False, drop_id: bool = False) dict¶
Return a python feature collection representation of the GeoDataFrame as a dictionary with a list of features based on the __geo_interface__ GeoJSON-like specification.
Parameters¶
- na : str, optional
Options are {‘null’, ‘drop’, ‘keep’}, default ‘null’. Indicates how to output missing (NaN) values in the GeoDataFrame
- null: output the missing entries as JSON null
- drop: remove the property from the feature. This applies to each feature individually so that features may have different properties
- keep: output the missing entries as NaN
- show_bbox : bool, optional
Include bbox (bounds) in the geojson. Default False.
- drop_id : bool, default: False
Whether to retain the index of the GeoDataFrame as the id property in the generated dictionary. Default is False, but you may want True if the index is just arbitrary row numbers.
Examples¶
>>> from shapely.geometry import Point
>>> d = {'col1': ['name1', 'name2'], 'geometry': [Point(1, 2), Point(2, 1)]}
>>> gdf = geopandas.GeoDataFrame(d)
>>> gdf
    col1     geometry
0  name1  POINT (1 2)
1  name2  POINT (2 1)
>>> gdf.to_geo_dict()
{'type': 'FeatureCollection', 'features': [{'id': '0', 'type': 'Feature', 'properties': {'col1': 'name1'}, 'geometry': {'type': 'Point', 'coordinates': (1.0, 2.0)}}, {'id': '1', 'type': 'Feature', 'properties': {'col1': 'name2'}, 'geometry': {'type': 'Point', 'coordinates': (2.0, 1.0)}}]}
See Also¶
GeoDataFrame.to_json : return a GeoDataFrame as a GeoJSON string
- to_wkb(hex: bool = False, **kwargs) pandas.DataFrame¶
Encode all geometry columns in the GeoDataFrame to WKB.
Parameters¶
- hex : bool
If true, export the WKB as a hexadecimal string. The default is to return a binary bytes object.
- kwargs
Additional keyword args will be passed to shapely.to_wkb().
Returns¶
- DataFrame
geometry columns are encoded to WKB
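The per-geometry encoding that to_wkb forwards its kwargs to can be sketched with shapely alone (a minimal sketch, assuming shapely 2.x):

```python
import shapely
from shapely.geometry import Point

pt = Point(1, 2)
wkb_bytes = shapely.to_wkb(pt)          # default: binary bytes object
wkb_hex = shapely.to_wkb(pt, hex=True)  # hex=True: hexadecimal string
assert isinstance(wkb_bytes, bytes)
assert isinstance(wkb_hex, str)
assert shapely.from_wkb(wkb_bytes).equals(pt)  # WKB round-trips losslessly
```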
- to_wkt(**kwargs) pandas.DataFrame¶
Encode all geometry columns in the GeoDataFrame to WKT.
Parameters¶
- kwargs
Keyword args will be passed to shapely.to_wkt().
Returns¶
- DataFrame
geometry columns are encoded to WKT
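Likewise for WKT: a shapely-only sketch of the keyword args that to_wkt forwards (a minimal sketch, assuming shapely 2.x):

```python
import shapely
from shapely.geometry import Point

# Default formatting trims trailing zeros:
assert shapely.to_wkt(Point(1, 2)) == "POINT (1 2)"
# rounding_precision is one of the forwarded keyword args:
wkt = shapely.to_wkt(Point(1.23456, 2), rounding_precision=2)
assert "1.23" in wkt  # coordinates rounded to 2 decimal places
```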
- to_arrow(*, index: bool | None = None, geometry_encoding: vibespatial.api.io.arrow.PARQUET_GEOMETRY_ENCODINGS = 'WKB', interleaved: bool = True, include_z: bool | None = None)¶
Encode a GeoDataFrame to GeoArrow format.
See https://geoarrow.org/ for details on the GeoArrow specification.
This function returns a generic Arrow data object implementing the Arrow PyCapsule Protocol (i.e. having an __arrow_c_stream__ method). This object can then be consumed by your Arrow implementation of choice that supports this protocol.
Added in version 1.0.
Parameters¶
- index : bool, default None
If True, always include the dataframe’s index(es) as columns in the file output. If False, the index(es) will not be written to the file. If None, the index(es) will be included as columns in the file output except RangeIndex which is stored as metadata only.
- geometry_encoding : {‘WKB’, ‘geoarrow’}, default ‘WKB’
The GeoArrow encoding to use for the data conversion.
- interleaved : bool, default True
Only relevant for ‘geoarrow’ encoding. If True, the geometries’ coordinates are interleaved in a single fixed size list array. If False, the coordinates are stored as separate arrays in a struct type.
- include_z : bool, default None
Only relevant for ‘geoarrow’ encoding (for WKB, the dimensionality of the individual geometries is preserved). If False, return 2D geometries. If True, include the third dimension in the output (if a geometry has no third dimension, the z-coordinates will be NaN). By default, will infer the dimensionality from the input geometries. Note that this inference can be unreliable with empty geometries (for a guaranteed result, it is recommended to specify the keyword).
Returns¶
- ArrowTable
A generic Arrow table object with geometry columns encoded to GeoArrow.
Examples¶
>>> from shapely.geometry import Point
>>> data = {'col1': ['name1', 'name2'], 'geometry': [Point(1, 2), Point(2, 1)]}
>>> gdf = geopandas.GeoDataFrame(data)
>>> gdf
    col1     geometry
0  name1  POINT (1 2)
1  name2  POINT (2 1)
>>> arrow_table = gdf.to_arrow()
>>> arrow_table
<geopandas.io._geoarrow.ArrowTable object at ...>
The returned data object needs to be consumed by a library implementing the Arrow PyCapsule Protocol. For example, wrapping the data as a pyarrow.Table (requires pyarrow >= 14.0):
>>> import pyarrow as pa
>>> table = pa.table(arrow_table)
>>> table
pyarrow.Table
col1: large_string
geometry: extension<geoarrow.wkb<WkbType>>
----
col1: [["name1","name2"]]
geometry: [[0101000000000000000000F03F0000000000000040,01010000000000000000000040000000000000F03F]]
- to_parquet(path: os.PathLike | IO, index: bool | None = None, compression: str = 'snappy', geometry_encoding: vibespatial.api.io.arrow.PARQUET_GEOMETRY_ENCODINGS = 'WKB', write_covering_bbox: bool = False, schema_version: vibespatial.api.io.arrow.SUPPORTED_VERSIONS_LITERAL | None = None, **kwargs) None¶
Write a GeoDataFrame to the Parquet format.
By default, all geometry columns present are serialized to WKB format in the file.
Requires ‘pyarrow’.
Added in version 0.8.
Parameters¶
- path : str, path object
- index : bool, default None
If True, always include the dataframe’s index(es) as columns in the file output. If False, the index(es) will not be written to the file. If None, the index(es) will be included as columns in the file output except RangeIndex which is stored as metadata only.
- compression : {‘snappy’, ‘gzip’, ‘brotli’, ‘lz4’, ‘zstd’, None}, default ‘snappy’
Name of the compression to use. Use None for no compression.
- geometry_encoding : {‘WKB’, ‘geoarrow’}, default ‘WKB’
The encoding to use for the geometry columns. Defaults to “WKB” for maximum interoperability. Specify “geoarrow” to use one of the native GeoArrow-based single-geometry type encodings. Note: the “geoarrow” option is part of the newer GeoParquet 1.1 specification, should be considered as experimental, and may not be supported by all readers.
- write_covering_bbox : bool, default False
Writes the bounding box column for each row entry with column name ‘bbox’. Writing a bbox column can be computationally expensive, but allows you to specify a bbox in read_parquet for filtered reading. Note: this bbox column is part of the newer GeoParquet 1.1 specification and should be considered as experimental. While writing the column is backwards compatible, using it for filtering may not be supported by all readers.
- schema_version : {‘0.1.0’, ‘0.4.0’, ‘1.0.0’, ‘1.1.0’, None}
GeoParquet specification version; if not provided, will default to latest supported stable version (1.0.0).
- kwargs
Additional keyword arguments passed to pyarrow.parquet.write_table().
Examples¶
>>> gdf.to_parquet('data.parquet')
See Also¶
GeoDataFrame.to_feather : write GeoDataFrame to feather
GeoDataFrame.to_file : write GeoDataFrame to file
- to_feather(path: os.PathLike, index: bool | None = None, compression: str | None = None, schema_version: vibespatial.api.io.arrow.SUPPORTED_VERSIONS_LITERAL | None = None, **kwargs)¶
Write a GeoDataFrame to the Feather format.
Any geometry columns present are serialized to WKB format in the file.
Requires ‘pyarrow’ >= 0.17.
Added in version 0.8.
Parameters¶
- path : str, path object
- index : bool, default None
If True, always include the dataframe’s index(es) as columns in the file output. If False, the index(es) will not be written to the file. If None, the index(es) will be included as columns in the file output except RangeIndex which is stored as metadata only.
- compression : {‘zstd’, ‘lz4’, ‘uncompressed’}, optional
Name of the compression to use. Use "uncompressed" for no compression. By default uses LZ4 if available, otherwise uncompressed.
- schema_version : {‘0.1.0’, ‘0.4.0’, ‘1.0.0’, ‘1.1.0’, None}
GeoParquet specification version; if not provided, will default to latest supported stable version (1.0.0).
- kwargs
Additional keyword arguments passed to pyarrow.feather.write_feather().
Examples¶
>>> gdf.to_feather('data.feather')
See Also¶
GeoDataFrame.to_parquet : write GeoDataFrame to parquet
GeoDataFrame.to_file : write GeoDataFrame to file
- to_file(filename: os.PathLike | IO, driver: str | None = None, schema: dict | None = None, index: bool | None = None, **kwargs)¶
Write the GeoDataFrame to a file.
By default, an ESRI shapefile is written, but any OGR data source supported by Pyogrio or Fiona can be written. A dictionary of supported OGR providers is available via:
>>> import pyogrio
>>> pyogrio.list_drivers()
Parameters¶
- filename : string
File path or file handle to write to. The path may specify a GDAL VSI scheme.
- driver : string, default None
The OGR format driver used to write the vector file. If not specified, it attempts to infer it from the file extension. If no extension is specified, it saves ESRI Shapefile to a folder.
- schema : dict, default None
If specified, the schema dictionary is passed to Fiona to better control how the file is written. If None, GeoPandas will determine the schema based on each column’s dtype. Not supported for the “pyogrio” engine.
- index : bool, default None
If True, write index into one or more columns (for MultiIndex). Default None writes the index into one or more columns only if the index is named, is a MultiIndex, or has a non-integer data type. If False, no index is written.
Added in version 0.7: Previously the index was not written.
- mode : string, default ‘w’
The write mode, ‘w’ to overwrite the existing file and ‘a’ to append. Not all drivers support appending. The drivers that support appending are listed in fiona.supported_drivers or https://github.com/Toblerity/Fiona/blob/master/fiona/drvsupport.py
- crs : pyproj.CRS, default None
If specified, the CRS is passed to Fiona to better control how the file is written. If None, GeoPandas will determine the crs based on the crs attribute of the dataframe. The value can be anything accepted by pyproj.CRS.from_user_input(), such as an authority string (eg “EPSG:4326”) or a WKT string. The keyword is not supported for the “pyogrio” engine.
- engine : str, “pyogrio” or “fiona”
The underlying library that is used to write the file. Currently, the supported options are “pyogrio” and “fiona”. Defaults to “pyogrio” if installed, otherwise tries “fiona”.
- metadata : dict[str, str], default None
Optional metadata to be stored in the file. Keys and values must be strings. Supported only for “GPKG” driver.
- **kwargs
Keyword args to be passed to the engine, and can be used to write to multi-layer data, store data within archives (zip files), etc. In case of the “pyogrio” engine, the keyword arguments are passed to pyogrio.write_dataframe. In case of the “fiona” engine, the keyword arguments are passed to fiona.open. For more information on possible keywords, type: import pyogrio; help(pyogrio.write_dataframe).
Notes¶
The format drivers will attempt to detect the encoding of your data, but may fail. In this case, the proper encoding can be specified explicitly by using the encoding keyword parameter, e.g. encoding='utf-8'.
See Also¶
GeoSeries.to_file GeoDataFrame.to_postgis : write GeoDataFrame to PostGIS database GeoDataFrame.to_parquet : write GeoDataFrame to parquet GeoDataFrame.to_feather : write GeoDataFrame to feather
Examples¶
>>> gdf.to_file('dataframe.shp')
>>> gdf.to_file('dataframe.gpkg', driver='GPKG', layer='name')
>>> gdf.to_file('dataframe.geojson', driver='GeoJSON')
With selected drivers you can also append to a file with mode=”a”:
>>> gdf.to_file('dataframe.shp', mode="a")
Using the engine-specific keyword arguments it is possible to e.g. create a spatialite file with a custom layer name:
>>> gdf.to_file( ... 'dataframe.sqlite', driver='SQLite', spatialite=True, layer='test' ... )
- set_crs(crs: Any | None = ..., epsg: int | None = ..., inplace: Literal[True] = ..., allow_override: bool = ...) None¶
- set_crs(crs: Any | None = ..., epsg: int | None = ..., inplace: Literal[False] = ..., allow_override: bool = ...) GeoDataFrame
Set the Coordinate Reference System (CRS) of the
GeoDataFrame.
If there are multiple geometry columns within the GeoDataFrame, only the CRS of the active geometry column is set.
Pass None to remove the CRS from the active geometry column.
Notes¶
The underlying geometries are not transformed to this CRS. To transform the geometries to a new CRS, use the to_crs method.
Parameters¶
- crspyproj.CRS | None, optional
The value can be anything accepted by
pyproj.CRS.from_user_input(), such as an authority string (e.g. “EPSG:4326”) or a WKT string.
- epsgint, optional
EPSG code specifying the projection.
- inplacebool, default False
If True, the CRS of the GeoDataFrame will be changed in place (while still returning the result) instead of making a copy of the GeoDataFrame.
- allow_overridebool, default False
If the GeoDataFrame already has a CRS, allow replacing the existing CRS, even when the two are not equal.
Examples¶
>>> from shapely.geometry import Point >>> d = {'col1': ['name1', 'name2'], 'geometry': [Point(1, 2), Point(2, 1)]} >>> gdf = geopandas.GeoDataFrame(d) >>> gdf col1 geometry 0 name1 POINT (1 2) 1 name2 POINT (2 1)
Setting CRS to a GeoDataFrame without one:
>>> gdf.crs is None True
>>> gdf = gdf.set_crs('epsg:3857') >>> gdf.crs <Projected CRS: EPSG:3857> Name: WGS 84 / Pseudo-Mercator Axis Info [cartesian]: - X[east]: Easting (metre) - Y[north]: Northing (metre) Area of Use: - name: World - 85°S to 85°N - bounds: (-180.0, -85.06, 180.0, 85.06) Coordinate Operation: - name: Popular Visualisation Pseudo-Mercator - method: Popular Visualisation Pseudo Mercator Datum: World Geodetic System 1984 - Ellipsoid: WGS 84 - Prime Meridian: Greenwich
Overriding existing CRS:
>>> gdf = gdf.set_crs(4326, allow_override=True)
Without allow_override=True, set_crs raises an error if you try to override the CRS.
See Also¶
GeoDataFrame.to_crs : re-project to another CRS
- to_crs(crs: Any | None = ..., epsg: int | None = ..., inplace: Literal[False] = ...) GeoDataFrame¶
- to_crs(crs: Any | None = ..., epsg: int | None = ..., inplace: Literal[True] = ...) None
Transform geometries to a new coordinate reference system.
Transform all geometries in an active geometry column to a different coordinate reference system. The crs attribute on the current GeoSeries must be set. Either crs or epsg may be specified for output.
This method will transform all points in all objects. It has no notion of projecting entire geometries. All segments joining points are assumed to be lines in the current projection, not geodesics. Objects crossing the dateline (or other projection boundary) will have undesirable behavior.
Parameters¶
- crspyproj.CRS, optional if epsg is specified
The value can be anything accepted by
pyproj.CRS.from_user_input(), such as an authority string (e.g. “EPSG:4326”) or a WKT string.
- epsgint, optional if crs is specified
EPSG code specifying output projection.
- inplacebool, optional, default: False
Whether to return a new GeoDataFrame or do the transformation in place.
Returns¶
GeoDataFrame
Examples¶
>>> from shapely.geometry import Point >>> d = {'col1': ['name1', 'name2'], 'geometry': [Point(1, 2), Point(2, 1)]} >>> gdf = geopandas.GeoDataFrame(d, crs=4326) >>> gdf col1 geometry 0 name1 POINT (1 2) 1 name2 POINT (2 1) >>> gdf.crs <Geographic 2D CRS: EPSG:4326> Name: WGS 84 Axis Info [ellipsoidal]: - Lat[north]: Geodetic latitude (degree) - Lon[east]: Geodetic longitude (degree) Area of Use: - name: World - bounds: (-180.0, -90.0, 180.0, 90.0) Datum: World Geodetic System 1984 - Ellipsoid: WGS 84 - Prime Meridian: Greenwich
>>> gdf = gdf.to_crs(3857) >>> gdf col1 geometry 0 name1 POINT (111319.491 222684.209) 1 name2 POINT (222638.982 111325.143) >>> gdf.crs <Projected CRS: EPSG:3857> Name: WGS 84 / Pseudo-Mercator Axis Info [cartesian]: - X[east]: Easting (metre) - Y[north]: Northing (metre) Area of Use: - name: World - 85°S to 85°N - bounds: (-180.0, -85.06, 180.0, 85.06) Coordinate Operation: - name: Popular Visualisation Pseudo-Mercator - method: Popular Visualisation Pseudo Mercator Datum: World Geodetic System 1984 - Ellipsoid: WGS 84 - Prime Meridian: Greenwich
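The coordinates printed above can be sanity-checked by hand: EPSG:4326 to EPSG:3857 is the spherical Web Mercator projection, which has a closed-form forward formula. A minimal stdlib sketch, illustrative only; real reprojection should go through pyproj:

```python
import math

R = 6378137.0  # WGS 84 semi-major axis; Web Mercator treats the Earth as a sphere

def to_web_mercator(lon: float, lat: float) -> tuple[float, float]:
    """Forward spherical Web Mercator (EPSG:3857) for lon/lat in degrees."""
    x = R * math.radians(lon)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat) / 2))
    return x, y

# POINT (1 2) from the example above
x, y = to_web_mercator(1, 2)
print(round(x, 3), round(y, 3))  # x ≈ 111319.491, y ≈ 222684.209
```

This reproduces the coordinates shown for name1; the formula blows up near the poles, which is why EPSG:3857's area of use is bounded at roughly 85.06°S to 85.06°N.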
See Also¶
GeoDataFrame.set_crs : assign CRS without re-projection
- estimate_utm_crs(datum_name: str = 'WGS 84') pyproj.CRS¶
Return the estimated UTM CRS based on the bounds of the dataset.
Added in version 0.9.
Parameters¶
- datum_namestr, optional
The name of the datum to use in the query. Default is WGS 84.
Returns¶
pyproj.CRS
Examples¶
>>> import geodatasets >>> df = geopandas.read_file( ... geodatasets.get_path("geoda.chicago_health") ... ) >>> df.estimate_utm_crs() <Derived Projected CRS: EPSG:32616> Name: WGS 84 / UTM zone 16N Axis Info [cartesian]: - E[east]: Easting (metre) - N[north]: Northing (metre) Area of Use: - name: Between 90°W and 84°W, northern hemisphere between equator and 84°N... - bounds: (-90.0, 0.0, -84.0, 84.0) Coordinate Operation: - name: UTM zone 16N - method: Transverse Mercator Datum: World Geodetic System 1984 ensemble - Ellipsoid: WGS 84 - Prime Meridian: Greenwich
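The EPSG:32616 answer above follows from standard UTM zone arithmetic: zones are 6° of longitude wide, and the WGS 84 UTM codes are 32600 + zone in the northern hemisphere or 32700 + zone in the southern. A simplified sketch using a single representative point; it ignores the Norway/Svalbard zone exceptions and polar regions that a real implementation handles:

```python
def estimate_utm_epsg(lon: float, lat: float) -> int:
    """Rough WGS 84 UTM EPSG code for a representative lon/lat point."""
    zone = int((lon + 180) // 6) + 1     # 6-degree zones, numbered 1..60
    base = 32600 if lat >= 0 else 32700  # northern vs. southern series
    return base + zone

# Chicago sits near (-87.6, 41.8): zone 16, northern hemisphere
print(estimate_utm_epsg(-87.6, 41.8))  # 32616, matching the example above
```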
- copy(deep: bool = True) GeoDataFrame¶
Make a copy of this object’s indices and data.
When deep=True (default), a new object will be created with a copy of the calling object’s data and indices. Modifications to the data or indices of the copy will not be reflected in the original object (see notes below).
When deep=False, a new object will be created without copying the calling object’s data or index (only references to the data and index are copied). With Copy-on-Write, changes to the original will not be reflected in the shallow copy (and vice versa). The shallow copy uses a lazy (deferred) copy mechanism that copies the data only when any changes to the original or shallow copy are made, ensuring memory efficiency while maintaining data integrity.
Note
In pandas versions prior to 3.0, the default behavior without Copy-on-Write was different: changes to the original were reflected in the shallow copy (and vice versa). See the Copy-on-Write user guide for more information.
Parameters¶
- deepbool, default True
Make a deep copy, including a copy of the data and the indices. With deep=False, neither the indices nor the data are copied.
Returns¶
- Series or DataFrame
Object type matches caller.
See Also¶
copy.copy : Return a shallow copy of an object. copy.deepcopy : Return a deep copy of an object.
Notes¶
When deep=True, data is copied but actual Python objects will not be copied recursively, only the reference to the object. This is in contrast to copy.deepcopy in the Standard Library, which recursively copies object data (see examples below).
While Index objects are copied when deep=True, the underlying numpy array is not copied for performance reasons. Since Index is immutable, the underlying data can be safely shared and a copy is not needed.
Since pandas is not thread safe, see the gotchas when copying in a threading environment.
Copy-on-Write protects shallow copies against accidental modifications. This means that any changes to the copied data would make a new copy of the data upon write (and vice versa). Changes made to either the original or copied variable would not be reflected in the counterpart. See the Copy-on-Write user guide for more information.
Examples¶
>>> s = pd.Series([1, 2], index=["a", "b"]) >>> s a 1 b 2 dtype: int64
>>> s_copy = s.copy(deep=True) >>> s_copy a 1 b 2 dtype: int64
Due to Copy-on-Write, shallow copies still protect data modifications. Note shallow does not get modified below.
>>> s = pd.Series([1, 2], index=["a", "b"]) >>> shallow = s.copy(deep=False) >>> s.iloc[1] = 200 >>> shallow a 1 b 2 dtype: int64
When the data has object dtype, even a deep copy does not copy the underlying Python objects. Updating a nested data object will be reflected in the deep copy.
>>> s = pd.Series([[1, 2], [3, 4]]) >>> deep = s.copy() >>> s[0][0] = 10 >>> s 0 [10, 2] 1 [3, 4] dtype: object >>> deep 0 [10, 2] 1 [3, 4] dtype: object
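The object-dtype caveat above is the same reference-vs-recursive distinction the stdlib copy module makes (see the See Also entries); a minimal illustration:

```python
import copy

data = [[1, 2], [3, 4]]

shallow = copy.copy(data)   # new outer list, inner lists shared
deep = copy.deepcopy(data)  # inner lists recursively copied as well

data[0][0] = 10

print(shallow[0])  # [10, 2] -- shares the mutated inner list
print(deep[0])     # [1, 2]  -- fully independent
```

pandas' copy(deep=True) behaves like the shallow case for Python objects nested inside object-dtype columns: it copies the references, not the objects, which is why deep still shows [10, 2] in the Series example above.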
- apply(func, axis=0, raw: bool = False, result_type=None, args=(), **kwargs)¶
Apply a function along an axis of the DataFrame.
Objects passed to the function are Series objects whose index is either the DataFrame’s index (axis=0) or the DataFrame’s columns (axis=1). By default (result_type=None), the final return type is inferred from the return type of the applied function. Otherwise, it depends on the result_type argument. The return type of the applied function is inferred based on the first computed result obtained after applying the function to a Series object.
Parameters¶
- funcfunction
Function to apply to each column or row.
- axis{0 or ‘index’, 1 or ‘columns’}, default 0
Axis along which the function is applied:
0 or ‘index’: apply function to each column.
1 or ‘columns’: apply function to each row.
- rawbool, default False
Determines if row or column is passed as a Series or ndarray object:
False: passes each row or column as a Series to the function.
True: the passed function will receive ndarray objects instead. If you are just applying a NumPy reduction function this will achieve much better performance.
Note
When raw=True, the result dtype is inferred from the first returned value.
- result_type{‘expand’, ‘reduce’, ‘broadcast’, None}, default None
These only act when axis=1 (columns):
‘expand’ : list-like results will be turned into columns.
‘reduce’ : returns a Series if possible rather than expanding list-like results. This is the opposite of ‘expand’.
‘broadcast’ : results will be broadcast to the original shape of the DataFrame, the original index and columns will be retained.
The default behaviour (None) depends on the return value of the applied function: list-like results will be returned as a Series of those. However if the apply function returns a Series these are expanded to columns.
- argstuple
Positional arguments to pass to func in addition to the array/series.
- by_rowFalse or “compat”, default “compat”
Only has an effect when func is a list-like or dict-like of funcs and the func isn’t a string. If “compat”, will, if possible, first translate the func into pandas methods (e.g. Series().apply(np.sum) will be translated to Series().sum()). If that doesn’t work, will try to call apply again with by_row=True, and if that fails, will call apply again with by_row=False (backward compatible). If False, the funcs will be passed the whole Series at once.
Added in version 2.1.0.
- enginedecorator or {‘python’, ‘numba’}, optional
Choose the execution engine to use. If not provided the function will be executed by the regular Python interpreter.
Other options include JIT compilers such as Numba and Bodo, which in some cases can speed up the execution. To use an executor you can provide the decorators numba.jit, numba.njit or bodo.jit. You can also provide the decorator with parameters, like numba.jit(nogil=True).
Not all functions can be executed with all execution engines. In general, JIT compilers will require type stability in the function (no variable should change data type during the execution). And not all pandas and NumPy APIs are supported. Check the engine documentation [1] and [2] for limitations.
Warning
String parameters will stop being supported in a future pandas version.
Added in version 2.2.0.
- engine_kwargsdict
Pass keyword arguments to the engine. This is currently only used by the numba engine, see the documentation for the engine argument for more information.
- **kwargs
Additional keyword arguments to pass as keyword arguments to func.
Returns¶
- Series or DataFrame
Result of applying func along the given axis of the DataFrame.
See Also¶
DataFrame.map: For elementwise operations. DataFrame.aggregate: Only perform aggregating type operations. DataFrame.transform: Only perform transforming type operations.
Notes¶
Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See gotchas.udf-mutation for more details.
References¶
Examples¶
>>> df = pd.DataFrame([[4, 9]] * 3, columns=["A", "B"]) >>> df A B 0 4 9 1 4 9 2 4 9
Using a numpy universal function (in this case the same as np.sqrt(df)):
>>> df.apply(np.sqrt) A B 0 2.0 3.0 1 2.0 3.0 2 2.0 3.0
Using a reducing function on either axis
>>> df.apply(np.sum, axis=0) A 12 B 27 dtype: int64
>>> df.apply(np.sum, axis=1) 0 13 1 13 2 13 dtype: int64
Returning a list-like will result in a Series
>>> df.apply(lambda x: [1, 2], axis=1) 0 [1, 2] 1 [1, 2] 2 [1, 2] dtype: object
Passing result_type='expand' will expand list-like results to columns of a DataFrame:
>>> df.apply(lambda x: [1, 2], axis=1, result_type="expand") 0 1 0 1 2 1 1 2 2 1 2
Returning a Series inside the function is similar to passing result_type='expand'. The resulting column names will be the Series index.
>>> df.apply(lambda x: pd.Series([1, 2], index=["foo", "bar"]), axis=1) foo bar 0 1 2 1 1 2 2 1 2
Passing result_type='broadcast' will ensure the same shape result, whether list-like or scalar is returned by the function, and broadcast it along the axis. The resulting column names will be the originals.
>>> df.apply(lambda x: [1, 2], axis=1, result_type="broadcast") A B 0 1 2 1 1 2 2 1 2
Advanced users can speed up their code by using a Just-in-Time (JIT) compiler with apply. The main JIT compilers available for pandas are Numba and Bodo. In general, JIT compilation is only possible when the function passed to apply has type stability (variables in the function do not change their type during the execution).
>>> import bodo >>> df.apply(lambda x: x.A + x.B, axis=1, engine=bodo.jit)
Note that JIT compilation is only recommended for functions that take a significant amount of time to run. Fast functions are unlikely to run faster with JIT compilation.
- dissolve(by: str | None = None, aggfunc='first', as_index: bool = True, level=None, sort: bool = True, observed: bool = False, dropna: bool = True, method: Literal['unary', 'coverage', 'disjoint_subset'] = 'unary', grid_size: float | None = None, **kwargs) GeoDataFrame¶
Dissolve geometries within groupby into a single observation. This is accomplished by applying the union_all method to all geometries within a group.
Observations associated with each groupby group will be aggregated using the aggfunc.
Parameters¶
- bystr or list-like, default None
Column(s) whose values define the groups to be dissolved. If None, the entire GeoDataFrame is considered as a single group. If a list-like object is provided, the values in the list are treated as categorical labels, and polygons will be combined based on the equality of these categorical labels.
- aggfuncfunction or string, default “first”
Aggregation function for manipulation of data associated with each group. Passed to pandas groupby.agg method. Accepted combinations are:
function
string function name
list of functions and/or function names, e.g. [np.sum, ‘mean’]
dict of axis labels -> functions, function names or list of such.
- as_indexboolean, default True
If true, groupby columns become index of result.
- levelint or str or sequence of int or sequence of str, default None
If the axis is a MultiIndex (hierarchical), group by a particular level or levels.
- sortbool, default True
Sort group keys. Get better performance by turning this off. Note this does not influence the order of observations within each group. Groupby preserves the order of rows within each group.
- observedbool, default False
This only applies if any of the groupers are Categoricals. If True: only show observed values for categorical groupers. If False: show all values for categorical groupers.
- dropnabool, default True
If True, and if group keys contain NA values, NA values together with row/column will be dropped. If False, NA values will also be treated as the key in groups.
- methodstr, default "unary"
The method to use for the union. Options are:
"unary": use the unary union algorithm. This option is the most robust but can be slow for large numbers of geometries (default).
"coverage": use the coverage union algorithm. This option is optimized for non-overlapping polygons and can be significantly faster than the unary union algorithm. However, it can produce invalid geometries if the polygons overlap.
"disjoint_subset": use the disjoint subset union algorithm. This option is optimized for inputs that can be divided into subsets that do not intersect. If there is only one such subset, performance can be expected to be worse than "unary". Requires Shapely >= 2.1.
- grid_sizefloat, default None
When grid size is specified, a fixed-precision space is used to perform the union operations. This can be useful when unioning geometries that are not perfectly snapped or to avoid geometries not being unioned because of robustness issues. The inputs are first snapped to a grid of the given size. When a line segment of a geometry is within tolerance of a vertex of another geometry, this vertex will be inserted in the line segment. Finally, the result vertices are computed on the same grid. Only supported for method "unary". If None, the highest precision of the inputs will be used. Defaults to None.
Added in version 1.1.0.
- **kwargs :
Keyword arguments to be passed to the pandas DataFrameGroupby.agg method which is used by dissolve. In particular, numeric_only may be supplied, which will be required in pandas 2.0 for certain aggfuncs.
Added in version 0.13.0.
Returns¶
GeoDataFrame
Examples¶
>>> from shapely.geometry import Point >>> d = { ... "col1": ["name1", "name2", "name1"], ... "geometry": [Point(1, 2), Point(2, 1), Point(0, 1)], ... } >>> gdf = geopandas.GeoDataFrame(d, crs=4326) >>> gdf col1 geometry 0 name1 POINT (1 2) 1 name2 POINT (2 1) 2 name1 POINT (0 1)
>>> dissolved = gdf.dissolve('col1') >>> dissolved geometry col1 name1 MULTIPOINT ((0 1), (1 2)) name2 POINT (2 1)
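The group-then-union behaviour shown above can be sketched in plain Python, standing in for geometries with coordinate tuples. This is a conceptual sketch of the grouping step only, not the shapely-backed union:

```python
from collections import defaultdict

# Mirrors the example above: (col1, point) with points as (x, y) tuples
rows = [
    ("name1", (1, 2)),
    ("name2", (2, 1)),
    ("name1", (0, 1)),
]

# Step 1: group rows by the dissolve key (what groupby does)
groups = defaultdict(list)
for key, point in rows:
    groups[key].append(point)

# Step 2: "union" each group -- a lone point stays a point, several
# points become a multipoint (here: a sorted tuple of tuples)
dissolved = {
    key: pts[0] if len(pts) == 1 else tuple(sorted(pts))
    for key, pts in groups.items()
}
print(dissolved)  # {'name1': ((0, 1), (1, 2)), 'name2': (2, 1)}
```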
See Also¶
GeoDataFrame.explode : explode multi-part geometries into single geometries
- explode(column: str | None = None, ignore_index: bool = False, index_parts: bool = False, **kwargs) GeoDataFrame | pandas.DataFrame¶
Explode multi-part geometries into multiple single geometries.
Each row containing a multi-part geometry will be split into multiple rows with single geometries, thereby increasing the vertical size of the GeoDataFrame.
Parameters¶
- columnstring, default None
Column to explode. In the case of a geometry column, multi-part geometries are converted to single-part. If None, the active geometry column is used.
- ignore_indexbool, default False
If True, the resulting index will be labelled 0, 1, …, n - 1, ignoring index_parts.
- index_partsboolean, default False
If True, the resulting index will be a multi-index (original index with an additional level indicating the multiple geometries: a new zero-based index for each single part geometry per multi-part geometry).
Returns¶
- GeoDataFrame
Exploded geodataframe with each single geometry as a separate entry in the geodataframe.
Examples¶
>>> from shapely.geometry import MultiPoint >>> d = { ... "col1": ["name1", "name2"], ... "geometry": [ ... MultiPoint([(1, 2), (3, 4)]), ... MultiPoint([(2, 1), (0, 0)]), ... ], ... } >>> gdf = geopandas.GeoDataFrame(d, crs=4326) >>> gdf col1 geometry 0 name1 MULTIPOINT ((1 2), (3 4)) 1 name2 MULTIPOINT ((2 1), (0 0))
>>> exploded = gdf.explode(index_parts=True) >>> exploded col1 geometry 0 0 name1 POINT (1 2) 1 name1 POINT (3 4) 1 0 name2 POINT (2 1) 1 name2 POINT (0 0)
>>> exploded = gdf.explode(index_parts=False) >>> exploded col1 geometry 0 name1 POINT (1 2) 0 name1 POINT (3 4) 1 name2 POINT (2 1) 1 name2 POINT (0 0)
>>> exploded = gdf.explode(ignore_index=True) >>> exploded col1 geometry 0 name1 POINT (1 2) 1 name1 POINT (3 4) 2 name2 POINT (2 1) 3 name2 POINT (0 0)
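The index variants above differ only in how the flattened rows are labelled. A plain-Python sketch with multi-part geometries as lists of coordinate tuples, conceptual rather than the real implementation:

```python
rows = [
    ("name1", [(1, 2), (3, 4)]),  # two MULTIPOINT rows, parts as tuples
    ("name2", [(2, 1), (0, 0)]),
]

# index_parts=True: MultiIndex of (original row, part number)
with_parts = [
    ((i, j), name, part)
    for i, (name, parts) in enumerate(rows)
    for j, part in enumerate(parts)
]

# ignore_index=True: the same rows, relabelled 0 .. n-1
flat = [(name, part) for name, parts in rows for part in parts]

print(with_parts[0])  # ((0, 0), 'name1', (1, 2))
print(len(flat))      # 4
```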
See Also¶
GeoDataFrame.dissolve : dissolve geometries into a single observation.
- to_postgis(name: str, con, schema: str | None = None, if_exists: Literal['fail', 'replace', 'append'] = 'fail', index: bool = False, index_label: collections.abc.Iterable[str] | str | None = None, chunksize: int | None = None, dtype=None) None¶
Upload GeoDataFrame into PostGIS database.
This method requires SQLAlchemy and GeoAlchemy2, and a PostgreSQL Python driver (psycopg or psycopg2) to be installed.
It is also possible to use to_file() to write to a database. Especially for file geodatabases like GeoPackage or SpatiaLite this can be easier.
Parameters¶
- namestr
Name of the target table.
- consqlalchemy.engine.Connection or sqlalchemy.engine.Engine
Active connection to the PostGIS database.
- if_exists{‘fail’, ‘replace’, ‘append’}, default ‘fail’
How to behave if the table already exists:
fail: Raise a ValueError.
replace: Drop the table before inserting new values.
append: Insert new values to the existing table.
- schemastring, optional
Specify the schema. If None, use default schema: ‘public’.
- indexbool, default False
Write DataFrame index as a column. Uses index_label as the column name in the table.
- index_labelstring or sequence, default None
Column label for index column(s). If None is given (default) and index is True, then the index names are used.
- chunksizeint, optional
Rows will be written in batches of this size at a time. By default, all rows will be written at once.
- dtypedict of column name to SQL type, default None
Specifying the datatype for columns. The keys should be the column names and the values should be the SQLAlchemy types.
Examples¶
>>> from sqlalchemy import create_engine >>> engine = create_engine("postgresql://myusername:mypassword@myhost:5432/mydatabase") >>> gdf.to_postgis("my_table", engine)
See Also¶
GeoDataFrame.to_file : write GeoDataFrame to file read_postgis : read PostGIS database to GeoDataFrame
- plot¶
- explore(*args, **kwargs) folium.Map¶
Generate an interactive leaflet map of the GeoDataFrame based on folium.
- sjoin(df: GeoDataFrame, how: Literal['left', 'right', 'inner', 'outer'] = 'inner', predicate: str = 'intersects', lsuffix: str = 'left', rsuffix: str = 'right', **kwargs) GeoDataFrame¶
Spatial join of two GeoDataFrames.
See the User Guide page on merging data for details.
Parameters¶
df : GeoDataFrame
how : string, default ‘inner’
The type of join:
‘left’: use keys from left_df; retain only left_df geometry column
‘right’: use keys from right_df; retain only right_df geometry column
‘inner’: use intersection of keys from both dfs; retain only left_df geometry column
‘outer’: use union of keys from both dfs; retain a single active geometry column by preferring left geometries and filling unmatched right-only rows from the right geometry column
- predicatestring, default ‘intersects’
Binary predicate. Valid values are determined by the spatial index used. You can check the valid values in left_df or right_df as
left_df.sindex.valid_query_predicates or right_df.sindex.valid_query_predicates.
Available predicates include:
'intersects': True if geometries intersect (boundaries and interiors)
'within': True if left geometry is completely within right geometry
'contains': True if left geometry completely contains right geometry
'contains_properly': True if left geometry contains right geometry and their boundaries do not touch
'overlaps': True if geometries overlap but neither contains the other
'crosses': True if geometries cross (interiors intersect but neither contains the other, with intersection dimension less than max dimension)
'touches': True if geometries touch at boundaries but interiors don’t
'covers': True if left geometry covers right geometry (every point of right is a point of left)
'covered_by': True if left geometry is covered by right geometry
'dwithin': True if geometries are within specified distance (requires distance parameter)
- lsuffixstring, default ‘left’
Suffix to apply to overlapping column names (left GeoDataFrame).
- rsuffixstring, default ‘right’
Suffix to apply to overlapping column names (right GeoDataFrame).
- distancenumber or array_like, optional
Distance(s) around each input geometry within which to query the tree for the ‘dwithin’ predicate. If array_like, must be one-dimensional with length equal to the length of the left GeoDataFrame. Required if predicate='dwithin'.
- on_attributestring, list or tuple
Column name(s) to join on as an additional join restriction on top of the spatial predicate. These must be found in both DataFrames. If set, observations are joined only if the predicate applies and values in specified columns match.
Examples¶
>>> import geodatasets >>> chicago = geopandas.read_file( ... geodatasets.get_path("geoda.chicago_commpop") ... ) >>> groceries = geopandas.read_file( ... geodatasets.get_path("geoda.groceries") ... ).to_crs(chicago.crs)
>>> chicago.head() community ... geometry 0 DOUGLAS ... MULTIPOLYGON (((-87.60914 41.84469, -87.60915 ... 1 OAKLAND ... MULTIPOLYGON (((-87.59215 41.81693, -87.59231 ... 2 FULLER PARK ... MULTIPOLYGON (((-87.62880 41.80189, -87.62879 ... 3 GRAND BOULEVARD ... MULTIPOLYGON (((-87.60671 41.81681, -87.60670 ... 4 KENWOOD ... MULTIPOLYGON (((-87.59215 41.81693, -87.59215 ...
[5 rows x 9 columns]
>>> groceries.head() OBJECTID Ycoord ... Category geometry 0 16 41.973266 ... NaN MULTIPOINT ((-87.65661 41.97321)) 1 18 41.696367 ... NaN MULTIPOINT ((-87.68136 41.69713)) 2 22 41.868634 ... NaN MULTIPOINT ((-87.63918 41.86847)) 3 23 41.877590 ... new MULTIPOINT ((-87.65495 41.87783)) 4 27 41.737696 ... NaN MULTIPOINT ((-87.62715 41.73623)) [5 rows x 8 columns]
>>> groceries_w_communities = groceries.sjoin(chicago) >>> groceries_w_communities[["OBJECTID", "community", "geometry"]].head() OBJECTID community geometry 0 16 UPTOWN MULTIPOINT ((-87.65661 41.97321)) 1 18 MORGAN PARK MULTIPOINT ((-87.68136 41.69713)) 2 22 NEAR WEST SIDE MULTIPOINT ((-87.63918 41.86847)) 3 23 NEAR WEST SIDE MULTIPOINT ((-87.65495 41.87783)) 4 27 CHATHAM MULTIPOINT ((-87.62715 41.73623))
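Conceptually, a spatial join first narrows candidate pairs with a cheap bounding-box query on the spatial index (sindex) and only then evaluates the exact predicate on those survivors. A minimal sketch of the prefilter step, with hypothetical features represented by (minx, miny, maxx, maxy) boxes:

```python
def bbox_intersects(a, b):
    """Do two axis-aligned (minx, miny, maxx, maxy) boxes overlap?"""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

# Hypothetical layers: point features (degenerate boxes) vs. polygon boxes
left = {"pt1": (0, 0, 0, 0), "pt2": (5, 5, 5, 5)}
right = {"polyA": (-1, -1, 1, 1), "polyB": (4, 4, 6, 6)}

candidates = [
    (lname, rname)
    for lname, lbox in left.items()
    for rname, rbox in right.items()
    if bbox_intersects(lbox, rbox)
]
print(candidates)  # [('pt1', 'polyA'), ('pt2', 'polyB')]
```

Only the surviving pairs are handed to the exact geometry predicate ('intersects', 'within', ...), which is what keeps the join fast on large layers.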
Notes¶
Every operation in GeoPandas is planar, i.e. the potential third dimension is not taken into account.
See Also¶
GeoDataFrame.sjoin_nearest : nearest neighbor join sjoin : equivalent top-level function
- sjoin_nearest(right: GeoDataFrame, how: Literal['left', 'right', 'inner'] = 'inner', max_distance: float | None = None, lsuffix: str = 'left', rsuffix: str = 'right', distance_col: str | None = None, exclusive: bool = False) GeoDataFrame¶
Spatial join of two GeoDataFrames based on the distance between their geometries.
Results will include multiple output records for a single input record where there are multiple equidistant nearest or intersected neighbors.
See the User Guide page https://geopandas.readthedocs.io/en/latest/docs/user_guide/mergingdata.html for more details.
Parameters¶
right : GeoDataFrame
how : string, default ‘inner’
The type of join:
‘left’: use keys from left_df; retain only left_df geometry column
‘right’: use keys from right_df; retain only right_df geometry column
‘inner’: use intersection of keys from both dfs; retain only left_df geometry column
- max_distancefloat, default None
Maximum distance within which to query for nearest geometry. Must be greater than 0. The max_distance used to search for nearest items in the tree may have a significant impact on performance by reducing the number of input geometries that are evaluated for nearest items in the tree.
- lsuffixstring, default ‘left’
Suffix to apply to overlapping column names (left GeoDataFrame).
- rsuffixstring, default ‘right’
Suffix to apply to overlapping column names (right GeoDataFrame).
- distance_colstring, default None
If set, save the distances computed between matching geometries under a column of this name in the joined GeoDataFrame.
- exclusivebool, optional, default False
If True, the nearest geometries that are equal to the input geometry will not be returned.
Examples¶
>>> import geodatasets >>> groceries = geopandas.read_file( ... geodatasets.get_path("geoda.groceries") ... ) >>> chicago = geopandas.read_file( ... geodatasets.get_path("geoda.chicago_health") ... ).to_crs(groceries.crs)
>>> chicago.head() ComAreaID ... geometry 0 35 ... POLYGON ((-87.60914 41.84469, -87.60915 41.844... 1 36 ... POLYGON ((-87.59215 41.81693, -87.59231 41.816... 2 37 ... POLYGON ((-87.62880 41.80189, -87.62879 41.801... 3 38 ... POLYGON ((-87.60671 41.81681, -87.60670 41.816... 4 39 ... POLYGON ((-87.59215 41.81693, -87.59215 41.816... [5 rows x 87 columns]
>>> groceries.head() OBJECTID Ycoord ... Category geometry 0 16 41.973266 ... NaN MULTIPOINT ((-87.65661 41.97321)) 1 18 41.696367 ... NaN MULTIPOINT ((-87.68136 41.69713)) 2 22 41.868634 ... NaN MULTIPOINT ((-87.63918 41.86847)) 3 23 41.877590 ... new MULTIPOINT ((-87.65495 41.87783)) 4 27 41.737696 ... NaN MULTIPOINT ((-87.62715 41.73623)) [5 rows x 8 columns]
>>> groceries_w_communities = groceries.sjoin_nearest(chicago) >>> groceries_w_communities[["Chain", "community", "geometry"]].head(2) Chain community geometry 0 VIET HOA PLAZA UPTOWN MULTIPOINT ((1168268.672 1933554.35)) 1 COUNTY FAIR FOODS MORGAN PARK MULTIPOINT ((1162302.618 1832900.224))
To include the distances:
>>> groceries_w_communities = groceries.sjoin_nearest(chicago, distance_col="distances") >>> groceries_w_communities[["Chain", "community", "distances"]].head(2) Chain community distances 0 VIET HOA PLAZA UPTOWN 0.0 1 COUNTY FAIR FOODS MORGAN PARK 0.0
In the following example, we get multiple groceries for Uptown because all results are equidistant (in this case zero because they intersect). In fact, we get 4 results in total:
>>> chicago_w_groceries = groceries.sjoin_nearest(chicago, distance_col="distances", how="right") >>> uptown_results = chicago_w_groceries[chicago_w_groceries["community"] == "UPTOWN"] >>> uptown_results[["Chain", "community"]] Chain community 30 VIET HOA PLAZA UPTOWN 30 JEWEL OSCO UPTOWN 30 TARGET UPTOWN 30 Mariano's UPTOWN
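The equidistant-tie behaviour illustrated above (several output records for one input record) can be sketched with a brute-force nearest search. This is conceptual only; the real implementation queries the spatial index, and the layer names here are hypothetical:

```python
import math

# Hypothetical point layers
left = {"shopA": (0.0, 0.0), "shopB": (10.0, 0.0)}
right = {"east": (4.0, 0.0), "west": (-4.0, 0.0), "far": (10.0, 3.0)}

matches = []
for lname, (x, y) in left.items():
    dists = {r: math.hypot(x - rx, y - ry) for r, (rx, ry) in right.items()}
    best = min(dists.values())
    # keep EVERY equidistant nearest neighbour -> multiple output rows
    matches += [(lname, r, d) for r, d in dists.items() if d == best]

print(matches)
# shopA ties with 'east' and 'west' (both at distance 4.0); shopB -> 'far'
```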
See Also¶
GeoDataFrame.sjoin : binary predicate joins sjoin_nearest : equivalent top-level function
Notes¶
Since this join relies on distances, results will be inaccurate if your geometries are in a geographic CRS.
Every operation in GeoPandas is planar, i.e. the potential third dimension is not taken into account.
- clip(mask, keep_geom_type: bool = False, sort: bool = False) GeoDataFrame¶
Clip points, lines, or polygon geometries to the mask extent.
Both layers must be in the same Coordinate Reference System (CRS). The GeoDataFrame will be clipped to the full extent of the mask object.
If there are multiple polygons in mask, data from the GeoDataFrame will be clipped to the total boundary of all polygons in mask.
Parameters¶
- maskGeoDataFrame, GeoSeries, (Multi)Polygon, list-like
Polygon vector layer used to clip the GeoDataFrame. The mask’s geometry is dissolved into one geometric feature and intersected with the GeoDataFrame. If the mask is list-like with four elements (minx, miny, maxx, maxy), clip will use faster rectangle clipping (clip_by_rect()), possibly leading to slightly different results.
- keep_geom_typeboolean, default False
If True, return only geometries of original type in case of intersection resulting in multiple geometry types or GeometryCollections. If False, return all resulting geometries (potentially mixed types).
- sortboolean, default False
If True, the order of rows in the clipped GeoDataFrame will be preserved at a small performance cost. If False, the order of rows in the clipped GeoDataFrame will be random.
Returns¶
- GeoDataFrame
Vector data (points, lines, polygons) from the GeoDataFrame clipped to polygon boundary from mask.
See Also¶
clip : equivalent top-level function
Examples¶
Clip points (grocery stores) with polygons (the Near West Side community):
>>> import geodatasets
>>> chicago = geopandas.read_file(
...     geodatasets.get_path("geoda.chicago_health")
... )
>>> near_west_side = chicago[chicago["community"] == "NEAR WEST SIDE"]
>>> groceries = geopandas.read_file(
...     geodatasets.get_path("geoda.groceries")
... ).to_crs(chicago.crs)
>>> groceries.shape
(148, 8)
>>> nws_groceries = groceries.clip(near_west_side)
>>> nws_groceries.shape
(7, 8)
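The faster rectangle path that a list-like (minx, miny, maxx, maxy) mask enables can be sketched for the simplest case, points, in pure Python. This is illustration only; the real clip_by_rect() is implemented in GEOS and handles all geometry types. The `clip_points_by_rect` helper is a hypothetical name:

```python
# Keep only the points that fall inside the bounding box
# (minx, miny, maxx, maxy) -- the essence of rectangle clipping.
def clip_points_by_rect(points, bounds):
    """Keep (x, y) tuples that lie within the bounding box."""
    minx, miny, maxx, maxy = bounds
    return [
        (x, y)
        for x, y in points
        if minx <= x <= maxx and miny <= y <= maxy
    ]

stores = [(0.5, 0.5), (2.0, 2.0), (-1.0, 0.0)]
print(clip_points_by_rect(stores, (0, 0, 1, 1)))  # [(0.5, 0.5)]
```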
- overlay(right: GeoDataFrame, how: Literal['intersection', 'union', 'identity', 'symmetric_difference', 'difference'] = 'intersection', keep_geom_type: bool | None = None, make_valid: bool = True)¶
Perform spatial overlay between GeoDataFrames.
Currently only supports data GeoDataFrames with uniform geometry types, i.e. containing only (Multi)Polygons, or only (Multi)Points, or a combination of (Multi)LineString and LinearRing shapes. Implements several methods that are all effectively subsets of the union.
See the User Guide page ../../user_guide/set_operations for details.
Parameters¶
right : GeoDataFrame
how : string
Method of spatial overlay: ‘intersection’, ‘union’, ‘identity’, ‘symmetric_difference’ or ‘difference’.
- keep_geom_typebool
If True, return only geometries of the same geometry type the GeoDataFrame has, if False, return all resulting geometries. Default is None, which will set keep_geom_type to True but warn upon dropping geometries.
- make_validbool, default True
If True, any invalid input geometries are corrected with a call to make_valid(), if False, a ValueError is raised if any input geometries are invalid.
Returns¶
- dfGeoDataFrame
GeoDataFrame with new set of polygons and attributes resulting from the overlay
Examples¶
>>> from shapely.geometry import Polygon
>>> polys1 = geopandas.GeoSeries([Polygon([(0,0), (2,0), (2,2), (0,2)]),
...                               Polygon([(2,2), (4,2), (4,4), (2,4)])])
>>> polys2 = geopandas.GeoSeries([Polygon([(1,1), (3,1), (3,3), (1,3)]),
...                               Polygon([(3,3), (5,3), (5,5), (3,5)])])
>>> df1 = geopandas.GeoDataFrame({'geometry': polys1, 'df1_data':[1,2]})
>>> df2 = geopandas.GeoDataFrame({'geometry': polys2, 'df2_data':[1,2]})

>>> df1.overlay(df2, how='union')
   df1_data  df2_data                                           geometry
0       1.0       1.0                POLYGON ((2 2, 2 1, 1 1, 1 2, 2 2))
1       2.0       1.0                POLYGON ((2 2, 2 3, 3 3, 3 2, 2 2))
2       2.0       2.0                POLYGON ((4 4, 4 3, 3 3, 3 4, 4 4))
3       1.0       NaN      POLYGON ((2 0, 0 0, 0 2, 1 2, 1 1, 2 1, 2 0))
4       2.0       NaN  MULTIPOLYGON (((3 4, 3 3, 2 3, 2 4, 3 4)), ((4...
5       NaN       1.0  MULTIPOLYGON (((2 3, 2 2, 1 2, 1 3, 2 3)), ((3...
6       NaN       2.0      POLYGON ((3 5, 5 5, 5 3, 4 3, 4 4, 3 4, 3 5))

>>> df1.overlay(df2, how='intersection')
   df1_data  df2_data                             geometry
0         1         1  POLYGON ((2 2, 2 1, 1 1, 1 2, 2 2))
1         2         1  POLYGON ((2 2, 2 3, 3 3, 3 2, 2 2))
2         2         2  POLYGON ((4 4, 4 3, 3 3, 3 4, 4 4))

>>> df1.overlay(df2, how='symmetric_difference')
   df1_data  df2_data                                           geometry
0       1.0       NaN      POLYGON ((2 0, 0 0, 0 2, 1 2, 1 1, 2 1, 2 0))
1       2.0       NaN  MULTIPOLYGON (((3 4, 3 3, 2 3, 2 4, 3 4)), ((4...
2       NaN       1.0  MULTIPOLYGON (((2 3, 2 2, 1 2, 1 3, 2 3)), ((3...
3       NaN       2.0      POLYGON ((3 5, 5 5, 5 3, 4 3, 4 4, 3 4, 3 5))

>>> df1.overlay(df2, how='difference')
                                            geometry  df1_data
0      POLYGON ((2 0, 0 0, 0 2, 1 2, 1 1, 2 1, 2 0))         1
1  MULTIPOLYGON (((3 4, 3 3, 2 3, 2 4, 3 4)), ((4...         2

>>> df1.overlay(df2, how='identity')
   df1_data  df2_data                                           geometry
0         1       1.0                POLYGON ((2 2, 2 1, 1 1, 1 2, 2 2))
1         2       1.0                POLYGON ((2 2, 2 3, 3 3, 3 2, 2 2))
2         2       2.0                POLYGON ((4 4, 4 3, 3 3, 3 4, 4 4))
3         1       NaN      POLYGON ((2 0, 0 0, 0 2, 1 2, 1 1, 2 1, 2 0))
4         2       NaN  MULTIPOLYGON (((3 4, 3 3, 2 3, 2 4, 3 4)), ((4...
See Also¶
GeoDataFrame.sjoin : spatial join
overlay : equivalent top-level function
Notes¶
Every operation in GeoPandas is planar, i.e. the potential third dimension is not taken into account.
- class vibespatial.GeoSeries(data=None, index=None, crs: Any | None = None, **kwargs)¶
A Series object designed to store shapely geometry objects.
Parameters¶
- dataarray-like, dict, scalar value
The geometries to store in the GeoSeries.
- indexarray-like or Index
The index for the GeoSeries.
- crsvalue (optional)
Coordinate Reference System of the geometry objects. Can be anything accepted by
pyproj.CRS.from_user_input(), such as an authority string (eg “EPSG:4326”) or a WKT string.
- kwargs
Additional arguments passed to the Series constructor, e.g. name.
Examples¶
>>> from shapely.geometry import Point
>>> s = geopandas.GeoSeries([Point(1, 1), Point(2, 2), Point(3, 3)])
>>> s
0    POINT (1 1)
1    POINT (2 2)
2    POINT (3 3)
dtype: geometry

>>> s = geopandas.GeoSeries(
...     [Point(1, 1), Point(2, 2), Point(3, 3)], crs="EPSG:3857"
... )
>>> s.crs
<Projected CRS: EPSG:3857>
Name: WGS 84 / Pseudo-Mercator
Axis Info [cartesian]:
- X[east]: Easting (metre)
- Y[north]: Northing (metre)
Area of Use:
- name: World - 85°S to 85°N
- bounds: (-180.0, -85.06, 180.0, 85.06)
Coordinate Operation:
- name: Popular Visualisation Pseudo-Mercator
- method: Popular Visualisation Pseudo Mercator
Datum: World Geodetic System 1984
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich

>>> s = geopandas.GeoSeries(
...     [Point(1, 1), Point(2, 2), Point(3, 3)], index=["a", "b", "c"], crs=4326
... )
>>> s
a    POINT (1 1)
b    POINT (2 2)
c    POINT (3 3)
dtype: geometry

>>> s.crs
<Geographic 2D CRS: EPSG:4326>
Name: WGS 84
Axis Info [ellipsoidal]:
- Lat[north]: Geodetic latitude (degree)
- Lon[east]: Geodetic longitude (degree)
Area of Use:
- name: World.
- bounds: (-180.0, -90.0, 180.0, 90.0)
Datum: World Geodetic System 1984 ensemble
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich
See Also¶
GeoDataFrame pandas.Series
- property x: pandas.Series¶
Return the x location of point geometries in a GeoSeries.
Returns¶
pandas.Series
Examples¶
>>> from shapely.geometry import Point
>>> s = geopandas.GeoSeries([Point(1, 1), Point(2, 2), Point(3, 3)])
>>> s.x
0    1.0
1    2.0
2    3.0
dtype: float64
See Also¶
GeoSeries.y GeoSeries.z
- property y: pandas.Series¶
Return the y location of point geometries in a GeoSeries.
Returns¶
pandas.Series
Examples¶
>>> from shapely.geometry import Point
>>> s = geopandas.GeoSeries([Point(1, 1), Point(2, 2), Point(3, 3)])
>>> s.y
0    1.0
1    2.0
2    3.0
dtype: float64
See Also¶
GeoSeries.x GeoSeries.z GeoSeries.m
- property z: pandas.Series¶
Return the z location of point geometries in a GeoSeries.
Returns¶
pandas.Series
Examples¶
>>> from shapely.geometry import Point
>>> s = geopandas.GeoSeries([Point(1, 1, 1), Point(2, 2, 2), Point(3, 3, 3)])
>>> s.z
0    1.0
1    2.0
2    3.0
dtype: float64
See Also¶
GeoSeries.x GeoSeries.y GeoSeries.m
- property m: pandas.Series¶
Return the m coordinate of point geometries in a GeoSeries.
Requires Shapely >= 2.1.
Added in version 1.1.0.
Returns¶
pandas.Series
Examples¶
>>> from shapely.geometry import Point
>>> s = geopandas.GeoSeries.from_wkt(
...     [
...         "POINT M (2 3 5)",
...         "POINT M (1 2 3)",
...     ]
... )
>>> s
0    POINT M (2 3 5)
1    POINT M (1 2 3)
dtype: geometry

>>> s.m
0    5.0
1    3.0
dtype: float64
See Also¶
GeoSeries.x GeoSeries.y GeoSeries.z
- classmethod from_file(filename: os.PathLike | IO, **kwargs) GeoSeries¶
Alternate constructor to create a GeoSeries from a file.
Can load a GeoSeries from a file in any format recognized by pyogrio. See http://pyogrio.readthedocs.io/ for details. From a file with attributes, only the geometry column is loaded. Note that to do that, GeoPandas first loads the whole GeoDataFrame.
Parameters¶
- filenamestr
File path or file handle to read from. Depending on which kwargs are included, the content of filename may vary. See pyogrio.read_dataframe() for usage details.
- kwargskey-word arguments
These arguments are passed to pyogrio.read_dataframe(), and can be used to access multi-layer data, data stored within archives (zip files), etc.
Examples¶
>>> import geodatasets
>>> path = geodatasets.get_path('nybb')
>>> s = geopandas.GeoSeries.from_file(path)
>>> s
0    MULTIPOLYGON (((970217.022 145643.332, 970227....
1    MULTIPOLYGON (((1029606.077 156073.814, 102957...
2    MULTIPOLYGON (((1021176.479 151374.797, 102100...
3    MULTIPOLYGON (((981219.056 188655.316, 980940....
4    MULTIPOLYGON (((1012821.806 229228.265, 101278...
Name: geometry, dtype: geometry
See Also¶
read_file : read file to GeoDataFrame
- classmethod from_wkb(data, index=None, crs: Any | None = None, on_invalid='raise', **kwargs) GeoSeries¶
Alternate constructor to create a GeoSeries from a list or array of WKB objects.
Parameters¶
- dataarray-like or Series
Series, list or array of WKB objects
- indexarray-like or Index
The index for the GeoSeries.
- crsvalue, optional
Coordinate Reference System of the geometry objects. Can be anything accepted by
pyproj.CRS.from_user_input(), such as an authority string (eg “EPSG:4326”) or a WKT string.
- on_invalid: {“raise”, “warn”, “ignore”, “fix”}, default “raise”
raise: an exception will be raised if a WKB input geometry is invalid.
warn: a warning will be raised and invalid WKB geometries will be returned as None.
ignore: invalid WKB geometries will be returned as None without a warning.
fix: an effort is made to fix invalid input geometries (e.g. close unclosed rings). If this is not possible, they are returned as None without a warning. Requires GEOS >= 3.11 and shapely >= 2.1.
- kwargs
Additional arguments passed to the Series constructor, e.g. name.
Returns¶
GeoSeries
See Also¶
GeoSeries.from_wkt
Examples¶
>>> wkbs = [
...     (
...         b"\x01\x01\x00\x00\x00\x00\x00\x00\x00"
...         b"\x00\x00\xf0?\x00\x00\x00\x00\x00\x00\xf0?"
...     ),
...     (
...         b"\x01\x01\x00\x00\x00\x00\x00\x00\x00"
...         b"\x00\x00\x00@\x00\x00\x00\x00\x00\x00\x00@"
...     ),
...     (
...         b"\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00"
...         b"\x00\x08@\x00\x00\x00\x00\x00\x00\x08@"
...     ),
... ]
>>> s = geopandas.GeoSeries.from_wkb(wkbs)
>>> s
0    POINT (1 1)
1    POINT (2 2)
2    POINT (3 3)
dtype: geometry
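The layout of those WKB byte strings can be unpacked by hand with the standard library, which makes the encoding concrete: one byte-order flag, a 4-byte geometry type, then the coordinates as float64. This is a decoding sketch for the simple point case only:

```python
import struct

# The first WKB point from the example above: byte-order flag (0x01 =
# little-endian), uint32 geometry type (1 = Point), two float64 coords.
wkb = (
    b"\x01\x01\x00\x00\x00\x00\x00\x00\x00"
    b"\x00\x00\xf0?\x00\x00\x00\x00\x00\x00\xf0?"
)

byte_order = wkb[0]                            # 1 -> little-endian
geom_type, = struct.unpack_from("<I", wkb, 1)  # 1 -> Point
x, y = struct.unpack_from("<dd", wkb, 5)       # the coordinates

print(byte_order, geom_type, x, y)  # 1 1 1.0 1.0
```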
- classmethod from_wkt(data, index=None, crs: Any | None = None, on_invalid='raise', **kwargs) GeoSeries¶
Alternate constructor to create a GeoSeries from a list or array of WKT objects.
Parameters¶
- dataarray-like, Series
Series, list, or array of WKT objects
- indexarray-like or Index
The index for the GeoSeries.
- crsvalue, optional
Coordinate Reference System of the geometry objects. Can be anything accepted by
pyproj.CRS.from_user_input(), such as an authority string (eg “EPSG:4326”) or a WKT string.
- on_invalid{“raise”, “warn”, “ignore”, “fix”}, default “raise”
raise: an exception will be raised if a WKT input geometry is invalid.
warn: a warning will be raised and invalid WKT geometries will be returned as None.
ignore: invalid WKT geometries will be returned as None without a warning.
fix: an effort is made to fix invalid input geometries (e.g. close unclosed rings). If this is not possible, they are returned as None without a warning. Requires GEOS >= 3.11 and shapely >= 2.1.
- kwargs
Additional arguments passed to the Series constructor, e.g. name.
Returns¶
GeoSeries
See Also¶
GeoSeries.from_wkb
Examples¶
>>> wkts = [
...     'POINT (1 1)',
...     'POINT (2 2)',
...     'POINT (3 3)',
... ]
>>> s = geopandas.GeoSeries.from_wkt(wkts)
>>> s
0    POINT (1 1)
1    POINT (2 2)
2    POINT (3 3)
dtype: geometry
- classmethod from_xy(x, y, z=None, index=None, crs=None, **kwargs) GeoSeries¶
Alternate constructor to create a GeoSeries of Point geometries from lists or arrays of x, y(, z) coordinates.
In case of geographic coordinates, it is assumed that longitude is captured by x coordinates and latitude by y.
Parameters¶
x, y, z : iterable
index : array-like or Index, optional
The index for the GeoSeries. If not given and all coordinate inputs are Series with an equal index, that index is used.
- crsvalue, optional
Coordinate Reference System of the geometry objects. Can be anything accepted by
pyproj.CRS.from_user_input(), such as an authority string (eg “EPSG:4326”) or a WKT string.
- **kwargs
Additional arguments passed to the Series constructor, e.g. name.
Returns¶
GeoSeries
See Also¶
GeoSeries.from_wkt points_from_xy
Examples¶
>>> x = [2.5, 5, -3.0]
>>> y = [0.5, 1, 1.5]
>>> s = geopandas.GeoSeries.from_xy(x, y, crs="EPSG:4326")
>>> s
0    POINT (2.5 0.5)
1        POINT (5 1)
2     POINT (-3 1.5)
dtype: geometry
- classmethod from_arrow(arr, **kwargs) GeoSeries¶
Construct a GeoSeries from an Arrow array object with a GeoArrow extension type.
See https://geoarrow.org/ for details on the GeoArrow specification.
This function accepts any Arrow array object implementing the Arrow PyCapsule Protocol (i.e. having an __arrow_c_array__ method).
Added in version 1.0.
Parameters¶
- arrpyarrow.Array, Arrow array
Any array object implementing the Arrow PyCapsule Protocol (i.e. has an __arrow_c_array__ or __arrow_c_stream__ method). The type of the array should be one of the geoarrow geometry types.
- **kwargs
Other parameters passed to the GeoSeries constructor.
Returns¶
GeoSeries
See Also¶
GeoSeries.to_arrow
Examples¶
>>> import geoarrow.pyarrow as ga
>>> array = ga.as_geoarrow(
...     [None, "POLYGON ((0 0, 1 1, 0 1, 0 0))", "LINESTRING (0 0, -1 1, 0 -1)"])
>>> geoseries = geopandas.GeoSeries.from_arrow(array)
>>> geoseries
0                              None
1    POLYGON ((0 0, 1 1, 0 1, 0 0))
2      LINESTRING (0 0, -1 1, 0 -1)
dtype: geometry
- to_file(filename: os.PathLike | IO, driver: str | None = None, index: bool | None = None, **kwargs)¶
Write the GeoSeries to a file.
By default, an ESRI shapefile is written, but any OGR data source supported by Pyogrio or Fiona can be written.
Parameters¶
- filenamestring
File path or file handle to write to. The path may specify a GDAL VSI scheme.
- driverstring, default None
The OGR format driver used to write the vector file. If not specified, it attempts to infer it from the file extension. If no extension is specified, it saves ESRI Shapefile to a folder.
- indexbool, default None
If True, write index into one or more columns (for MultiIndex). Default None writes the index into one or more columns only if the index is named, is a MultiIndex, or has a non-integer data type. If False, no index is written.
Added in version 0.7: Previously the index was not written.
- modestring, default ‘w’
The write mode, ‘w’ to overwrite the existing file and ‘a’ to append. Not all drivers support appending. The drivers that support appending are listed in fiona.supported_drivers or https://github.com/Toblerity/Fiona/blob/master/fiona/drvsupport.py
- crspyproj.CRS, default None
If specified, the CRS is passed to Fiona to better control how the file is written. If None, GeoPandas will determine the crs based on crs df attribute. The value can be anything accepted by
pyproj.CRS.from_user_input(), such as an authority string (eg “EPSG:4326”) or a WKT string. The keyword is not supported for the “pyogrio” engine.
- enginestr, “pyogrio” or “fiona”
The underlying library that is used to write the file. Currently, the supported options are “pyogrio” and “fiona”. Defaults to “pyogrio” if installed, otherwise tries “fiona”.
- **kwargs :
Keyword args to be passed to the engine, and can be used to write to multi-layer data, store data within archives (zip files), etc. In case of the “pyogrio” engine, the keyword arguments are passed to pyogrio.write_dataframe. In case of the “fiona” engine, the keyword arguments are passed to fiona.open. For more information on possible keywords, type: import pyogrio; help(pyogrio.write_dataframe).
See Also¶
GeoDataFrame.to_file : write GeoDataFrame to file
read_file : read file to GeoDataFrame
Examples¶
>>> s.to_file('series.shp')
>>> s.to_file('series.gpkg', driver='GPKG', layer='name1')
>>> s.to_file('series.geojson', driver='GeoJSON')
- sort_index(*args, **kwargs)¶
Sort Series by index labels.
Returns a new Series sorted by label if inplace argument is False, otherwise updates the original series and returns None.
Parameters¶
- axis{0 or ‘index’}
Unused. Parameter needed for compatibility with DataFrame.
- levelint, optional
If not None, sort on values in specified index level(s).
- ascendingbool or list-like of bools, default True
Sort ascending vs. descending. When the index is a MultiIndex the sort direction can be controlled for each level individually.
- inplacebool, default False
If True, perform operation in-place.
- kind{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, default ‘quicksort’
Choice of sorting algorithm. See also numpy.sort() for more information. ‘mergesort’ and ‘stable’ are the only stable algorithms. For DataFrames, this option is only applied when sorting on a single column or label.
- na_position{‘first’, ‘last’}, default ‘last’
If ‘first’ puts NaNs at the beginning, ‘last’ puts NaNs at the end. Not implemented for MultiIndex.
- sort_remainingbool, default True
If True and sorting by level and index is multilevel, sort by other levels too (in order) after sorting by specified level.
- ignore_indexbool, default False
If True, the resulting axis will be labeled 0, 1, …, n - 1.
- keycallable, optional
If not None, apply the key function to the index values before sorting. This is similar to the key argument in the builtin
sorted() function, with the notable difference that this key function should be vectorized. It should expect an Index and return an Index of the same shape.
Returns¶
- Series or None
The original Series sorted by the labels or None if inplace=True.
See Also¶
DataFrame.sort_index : Sort DataFrame by the index.
DataFrame.sort_values : Sort DataFrame by the value.
Series.sort_values : Sort Series by the value.
Examples¶
>>> s = pd.Series(["a", "b", "c", "d"], index=[3, 2, 1, 4])
>>> s.sort_index()
1    c
2    b
3    a
4    d
dtype: str
Sort Descending
>>> s.sort_index(ascending=False)
4    d
3    a
2    b
1    c
dtype: str
By default NaNs are put at the end, but use na_position to place them at the beginning
>>> s = pd.Series(["a", "b", "c", "d"], index=[3, 2, 1, np.nan])
>>> s.sort_index(na_position="first")
NaN    d
1.0    c
2.0    b
3.0    a
dtype: str
Specify index level to sort
>>> arrays = [
...     np.array(["qux", "qux", "foo", "foo", "baz", "baz", "bar", "bar"]),
...     np.array(["two", "one", "two", "one", "two", "one", "two", "one"]),
... ]
>>> s = pd.Series([1, 2, 3, 4, 5, 6, 7, 8], index=arrays)
>>> s.sort_index(level=1)
bar  one    8
baz  one    6
foo  one    4
qux  one    2
bar  two    7
baz  two    5
foo  two    3
qux  two    1
dtype: int64
Does not sort by remaining levels when sorting by levels
>>> s.sort_index(level=1, sort_remaining=False)
qux  one    2
foo  one    4
baz  one    6
bar  one    8
qux  two    1
foo  two    3
baz  two    5
bar  two    7
dtype: int64
Apply a key function before sorting
>>> s = pd.Series([1, 2, 3, 4], index=["A", "b", "C", "d"])
>>> s.sort_index(key=lambda x: x.str.lower())
A    1
b    2
C    3
d    4
dtype: int64
- take(*args, **kwargs)¶
Return the elements in the given positional indices along an axis.
This means that we are not indexing according to actual values in the index attribute of the object. We are indexing according to the actual position of the element in the object.
Parameters¶
- indicesarray-like
An array of ints indicating which positions to take.
- axis{0 or ‘index’, 1 or ‘columns’}, default 0
The axis on which to select elements.
0 means that we are selecting rows, 1 means that we are selecting columns. For Series this parameter is unused and defaults to 0.
- **kwargs
For compatibility with numpy.take(). Has no effect on the output.
Returns¶
- same type as caller
An array-like containing the elements taken from the object.
See Also¶
DataFrame.loc : Select a subset of a DataFrame by labels.
DataFrame.iloc : Select a subset of a DataFrame by positions.
numpy.take : Take elements from an array along an axis.
Examples¶
>>> df = pd.DataFrame(
...     [
...         ("falcon", "bird", 389.0),
...         ("parrot", "bird", 24.0),
...         ("lion", "mammal", 80.5),
...         ("monkey", "mammal", np.nan),
...     ],
...     columns=["name", "class", "max_speed"],
...     index=[0, 2, 3, 1],
... )
>>> df
     name   class  max_speed
0  falcon    bird      389.0
2  parrot    bird       24.0
3    lion  mammal       80.5
1  monkey  mammal        NaN
Take elements at positions 0 and 3 along the axis 0 (default).
Note how the actual indices selected (0 and 1) do not correspond to our selected indices 0 and 3. That’s because we are selecting the 0th and 3rd rows, not rows whose indices equal 0 and 3.
>>> df.take([0, 3])
     name   class  max_speed
0  falcon    bird      389.0
1  monkey  mammal        NaN
Take elements at indices 1 and 2 along the axis 1 (column selection).
>>> df.take([1, 2], axis=1)
    class  max_speed
0    bird      389.0
2    bird       24.0
3  mammal       80.5
1  mammal        NaN
We may take elements using negative integers for positive indices, starting from the end of the object, just like with Python lists.
>>> df.take([-1, -2])
     name   class  max_speed
1  monkey  mammal        NaN
3    lion  mammal       80.5
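Positional selection with negative indices behaves just like Python list indexing, which can be sketched without pandas at all. The `take` helper here is an illustrative stand-in, not the pandas implementation:

```python
# Select elements by position; negative positions count from the end,
# exactly as with built-in list indexing.
def take(values, indices):
    """Return the elements of values at the given positions."""
    return [values[i] for i in indices]

names = ["falcon", "parrot", "lion", "monkey"]
print(take(names, [0, 3]))    # ['falcon', 'monkey']
print(take(names, [-1, -2]))  # ['monkey', 'lion']
```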
- apply(func, convert_dtype: bool | None = None, args=(), **kwargs)¶
Invoke function on values of Series.
Can be ufunc (a NumPy function that applies to the entire Series) or a Python function that only works on single values.
Parameters¶
- funcfunction
Python function or NumPy ufunc to apply.
- argstuple
Positional arguments passed to func after the series value.
- by_rowFalse or “compat”, default “compat”
If "compat" and func is a callable, func will be passed each element of the Series, like Series.map. If func is a list or dict of callables, it will first try to translate each func into pandas methods. If that doesn’t work, it will try to call apply again with by_row="compat" and if that fails, will call apply again with by_row=False (backward compatible). If False, the func will be passed the whole Series at once. by_row has no effect when func is a string.
Added in version 2.1.0.
- **kwargs
Additional keyword arguments passed to func.
Returns¶
- Series or DataFrame
If func returns a Series object the result will be a DataFrame.
See Also¶
Series.map : For element-wise operations.
Series.agg : Only perform aggregating type operations.
Series.transform : Only perform transforming type operations.
Notes¶
Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See gotchas.udf-mutation for more details.
Examples¶
Create a series with typical summer temperatures for each city.
>>> s = pd.Series([20, 21, 12], index=["London", "New York", "Helsinki"])
>>> s
London      20
New York    21
Helsinki    12
dtype: int64
Square the values by defining a function and passing it as an argument to apply().
>>> def square(x):
...     return x**2
>>> s.apply(square)
London      400
New York    441
Helsinki    144
dtype: int64
Square the values by passing an anonymous function as an argument to apply().
>>> s.apply(lambda x: x**2)
London      400
New York    441
Helsinki    144
dtype: int64
Define a custom function that needs additional positional arguments and pass these additional arguments using the args keyword.
>>> def subtract_custom_value(x, custom_value):
...     return x - custom_value
>>> s.apply(subtract_custom_value, args=(5,))
London      15
New York    16
Helsinki     7
dtype: int64
Define a custom function that takes keyword arguments and pass these arguments to apply.
>>> def add_custom_values(x, **kwargs):
...     for month in kwargs:
...         x += kwargs[month]
...     return x
>>> s.apply(add_custom_values, june=30, july=20, august=25)
London      95
New York    96
Helsinki    87
dtype: int64
Use a function from the Numpy library.
>>> s.apply(np.log)
London      2.995732
New York    3.044522
Helsinki    2.484907
dtype: float64
- isna() pandas.Series¶
Detect missing values.
Historically, NA values in a GeoSeries could be represented by empty geometric objects, in addition to standard representations such as None and np.nan. This behaviour is changed in version 0.6.0, and now only actual missing values return True. To detect empty geometries, use GeoSeries.is_empty instead.
Returns¶
A boolean pandas Series of the same size as the GeoSeries, True where a value is NA.
Examples¶
>>> from shapely.geometry import Polygon
>>> s = geopandas.GeoSeries(
...     [Polygon([(0, 0), (1, 1), (0, 1)]), None, Polygon([])]
... )
>>> s
0    POLYGON ((0 0, 1 1, 0 1, 0 0))
1                              None
2                     POLYGON EMPTY
dtype: geometry

>>> s.isna()
0    False
1     True
2    False
dtype: bool
See Also¶
GeoSeries.notna : inverse of isna
GeoSeries.is_empty : detect empty geometries
- isnull() pandas.Series¶
Alias for isna method. See isna for more detail.
- notna() pandas.Series¶
Detect non-missing values.
Historically, NA values in a GeoSeries could be represented by empty geometric objects, in addition to standard representations such as None and np.nan. This behaviour is changed in version 0.6.0, and now only actual missing values return False. To detect empty geometries, use ~GeoSeries.is_empty instead.
Returns¶
A boolean pandas Series of the same size as the GeoSeries, False where a value is NA.
Examples¶
>>> from shapely.geometry import Polygon
>>> s = geopandas.GeoSeries(
...     [Polygon([(0, 0), (1, 1), (0, 1)]), None, Polygon([])]
... )
>>> s
0    POLYGON ((0 0, 1 1, 0 1, 0 0))
1                              None
2                     POLYGON EMPTY
dtype: geometry

>>> s.notna()
0     True
1    False
2     True
dtype: bool
See Also¶
GeoSeries.isna : inverse of notna
GeoSeries.is_empty : detect empty geometries
- notnull() pandas.Series¶
Alias for notna method. See notna for more detail.
- fillna(value=None, inplace: bool = False, limit=None, **kwargs)¶
Fill NA values with geometry (or geometries).
Parameters¶
- valueshapely geometry or GeoSeries, default None
If None is passed, NA values will be filled with GEOMETRYCOLLECTION EMPTY. If a shapely geometry object is passed, it will be used to fill all missing values. If a
GeoSeries or GeometryArray are passed, missing values will be filled based on the corresponding index locations. If pd.NA or np.nan are passed, values will be filled with None (not GEOMETRYCOLLECTION EMPTY).
- limitint, default None
This is the maximum number of entries along the entire axis where NaNs will be filled. Must be greater than 0 if not None.
Returns¶
GeoSeries
Examples¶
>>> from shapely.geometry import Polygon
>>> s = geopandas.GeoSeries(
...     [
...         Polygon([(0, 0), (1, 1), (0, 1)]),
...         None,
...         Polygon([(0, 0), (-1, 1), (0, -1)]),
...     ]
... )
>>> s
0      POLYGON ((0 0, 1 1, 0 1, 0 0))
1                                None
2    POLYGON ((0 0, -1 1, 0 -1, 0 0))
dtype: geometry
Filled with an empty polygon.
>>> s.fillna()
0      POLYGON ((0 0, 1 1, 0 1, 0 0))
1            GEOMETRYCOLLECTION EMPTY
2    POLYGON ((0 0, -1 1, 0 -1, 0 0))
dtype: geometry
Filled with a specific polygon.
>>> s.fillna(Polygon([(0, 1), (2, 1), (1, 2)]))
0      POLYGON ((0 0, 1 1, 0 1, 0 0))
1      POLYGON ((0 1, 2 1, 1 2, 0 1))
2    POLYGON ((0 0, -1 1, 0 -1, 0 0))
dtype: geometry
Filled with another GeoSeries.
>>> from shapely.geometry import Point
>>> s_fill = geopandas.GeoSeries(
...     [
...         Point(0, 0),
...         Point(1, 1),
...         Point(2, 2),
...     ]
... )
>>> s.fillna(s_fill)
0      POLYGON ((0 0, 1 1, 0 1, 0 0))
1                         POINT (1 1)
2    POLYGON ((0 0, -1 1, 0 -1, 0 0))
dtype: geometry
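The index-aligned filling shown above, where only the missing entry at label 1 takes its value from the fill series, can be sketched with plain dicts. The `fillna` helper here is a hypothetical stand-in for illustration, not the GeoPandas implementation:

```python
# A missing (None) value is replaced by the fill value stored under
# the same label; present values are left untouched.
def fillna(series, fill):
    """Replace None values using the fill mapping's matching labels."""
    return {
        label: (fill.get(label) if value is None else value)
        for label, value in series.items()
    }

s = {0: "POLYGON A", 1: None, 2: "POLYGON B"}
s_fill = {0: "POINT (0 0)", 1: "POINT (1 1)", 2: "POINT (2 2)"}
print(fillna(s, s_fill))
# {0: 'POLYGON A', 1: 'POINT (1 1)', 2: 'POLYGON B'}
```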
See Also¶
GeoSeries.isna : detect missing values
- plot(*args, **kwargs)¶
- explore(*args, **kwargs)¶
Explore with an interactive map based on folium/leaflet.js.
- explode(ignore_index=False, index_parts=False) GeoSeries¶
Explode multi-part geometries into multiple single geometries.
Single rows can become multiple rows. This is analogous to PostGIS’s ST_Dump(). The ‘path’ index is the second level of the returned MultiIndex.
Parameters¶
- ignore_indexbool, default False
If True, the resulting index will be labelled 0, 1, …, n - 1, ignoring index_parts.
- index_partsboolean, default False
If True, the resulting index will be a multi-index (original index with an additional level indicating the multiple geometries: a new zero-based index for each single part geometry per multi-part geometry).
Returns¶
A GeoSeries with a MultiIndex. The levels of the MultiIndex are the original index and a zero-based integer index that counts the number of single geometries within a multi-part geometry.
Examples¶
>>> from shapely.geometry import MultiPoint
>>> s = geopandas.GeoSeries(
...     [MultiPoint([(0, 0), (1, 1)]), MultiPoint([(2, 2), (3, 3), (4, 4)])]
... )
>>> s
0           MULTIPOINT ((0 0), (1 1))
1    MULTIPOINT ((2 2), (3 3), (4 4))
dtype: geometry

>>> s.explode(index_parts=True)
0  0    POINT (0 0)
   1    POINT (1 1)
1  0    POINT (2 2)
   1    POINT (3 3)
   2    POINT (4 4)
dtype: geometry
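The index_parts=True labelling can be sketched over plain sequences: each part of a multi-part value gets the original label plus a zero-based part counter, mirroring the MultiIndex levels. The `explode` generator is an illustrative stand-in, not the GeoPandas implementation:

```python
# Yield ((original_index, part_index), part) for every single part of
# every multi-part value, like the (index, path) MultiIndex above.
def explode(multiparts):
    for orig_idx, parts in enumerate(multiparts):
        for part_idx, part in enumerate(parts):
            yield (orig_idx, part_idx), part

multipoints = [[(0, 0), (1, 1)], [(2, 2), (3, 3), (4, 4)]]
for label, point in explode(multipoints):
    print(label, point)
```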
See Also¶
GeoDataFrame.explode
- set_crs(crs: Any | None = None, epsg: int | None = None, inplace: bool = False, allow_override: bool = False)¶
Set the Coordinate Reference System (CRS) of a GeoSeries.
Pass None to remove CRS from the GeoSeries.
Notes¶
The underlying geometries are not transformed to this CRS. To transform the geometries to a new CRS, use the to_crs method.
Parameters¶
- crspyproj.CRS | None, optional
The value can be anything accepted by
pyproj.CRS.from_user_input(), such as an authority string (eg “EPSG:4326”) or a WKT string.
- epsgint, optional if crs is specified
EPSG code specifying the projection.
- inplacebool, default False
If True, the CRS of the GeoSeries will be changed in place (while still returning the result) instead of making a copy of the GeoSeries.
- allow_overridebool, default False
If the GeoSeries already has a CRS, allow replacing the existing CRS, even when both are not equal.
Returns¶
GeoSeries
Examples¶
>>> from shapely.geometry import Point
>>> s = geopandas.GeoSeries([Point(1, 1), Point(2, 2), Point(3, 3)])
>>> s
0    POINT (1 1)
1    POINT (2 2)
2    POINT (3 3)
dtype: geometry
Setting CRS to a GeoSeries without one:
>>> s.crs is None True
>>> s = s.set_crs('epsg:3857')
>>> s.crs
<Projected CRS: EPSG:3857>
Name: WGS 84 / Pseudo-Mercator
Axis Info [cartesian]:
- X[east]: Easting (metre)
- Y[north]: Northing (metre)
Area of Use:
- name: World - 85°S to 85°N
- bounds: (-180.0, -85.06, 180.0, 85.06)
Coordinate Operation:
- name: Popular Visualisation Pseudo-Mercator
- method: Popular Visualisation Pseudo Mercator
Datum: World Geodetic System 1984
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich
Overriding existing CRS:
>>> s = s.set_crs(4326, allow_override=True)
Without allow_override=True, set_crs returns an error if you try to override CRS.
See Also¶
GeoSeries.to_crs : re-project to another CRS
- to_crs(crs: Any | None = None, epsg: int | None = None) GeoSeries¶
Return a GeoSeries with all geometries transformed to a new coordinate reference system.
Transform all geometries in a GeoSeries to a different coordinate reference system. The crs attribute on the current GeoSeries must be set. Either crs or epsg may be specified for output.
This method will transform all points in all objects. It has no notion of projecting entire geometries. All segments joining points are assumed to be lines in the current projection, not geodesics. Objects crossing the dateline (or other projection boundary) will have undesirable behavior.
Parameters¶
- crspyproj.CRS, optional if epsg is specified
The value can be anything accepted by
pyproj.CRS.from_user_input(), such as an authority string (eg “EPSG:4326”) or a WKT string.
- epsgint, optional if crs is specified
EPSG code specifying output projection.
Returns¶
GeoSeries
Examples¶
>>> from shapely.geometry import Point
>>> s = geopandas.GeoSeries([Point(1, 1), Point(2, 2), Point(3, 3)], crs=4326)
>>> s
0    POINT (1 1)
1    POINT (2 2)
2    POINT (3 3)
dtype: geometry
>>> s.crs
<Geographic 2D CRS: EPSG:4326>
Name: WGS 84
Axis Info [ellipsoidal]:
- Lat[north]: Geodetic latitude (degree)
- Lon[east]: Geodetic longitude (degree)
Area of Use:
- name: World
- bounds: (-180.0, -90.0, 180.0, 90.0)
Datum: World Geodetic System 1984
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich

>>> s = s.to_crs(3857)
>>> s
0    POINT (111319.491 111325.143)
1    POINT (222638.982 222684.209)
2    POINT (333958.472 334111.171)
dtype: geometry
>>> s.crs
<Projected CRS: EPSG:3857>
Name: WGS 84 / Pseudo-Mercator
Axis Info [cartesian]:
- X[east]: Easting (metre)
- Y[north]: Northing (metre)
Area of Use:
- name: World - 85°S to 85°N
- bounds: (-180.0, -85.06, 180.0, 85.06)
Coordinate Operation:
- name: Popular Visualisation Pseudo-Mercator
- method: Popular Visualisation Pseudo Mercator
Datum: World Geodetic System 1984
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich
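The EPSG:4326 to EPSG:3857 numbers above follow the spherical (Pseudo-)Mercator forward formulas, which can be checked by hand with the standard library. This sketch covers only this one well-known transformation; real reprojection goes through pyproj:

```python
import math

R = 6378137.0  # WGS 84 semi-major axis used by Pseudo-Mercator (metres)

def to_web_mercator(lon, lat):
    """Spherical Mercator forward projection (EPSG:3857), in degrees."""
    x = R * math.radians(lon)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat) / 2))
    return x, y

# Reproduces the first point shown above: POINT (111319.491 111325.143)
x, y = to_web_mercator(1, 1)
print(f"{x:.3f} {y:.3f}")
```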
See Also¶
GeoSeries.set_crs : assign CRS
- estimate_utm_crs(datum_name: str = 'WGS 84')¶
Return the estimated UTM CRS based on the bounds of the dataset.
Added in version 0.9.
Parameters¶
- datum_name : str, optional
The name of the datum to use in the query. Default is WGS 84.
Returns¶
pyproj.CRS
Examples¶
>>> import geodatasets
>>> df = geopandas.read_file(
...     geodatasets.get_path("geoda.chicago_health")
... )
>>> df.geometry.estimate_utm_crs()
<Derived Projected CRS: EPSG:32616>
Name: WGS 84 / UTM zone 16N
Axis Info [cartesian]:
- E[east]: Easting (metre)
- N[north]: Northing (metre)
Area of Use:
- name: Between 90°W and 84°W, northern hemisphere between equator and 84°N, ...
- bounds: (-90.0, 0.0, -84.0, 84.0)
Coordinate Operation:
- name: UTM zone 16N
- method: Transverse Mercator
Datum: World Geodetic System 1984 ensemble
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich
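The zone arithmetic behind the EPSG:32616 result above is simple to sketch. The real method queries pyproj's CRS database; this minimal, hypothetical helper (estimate_utm_epsg is not part of the API) only shows the core idea of mapping a lon/lat bounding box to a UTM EPSG code:

```python
def estimate_utm_epsg(minx: float, miny: float, maxx: float, maxy: float) -> int:
    # UTM zones are 6 degrees wide, numbered 1..60 starting at 180°W.
    # WGS 84 / UTM EPSG codes are 326xx (north) and 327xx (south).
    lon = (minx + maxx) / 2
    lat = (miny + maxy) / 2
    zone = int((lon + 180) // 6) + 1
    return (32600 if lat >= 0 else 32700) + zone

# Chicago sits near (-87.6, 41.8), which lands in UTM zone 16N:
print(estimate_utm_epsg(-87.9, 41.6, -87.5, 42.0))  # 32616
```

The real implementation additionally handles other datums, the anti-meridian, and datasets spanning multiple zones, so prefer estimate_utm_crs() in practice.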
- to_json(show_bbox: bool = True, drop_id: bool = False, to_wgs84: bool = False, **kwargs) str¶
Return a GeoJSON string representation of the GeoSeries.
Parameters¶
- show_bbox : bool, optional, default: True
Include bbox (bounds) in the geojson
- drop_id : bool, default: False
If True, the index of the GeoSeries is not written as the id property of each feature in the generated GeoJSON. Default is False, but you may want True if the index is just arbitrary row numbers.
- to_wgs84 : bool, optional, default: False
If the CRS is set on the active geometry column it is exported as WGS84 (EPSG:4326) to meet the 2016 GeoJSON specification. Set to True to force re-projection and set to False to ignore CRS. False by default.
- **kwargs
Additional keyword args will be passed to json.dumps().
Returns¶
JSON string
Examples¶
>>> from shapely.geometry import Point
>>> s = geopandas.GeoSeries([Point(1, 1), Point(2, 2), Point(3, 3)])
>>> s
0    POINT (1 1)
1    POINT (2 2)
2    POINT (3 3)
dtype: geometry
>>> s.to_json()
'{"type": "FeatureCollection", "features": [{"id": "0", "type": "Feature", "properties": {}, "geometry": {"type": "Point", "coordinates": [1.0, 1.0]}, "bbox": [1.0, 1.0, 1.0, 1.0]}, {"id": "1", "type": "Feature", "properties": {}, "geometry": {"type": "Point", "coordinates": [2.0, 2.0]}, "bbox": [2.0, 2.0, 2.0, 2.0]}, {"id": "2", "type": "Feature", "properties": {}, "geometry": {"type": "Point", "coordinates": [3.0, 3.0]}, "bbox": [3.0, 3.0, 3.0, 3.0]}], "bbox": [1.0, 1.0, 3.0, 3.0]}'
See Also¶
GeoSeries.to_file : write GeoSeries to file
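Because the returned string is ordinary GeoJSON, it round-trips through the standard library. A minimal sketch, parsing a trimmed-down version of the example output above:

```python
import json

# A single-feature FeatureCollection in the same shape as to_json() output.
geojson = (
    '{"type": "FeatureCollection", "features": ['
    '{"id": "0", "type": "Feature", "properties": {}, '
    '"geometry": {"type": "Point", "coordinates": [1.0, 1.0]}}]}'
)

fc = json.loads(geojson)
# Pull each feature's coordinates back out of the parsed structure.
coords = [f["geometry"]["coordinates"] for f in fc["features"]]
print(fc["type"], coords)  # FeatureCollection [[1.0, 1.0]]
```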
- to_wkb(hex: bool = False, **kwargs) pandas.Series¶
Convert GeoSeries geometries to WKB.
Parameters¶
- hex : bool
If true, export the WKB as a hexadecimal string. The default is to return a binary bytes object.
- kwargs
Additional keyword args will be passed to
shapely.to_wkb().
Returns¶
- Series
WKB representations of the geometries
See Also¶
GeoSeries.to_wkt
Examples¶
>>> from shapely.geometry import Point, Polygon
>>> s = geopandas.GeoSeries(
...     [
...         Point(0, 0),
...         Polygon(),
...         Polygon([(0, 0), (1, 1), (1, 0)]),
...         None,
...     ]
... )
>>> s.to_wkb()
0    b'\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00...
1    b'\x01\x03\x00\x00\x00\x00\x00\x00\x00'
2    b'\x01\x03\x00\x00\x00\x01\x00\x00\x00\x04\x00...
3    None
dtype: object
>>> s.to_wkb(hex=True)
0    010100000000000000000000000000000000000000
1    010300000000000000
2    0103000000010000000400000000000000000000000000...
3    NaN
dtype: str
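The hex output above is plain ISO WKB, so it can be decoded with the standard library alone: one byte-order flag, a uint32 geometry type, then the coordinate doubles. A minimal sketch for 2D points (decode_wkb_point is an illustrative helper, not part of the API):

```python
import struct

def decode_wkb_point(hex_wkb: str) -> tuple[float, float]:
    raw = bytes.fromhex(hex_wkb)
    # Byte 0: 1 = little-endian, 0 = big-endian.
    fmt = "<" if raw[0] == 1 else ">"
    # Bytes 1-4: geometry type (1 is a 2D Point).
    (geom_type,) = struct.unpack_from(fmt + "I", raw, 1)
    assert geom_type == 1, "not a 2D Point"
    # Bytes 5-20: x and y as IEEE 754 doubles.
    return struct.unpack_from(fmt + "2d", raw, 5)

# Row 0 of the hex example above decodes back to the origin point:
print(decode_wkb_point("010100000000000000000000000000000000000000"))  # (0.0, 0.0)
```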
- to_wkt(**kwargs) pandas.Series¶
Convert GeoSeries geometries to WKT.
Parameters¶
- kwargs
Keyword args will be passed to
shapely.to_wkt().
Returns¶
- Series
WKT representations of the geometries
Examples¶
>>> from shapely.geometry import Point
>>> s = geopandas.GeoSeries([Point(1, 1), Point(2, 2), Point(3, 3)])
>>> s
0    POINT (1 1)
1    POINT (2 2)
2    POINT (3 3)
dtype: geometry
>>> s.to_wkt()
0    POINT (1 1)
1    POINT (2 2)
2    POINT (3 3)
dtype: str
See Also¶
GeoSeries.to_wkb
- to_arrow(geometry_encoding='WKB', interleaved=True, include_z=None)¶
Encode a GeoSeries to GeoArrow format.
See https://geoarrow.org/ for details on the GeoArrow specification.
This function returns a generic Arrow array object implementing the Arrow PyCapsule Protocol (i.e. having an
__arrow_c_array__ method). This object can then be consumed by your Arrow implementation of choice that supports this protocol.
Added in version 1.0.
Parameters¶
- geometry_encoding : {'WKB', 'geoarrow'}, default 'WKB'
The GeoArrow encoding to use for the data conversion.
- interleaved : bool, default True
Only relevant for ‘geoarrow’ encoding. If True, the geometries’ coordinates are interleaved in a single fixed size list array. If False, the coordinates are stored as separate arrays in a struct type.
- include_z : bool, default None
Only relevant for ‘geoarrow’ encoding (for WKB, the dimensionality of the individual geometries is preserved). If False, return 2D geometries. If True, include the third dimension in the output (if a geometry has no third dimension, the z-coordinates will be NaN). By default, will infer the dimensionality from the input geometries. Note that this inference can be unreliable with empty geometries (for a guaranteed result, it is recommended to specify the keyword).
Returns¶
- GeoArrowArray
A generic Arrow array object with geometry data encoded to GeoArrow.
Examples¶
>>> from shapely.geometry import Point
>>> gser = geopandas.GeoSeries([Point(1, 2), Point(2, 1)])
>>> gser
0    POINT (1 2)
1    POINT (2 1)
dtype: geometry
>>> arrow_array = gser.to_arrow()
>>> arrow_array
<geopandas.io._geoarrow.GeoArrowArray object at ...>
The returned array object needs to be consumed by a library implementing the Arrow PyCapsule Protocol. For example, wrapping the data as a pyarrow.Array (requires pyarrow >= 14.0):
>>> import pyarrow as pa
>>> array = pa.array(arrow_array)
>>> array
GeometryExtensionArray:WkbType(geoarrow.wkb)[2]
<POINT (1 2)>
<POINT (2 1)>
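The interleaved parameter controls which of two GeoArrow coordinate layouts is produced. A pure-Python sketch of the two layouts for the points above (the real encoding builds Arrow buffers, not Python lists):

```python
points = [(1.0, 2.0), (2.0, 1.0)]

# interleaved=True: one fixed-size-list buffer [x0, y0, x1, y1, ...].
interleaved = [c for xy in points for c in xy]

# interleaved=False: separate x and y child arrays in a struct type.
struct_layout = {
    "x": [x for x, _ in points],
    "y": [y for _, y in points],
}

print(interleaved)    # [1.0, 2.0, 2.0, 1.0]
print(struct_layout)  # {'x': [1.0, 2.0], 'y': [2.0, 1.0]}
```

Interleaved storage keeps each point's coordinates adjacent in memory, while the struct layout lets consumers read a single axis without striding.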
- clip(mask, keep_geom_type: bool = False, sort=False) GeoSeries¶
Clip points, lines, or polygon geometries to the mask extent.
Both layers must be in the same Coordinate Reference System (CRS). The GeoSeries will be clipped to the full extent of the mask object.
If there are multiple polygons in mask, data from the GeoSeries will be clipped to the total boundary of all polygons in mask.
Parameters¶
- mask : GeoDataFrame, GeoSeries, (Multi)Polygon, list-like
Polygon vector layer used to clip gdf. The mask's geometry is dissolved into one geometric feature and intersected with GeoSeries. If the mask is list-like with four elements (minx, miny, maxx, maxy), clip will use a faster rectangle clipping (clip_by_rect()), possibly leading to slightly different results.
- keep_geom_type : boolean, default False
If True, return only geometries of original type in case of intersection resulting in multiple geometry types or GeometryCollections. If False, return all resulting geometries (potentially mixed-types).
- sort : boolean, default False
If True, the order of rows in the clipped GeoSeries will be preserved at small performance cost. If False, the order of rows in the clipped GeoSeries will be random.
Returns¶
- GeoSeries
Vector data (points, lines, polygons) from gdf clipped to polygon boundary from mask.
See Also¶
clip : top-level function for clip
Examples¶
Clip points (grocery stores) with polygons (the Near West Side community):
>>> import geodatasets
>>> chicago = geopandas.read_file(
...     geodatasets.get_path("geoda.chicago_health")
... )
>>> near_west_side = chicago[chicago["community"] == "NEAR WEST SIDE"]
>>> groceries = geopandas.read_file(
...     geodatasets.get_path("geoda.groceries")
... ).to_crs(chicago.crs)
>>> groceries.shape
(148, 8)
>>> nws_groceries = groceries.geometry.clip(near_west_side)
>>> nws_groceries.shape
(7,)
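When the mask is a (minx, miny, maxx, maxy) list-like, the rectangle fast path applies; for point geometries that reduces to a bounds test. A minimal host-side sketch of that filter (clip_points_by_rect is an illustrative helper, not part of the API):

```python
def clip_points_by_rect(points, bounds):
    # Keep only points inside (or on the edge of) the clip rectangle.
    minx, miny, maxx, maxy = bounds
    return [
        (x, y)
        for x, y in points
        if minx <= x <= maxx and miny <= y <= maxy
    ]

pts = [(0.5, 0.5), (2.0, 2.0), (1.0, 0.0)]
print(clip_points_by_rect(pts, (0, 0, 1, 1)))  # [(0.5, 0.5), (1.0, 0.0)]
```

Lines and polygons need actual geometric intersection against the rectangle edges, which is why clip_by_rect() can produce slightly different results than a dissolved-mask intersection.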
- vibespatial.list_layers(filename) pandas.DataFrame¶
List layers available in a file.
Provides an overview of layers available in a file or URL together with their geometry types. When supported by the data source, this includes both spatial and non-spatial layers. Non-spatial layers are indicated by the
"geometry_type" column being None. GeoPandas will not read such layers but they can be read into a pd.DataFrame using pyogrio.read_dataframe().
Parameters¶
- filename : str, path object or file-like object
Either the absolute or relative path to the file or URL to be opened, or any object with a read() method (such as an open file or StringIO)
Returns¶
- pandas.DataFrame
A DataFrame with columns “name” and “geometry_type” and one row per layer.
- vibespatial.points_from_xy(x: numpy.typing.ArrayLike, y: numpy.typing.ArrayLike, z: numpy.typing.ArrayLike = None, crs: Any | None = None) GeometryArray¶
Generate GeometryArray of shapely Point geometries from x, y(, z) coordinates.
In case of geographic coordinates, it is assumed that longitude is captured by
x coordinates and latitude by y.
Parameters¶
x, y, z : iterable
crs : value, optional
Coordinate Reference System of the geometry objects. Can be anything accepted by pyproj.CRS.from_user_input(), such as an authority string (e.g. "EPSG:4326") or a WKT string.
Examples¶
>>> import pandas as pd
>>> df = pd.DataFrame({'x': [0, 1, 2], 'y': [0, 1, 2], 'z': [0, 1, 2]})
>>> df
   x  y  z
0  0  0  0
1  1  1  1
2  2  2  2
>>> geometry = geopandas.points_from_xy(x=[1, 0], y=[0, 1])
>>> geometry = geopandas.points_from_xy(df['x'], df['y'], df['z'])
>>> gdf = geopandas.GeoDataFrame(
...     df, geometry=geopandas.points_from_xy(df['x'], df['y']))
Having geographic coordinates:
>>> df = pd.DataFrame({'longitude': [-140, 0, 123], 'latitude': [-65, 1, 48]})
>>> df
   longitude  latitude
0       -140       -65
1          0         1
2        123        48
>>> geometry = geopandas.points_from_xy(df.longitude, df.latitude, crs="EPSG:4326")
Returns¶
output : GeometryArray
- vibespatial.read_feather(path, columns=None, to_pandas_kwargs=None, **kwargs)¶
Load a Feather object from the file path, returning a GeoDataFrame.
You can read a subset of columns in the file using the columns parameter. However, the structure of the returned GeoDataFrame will depend on which columns you read:
- if no geometry columns are read, this will raise a ValueError - you should use the pandas read_feather method instead.
- if the primary geometry column saved to this file is not included in columns, the first available geometry column will be set as the geometry column of the returned GeoDataFrame.
Supports versions 0.1.0, 0.4.0, 1.0.0 and 1.1.0 of the GeoParquet specification at: https://github.com/opengeospatial/geoparquet
If the 'crs' key is not present in the Feather metadata associated with the file, it will default to "OGC:CRS84" according to the specification.
Requires ‘pyarrow’ >= 0.17.
Added in version 0.8.
Parameters¶
- path : str, path object or file-like object
String, path object (implementing os.PathLike[str]) or file-like object implementing a binary read() function.
- columns : list-like of strings, default=None
If not None, only these columns will be read from the file. If the primary geometry column is not included, the first secondary geometry read from the file will be set as the geometry column of the returned GeoDataFrame. If no geometry columns are present, a ValueError will be raised.
- to_pandas_kwargs : dict, optional
Arguments passed to the pa.Table.to_pandas method for non-geometry columns. This can be used to control the behavior of the conversion of the non-geometry columns to a pandas DataFrame. For example, you can use this to control the dtype conversion of the columns. By default, the to_pandas method is called with no additional arguments.
- **kwargs
Any additional kwargs passed to pyarrow.feather.read_table().
Returns¶
GeoDataFrame
Examples¶
>>> df = geopandas.read_feather("data.feather")
Specifying columns to read:
>>> df = geopandas.read_feather(
...     "data.feather",
...     columns=["geometry", "pop_est"]
... )
See the read_parquet docs for examples of reading and writing to/from bytes objects.
- vibespatial.read_file(filename, bbox=None, mask=None, columns=None, rows=None, engine=None, **kwargs)¶
Read a vector file into a GeoDataFrame.
Supports Shapefile, GeoPackage, GeoJSON, and any format readable by pyogrio/fiona. For GeoJSON and Shapefile inputs the reader attempts a GPU-accelerated owned path first; other formats fall back to pyogrio.
Aliased as vibespatial.read_file().
Parameters¶
- filename : str or Path
Path to the vector file.
- bbox : tuple of (minx, miny, maxx, maxy), optional
Spatial filter bounding box.
- mask : Geometry or GeoDataFrame, optional
Spatial filter mask geometry.
- columns : list of str, optional
Subset of columns to read.
- rows : int or slice, optional
Subset of rows to read.
- engine : str, optional
Force a specific I/O engine ("pyogrio" or "fiona").
- **kwargs
Passed through to the underlying engine.
Returns¶
GeoDataFrame
- vibespatial.read_parquet(path, *, columns=None, storage_options=None, bbox=None, to_pandas_kwargs=None, **kwargs)¶
Read a GeoParquet file into a GeoDataFrame.
When PyArrow is available the reader plans row-group selection from spatial metadata, decodes WKB geometry on GPU when possible, and produces device-resident OwnedGeometryArray without host round-trips.
Aliased as vibespatial.read_parquet().
Parameters¶
- path : str or Path
Path to the GeoParquet file.
- columns : list of str, optional
Subset of columns to read.
- storage_options : dict, optional
Storage options for fsspec-compatible filesystems.
- bbox : tuple of (minx, miny, maxx, maxy), optional
Spatial filter bounding box for row-group pruning.
- to_pandas_kwargs : dict, optional
Extra keyword arguments passed to pyarrow.Table.to_pandas().
- **kwargs
Passed through to the underlying Parquet reader.
Passed through to the underlying Parquet reader.
Returns¶
GeoDataFrame
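Row-group pruning with the bbox parameter boils down to a rectangle-overlap test against each row group's stored bounds. A sketch with hypothetical per-group (minx, miny, maxx, maxy) statistics (plan_row_groups is an illustrative helper, not part of the API):

```python
def plan_row_groups(group_bounds, bbox):
    # Keep a row group if its bounds rectangle overlaps the query bbox.
    minx, miny, maxx, maxy = bbox
    return [
        i
        for i, (gminx, gminy, gmaxx, gmaxy) in enumerate(group_bounds)
        if gminx <= maxx and gmaxx >= minx and gminy <= maxy and gmaxy >= miny
    ]

# Three row groups; the middle one is nowhere near the query window.
groups = [(-90, 40, -85, 45), (0, 0, 10, 10), (-88, 41, -87, 42)]
print(plan_row_groups(groups, (-88.0, 41.5, -87.0, 42.5)))  # [0, 2]
```

Only the selected row groups are then read and decoded, which is why a tight bbox can make reads of large files dramatically cheaper.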
- vibespatial.options¶
- vibespatial.sjoin_nearest(left_df: vibespatial.api.GeoDataFrame, right_df: vibespatial.api.GeoDataFrame, how: str = 'inner', max_distance: float | None = None, lsuffix: str = 'left', rsuffix: str = 'right', distance_col: str | None = None, exclusive: bool = False) vibespatial.api.GeoDataFrame¶
Spatial join of two GeoDataFrames based on the distance between their geometries.
Results will include multiple output records for a single input record where there are multiple equidistant nearest or intersected neighbors.
Distance is calculated in CRS units and can be returned using the distance_col parameter.
See the User Guide page https://geopandas.readthedocs.io/en/latest/docs/user_guide/mergingdata.html for more details.
Parameters¶
left_df, right_df : GeoDataFrames
- how : string, default 'inner'
The type of join:
- 'left': use keys from left_df; retain only left_df geometry column
- 'right': use keys from right_df; retain only right_df geometry column
- 'inner': use intersection of keys from both dfs; retain only left_df geometry column
- max_distance : float, default None
Maximum distance within which to query for nearest geometry. Must be greater than 0. The max_distance used to search for nearest items in the tree may have a significant impact on performance by reducing the number of input geometries that are evaluated for nearest items in the tree.
- lsuffix : string, default 'left'
Suffix to apply to overlapping column names (left GeoDataFrame).
- rsuffix : string, default 'right'
Suffix to apply to overlapping column names (right GeoDataFrame).
- distance_col : string, default None
If set, save the distances computed between matching geometries under a column of this name in the joined GeoDataFrame.
- exclusive : bool, default False
If True, nearest geometries that are equal to the input geometry will not be returned.
Examples¶
>>> import geodatasets
>>> groceries = geopandas.read_file(
...     geodatasets.get_path("geoda.groceries")
... )
>>> chicago = geopandas.read_file(
...     geodatasets.get_path("geoda.chicago_health")
... ).to_crs(groceries.crs)
>>> chicago.head()
   ComAreaID  ...                                           geometry
0         35  ...  POLYGON ((-87.60914 41.84469, -87.60915 41.844...
1         36  ...  POLYGON ((-87.59215 41.81693, -87.59231 41.816...
2         37  ...  POLYGON ((-87.62880 41.80189, -87.62879 41.801...
3         38  ...  POLYGON ((-87.60671 41.81681, -87.60670 41.816...
4         39  ...  POLYGON ((-87.59215 41.81693, -87.59215 41.816...
[5 rows x 87 columns]
>>> groceries.head()
   OBJECTID     Ycoord  ...  Category                           geometry
0        16  41.973266  ...       NaN  MULTIPOINT ((-87.65661 41.97321))
1        18  41.696367  ...       NaN  MULTIPOINT ((-87.68136 41.69713))
2        22  41.868634  ...       NaN  MULTIPOINT ((-87.63918 41.86847))
3        23  41.877590  ...       new  MULTIPOINT ((-87.65495 41.87783))
4        27  41.737696  ...       NaN  MULTIPOINT ((-87.62715 41.73623))
[5 rows x 8 columns]
>>> groceries_w_communities = geopandas.sjoin_nearest(groceries, chicago)
>>> groceries_w_communities[["Chain", "community", "geometry"]].head(2)
               Chain    community                                geometry
0     VIET HOA PLAZA       UPTOWN   MULTIPOINT ((1168268.672 1933554.35))
1  COUNTY FAIR FOODS  MORGAN PARK  MULTIPOINT ((1162302.618 1832900.224))
To include the distances:
>>> groceries_w_communities = geopandas.sjoin_nearest(groceries, chicago, distance_col="distances")
>>> groceries_w_communities[["Chain", "community", "distances"]].head(2)
               Chain    community  distances
0     VIET HOA PLAZA       UPTOWN        0.0
1  COUNTY FAIR FOODS  MORGAN PARK        0.0
In the following example, we get multiple groceries for Uptown because all results are equidistant (in this case zero because they intersect). In fact, we get 4 results in total:
>>> chicago_w_groceries = geopandas.sjoin_nearest(groceries, chicago, distance_col="distances", how="right")
>>> uptown_results = chicago_w_groceries[chicago_w_groceries["community"] == "UPTOWN"]
>>> uptown_results[["Chain", "community"]]
             Chain community
30  VIET HOA PLAZA    UPTOWN
30      JEWEL OSCO    UPTOWN
30          TARGET    UPTOWN
30       Mariano's    UPTOWN
See Also¶
sjoin : binary predicate joins
GeoDataFrame.sjoin_nearest : equivalent method
Notes¶
Since this join relies on distances, results will be inaccurate if your geometries are in a geographic CRS.
Every operation in GeoPandas is planar, i.e. the potential third dimension is not taken into account.
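The nearest-join semantics above, including the multiple-equidistant-neighbor behavior and the max_distance cutoff, can be sketched as a brute-force loop over point inputs. The real implementation uses a spatial index rather than this O(n*m) scan; nearest_join is a hypothetical illustration only:

```python
import math

def nearest_join(left, right, max_distance=None):
    # For each left point, emit (left_idx, right_idx, distance) for every
    # right point tied at the minimum distance.
    matches = []
    for i, (lx, ly) in enumerate(left):
        dists = [math.hypot(lx - rx, ly - ry) for rx, ry in right]
        best = min(dists)
        if max_distance is not None and best > max_distance:
            continue  # no neighbor within the allowed radius
        for j, d in enumerate(dists):
            if d == best:
                matches.append((i, j, best))
    return matches

left = [(0.0, 0.0)]
right = [(1.0, 0.0), (-1.0, 0.0), (5.0, 0.0)]
print(nearest_join(left, right))  # [(0, 0, 1.0), (0, 1, 1.0)]
```

Note the single left point matches two right rows because both are exactly distance 1.0 away, mirroring the multiple-record behavior documented above.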
- class vibespatial.RectClipBenchmark¶
- dataset: str¶
- rows: int¶
- candidate_rows: int¶
- fast_rows: int¶
- fallback_rows: int¶
- owned_elapsed_seconds: float¶
- shapely_elapsed_seconds: float¶
- property speedup_vs_shapely: float¶
- class vibespatial.RectClipResult(*, geometries: numpy.ndarray | None = None, geometries_factory: object | None = None, row_count: int, candidate_rows: numpy.ndarray, fast_rows: numpy.ndarray, fallback_rows: numpy.ndarray, runtime_selection: vibespatial.runtime.RuntimeSelection, precision_plan: vibespatial.runtime.precision.PrecisionPlan, robustness_plan: vibespatial.runtime.robustness.RobustnessPlan, owned_result: vibespatial.geometry.owned.OwnedGeometryArray | None = None)¶
Result of a rectangle clip operation.
geometries is lazily materialized from owned_result when accessed for the first time on the GPU point path, avoiding D->H->Shapely overhead unless a caller actually needs Shapely objects.
- row_count¶
- candidate_rows¶
- fast_rows¶
- fallback_rows¶
- runtime_selection¶
- precision_plan¶
- robustness_plan¶
- owned_result = None¶
- property geometries: numpy.ndarray¶
- vibespatial.benchmark_clip_by_rect(values: collections.abc.Sequence[object | None] | numpy.ndarray | vibespatial.geometry.owned.OwnedGeometryArray, xmin: float, ymin: float, xmax: float, ymax: float, *, dataset: str) RectClipBenchmark¶
- vibespatial.clip_by_rect_owned(values: collections.abc.Sequence[object | None] | numpy.ndarray | vibespatial.geometry.owned.OwnedGeometryArray, xmin: float, ymin: float, xmax: float, ymax: float, *, dispatch_mode: vibespatial.runtime.ExecutionMode | str = ExecutionMode.AUTO, precision: vibespatial.runtime.precision.PrecisionMode | str = PrecisionMode.AUTO) RectClipResult¶
- vibespatial.evaluate_geopandas_clip_by_rect(values: numpy.ndarray, xmin: float, ymin: float, xmax: float, ymax: float, *, prebuilt_owned: vibespatial.geometry.owned.OwnedGeometryArray | None = None) tuple[numpy.ndarray | None, vibespatial.runtime.ExecutionMode]¶
- class vibespatial.GPURepairResult¶
Result of GPU make_valid repair.
- repaired_geometries: numpy.ndarray¶
- repaired_count: int¶
- gpu_phases_used: tuple[str, Ellipsis]¶
- vibespatial.gpu_repair_invalid_polygons(owned: vibespatial.geometry.owned.OwnedGeometryArray, invalid_rows: numpy.ndarray, geometries: numpy.ndarray, *, method: str = 'linework', keep_collapsed: bool = True) GPURepairResult | None¶
GPU-resident batch repair of invalid polygon geometries (Phase 16).
Implements the full make_valid pipeline on GPU with batch processing:
1. Collect all invalid polygon coordinates into one contiguous batch
2. Phase B: Close rings, remove duplicates, fix orientation (batched)
3. Phase A+C: Detect and split self-intersections (batched)
4. Phase D: Re-polygonize via overlay half-edge/face-walk pipeline (batched)
5. Map output polygons back to global row indices
No per-polygon Python loop. No shapely.polygonize or shapely.make_valid fallback. All repair is GPU-resident.
Returns None if GPU repair is not applicable (no GPU, no polygon families, or CuPy not available).
Parameters¶
owned : OwnedGeometryArray with device_state
invalid_rows : indices of invalid rows to repair
geometries : shapely geometry array for all rows
method : repair method (only "linework" supported on GPU)
keep_collapsed : whether to keep collapsed geometries
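The compact-invalid-row pattern used by this pipeline (and by make_valid_owned per ADR-0019) can be sketched on the host: only the invalid rows are gathered into a dense batch, repaired together, then scattered back to their original row indices. compact_repair_scatter is an illustrative helper with a toy "repair" callable, not the GPU kernel itself:

```python
def compact_repair_scatter(rows, invalid_idx, repair):
    batch = [rows[i] for i in invalid_idx]    # compact: gather invalid rows
    repaired = [repair(v) for v in batch]     # batched repair over the dense batch
    out = list(rows)
    for i, v in zip(invalid_idx, repaired):   # scatter: write back by row index
        out[i] = v
    return out

# Toy "repair" that upper-cases a value to mark it as fixed.
print(compact_repair_scatter(["a", "b", "c"], [0, 2], lambda v: v.upper()))
# ['A', 'b', 'C']
```

Because valid rows never enter the batch, the cost of repair scales with the number of invalid rows rather than the full dataset.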
- class vibespatial.MakeValidBenchmark¶
- dataset: str¶
- rows: int¶
- repaired_rows: int¶
- compact_elapsed_seconds: float¶
- baseline_elapsed_seconds: float¶
- property speedup_vs_baseline: float¶
- class vibespatial.MakeValidPlan¶
- method: str¶
- keep_collapsed: bool¶
- stages: tuple[MakeValidStage, Ellipsis]¶
- fusion_steps: tuple[vibespatial.runtime.fusion.PipelineStep, Ellipsis]¶
- reason: str¶
- class vibespatial.MakeValidPrimitive¶
Enum where members are also (and must be) strings
- VALIDITY_MASK = 'validity_mask'¶
- COMPACT_INVALID = 'compact_invalid'¶
- SEGMENTIZE_INVALID = 'segmentize_invalid'¶
- POLYGONIZE_REPAIR = 'polygonize_repair'¶
- SCATTER_REPAIRED = 'scatter_repaired'¶
- EMIT_GEOMETRY = 'emit_geometry'¶
- class vibespatial.MakeValidResult¶
- geometries: numpy.ndarray¶
- row_count: int¶
- valid_rows: numpy.ndarray¶
- repaired_rows: numpy.ndarray¶
- null_rows: numpy.ndarray¶
- method: str¶
- keep_collapsed: bool¶
- owned: object | None = None¶
- selected: vibespatial.runtime.ExecutionMode¶
- class vibespatial.MakeValidStage¶
- name: str¶
- primitive: MakeValidPrimitive¶
- purpose: str¶
- inputs: tuple[str, Ellipsis]¶
- outputs: tuple[str, Ellipsis]¶
- cccl_mapping: tuple[str, Ellipsis]¶
- disposition: vibespatial.runtime.fusion.IntermediateDisposition¶
- geometry_producing: bool = False¶
- vibespatial.benchmark_make_valid(values, *, method: str = 'linework', keep_collapsed: bool = True, dataset: str = 'make-valid')¶
- vibespatial.evaluate_geopandas_make_valid(values, *, method: str = 'linework', keep_collapsed: bool = True, prebuilt_owned=None) MakeValidResult¶
Run make_valid and return the full MakeValidResult.
Returns MakeValidResult so callers can access .owned for device-resident fast paths and .selected for dispatch event accuracy.
- vibespatial.fusion_plan_for_make_valid(*, method: str = 'linework', keep_collapsed: bool = True)¶
- vibespatial.make_valid_owned(values=None, *, method: str = 'linework', keep_collapsed: bool = True, owned=None, dispatch_mode: vibespatial.runtime.ExecutionMode | str = ExecutionMode.AUTO) MakeValidResult¶
Validate and repair geometries using compact-invalid-row pattern (ADR-0019).
Parameters¶
- values : array-like of shapely geometries, optional
When owned is provided, values may be None – Shapely objects will only be materialized if GPU validity checks find invalid rows that require repair (lazy materialization per ADR-0005).
method : repair method ("linework" or "structure")
keep_collapsed : whether to keep collapsed geometries
owned : optional pre-built OwnedGeometryArray (avoids shapely->owned conversion when data is already device-resident, eliminating D->H transfer for the validity check per ADR-0005)
dispatch_mode : requested execution mode (AUTO/GPU/CPU)
- vibespatial.plan_make_valid_pipeline(*, method: str = 'linework', keep_collapsed: bool = True) MakeValidPlan¶
- class vibespatial.BufferKernelResult(*, geometries: numpy.ndarray | None = None, row_count: int, fast_rows: numpy.ndarray, fallback_rows: numpy.ndarray, owned_result: vibespatial.geometry.owned.OwnedGeometryArray | None = None)¶
Result of a buffer kernel invocation.
When owned_result is set, geometries is materialized lazily on first access so that callers that stay on the device-resident path never pay for a D->H transfer.
- row_count¶
- fast_rows¶
- fallback_rows¶
- owned_result = None¶
- property geometries: numpy.ndarray¶
- class vibespatial.OffsetCurveKernelResult¶
- geometries: numpy.ndarray¶
- row_count: int¶
- fast_rows: numpy.ndarray¶
- fallback_rows: numpy.ndarray¶
- class vibespatial.StrokeBenchmark¶
- dataset: str¶
- rows: int¶
- fast_rows: int¶
- fallback_rows: int¶
- owned_elapsed_seconds: float¶
- shapely_elapsed_seconds: float¶
- property speedup_vs_shapely: float¶
- class vibespatial.StrokeKernelPlan¶
- operation: StrokeOperation¶
- stages: tuple[StrokeKernelStage, Ellipsis]¶
- fusion_steps: tuple[vibespatial.runtime.fusion.PipelineStep, Ellipsis]¶
- reason: str¶
- class vibespatial.StrokeKernelStage¶
- name: str¶
- primitive: StrokePrimitive¶
- purpose: str¶
- inputs: tuple[str, Ellipsis]¶
- outputs: tuple[str, Ellipsis]¶
- cccl_mapping: tuple[str, Ellipsis]¶
- disposition: vibespatial.runtime.fusion.IntermediateDisposition¶
- geometry_producing: bool = False¶
- class vibespatial.StrokeOperation¶
Enum where members are also (and must be) strings
- BUFFER = 'buffer'¶
- OFFSET_CURVE = 'offset_curve'¶
- class vibespatial.StrokePrimitive¶
Enum where members are also (and must be) strings
- EXPAND_DISTANCES = 'expand_distances'¶
- EMIT_EDGE_FRAMES = 'emit_edge_frames'¶
- CLASSIFY_VERTICES = 'classify_vertices'¶
- EMIT_ARCS = 'emit_arcs'¶
- PREFIX_SUM = 'prefix_sum'¶
- SCATTER = 'scatter'¶
- EMIT_GEOMETRY = 'emit_geometry'¶
- vibespatial.benchmark_offset_curve(values, *, distance: float, join_style: str = 'mitre', dataset: str = 'offset-curve') StrokeBenchmark¶
- vibespatial.benchmark_point_buffer(values, *, distance: float, quad_segs: int = 16, dataset: str = 'point-buffer') StrokeBenchmark¶
- vibespatial.evaluate_geopandas_buffer(values, distance, *, quad_segs: int, cap_style, join_style, mitre_limit: float, single_sided: bool, prebuilt_owned=None)¶
- vibespatial.evaluate_geopandas_offset_curve(values, distance, *, quad_segs: int, join_style, mitre_limit: float)¶
- vibespatial.fusion_plan_for_stroke(operation: StrokeOperation | str)¶
- vibespatial.offset_curve_owned(values: collections.abc.Sequence[object | None] | numpy.ndarray, distance, *, quad_segs: int = 8, join_style: str = 'round', mitre_limit: float = 5.0) OffsetCurveKernelResult¶
- vibespatial.plan_stroke_kernel(operation: StrokeOperation | str) StrokeKernelPlan¶
- vibespatial.point_buffer_owned(values: collections.abc.Sequence[object | None] | numpy.ndarray, distance, *, quad_segs: int = 16) BufferKernelResult¶
- vibespatial.GEOMETRY_BUFFER_SCHEMAS: dict[GeometryFamily, GeometryBufferSchema]¶
- class vibespatial.BufferKind¶
Enum where members are also (and must be) strings
- VALIDITY = 'validity'¶
- TAG = 'tag'¶
- OFFSET = 'offset'¶
- COORDINATE = 'coordinate'¶
- BOUNDS = 'bounds'¶
- class vibespatial.BufferSpec¶
- name: str¶
- kind: BufferKind¶
- dtype: str¶
- level: str¶
- required: bool = True¶
- description: str = ''¶
- class vibespatial.GeometryBufferSchema¶
- family: GeometryFamily¶
- coord_precision: vibespatial.runtime.precision.PrecisionMode¶
- coord_layout: str¶
- validity: BufferSpec¶
- x: BufferSpec¶
- y: BufferSpec¶
- geometry_offsets: BufferSpec | None = None¶
- part_offsets: BufferSpec | None = None¶
- ring_offsets: BufferSpec | None = None¶
- bounds: BufferSpec | None = None¶
- supports_mixed_parent: bool = True¶
- empty_via_zero_span: bool = True¶
- notes: tuple[str, Ellipsis] = ()¶
- property coordinate_buffers: tuple[BufferSpec, BufferSpec]¶
- property offset_buffers: tuple[BufferSpec, Ellipsis]¶
- class vibespatial.GeometryFamily¶
Enum where members are also (and must be) strings
- POINT = 'point'¶
- LINESTRING = 'linestring'¶
- POLYGON = 'polygon'¶
- MULTIPOINT = 'multipoint'¶
- MULTILINESTRING = 'multilinestring'¶
- MULTIPOLYGON = 'multipolygon'¶
- vibespatial.get_geometry_buffer_schema(family: GeometryFamily | str) GeometryBufferSchema¶
- class vibespatial.BufferSharingMode¶
Enum where members are also (and must be) strings
- COPY = 'copy'¶
- SHARE = 'share'¶
- AUTO = 'auto'¶
- class vibespatial.DiagnosticEvent¶
- kind: DiagnosticKind¶
- detail: str¶
- residency: vibespatial.runtime.residency.Residency¶
- visible_to_user: bool = False¶
- elapsed_seconds: float = 0.0¶
- bytes_transferred: int = 0¶
- class vibespatial.DiagnosticKind¶
Enum where members are also (and must be) strings
- CREATED = 'created'¶
- TRANSFER = 'transfer'¶
- MATERIALIZATION = 'materialization'¶
- RUNTIME = 'runtime'¶
- CACHE = 'cache'¶
- class vibespatial.FamilyGeometryBuffer¶
- row_count: int¶
- x: numpy.ndarray¶
- y: numpy.ndarray¶
- geometry_offsets: numpy.ndarray¶
- empty_mask: numpy.ndarray¶
- part_offsets: numpy.ndarray | None = None¶
- ring_offsets: numpy.ndarray | None = None¶
- bounds: numpy.ndarray | None = None¶
- host_materialized: bool = True¶
- class vibespatial.GeoArrowBufferView¶
- x: numpy.ndarray¶
- y: numpy.ndarray¶
- geometry_offsets: numpy.ndarray¶
- empty_mask: numpy.ndarray¶
- part_offsets: numpy.ndarray | None = None¶
- ring_offsets: numpy.ndarray | None = None¶
- bounds: numpy.ndarray | None = None¶
- class vibespatial.MixedGeoArrowView¶
- validity: numpy.ndarray¶
- tags: numpy.ndarray¶
- family_row_offsets: numpy.ndarray¶
- families: dict[vibespatial.geometry.buffers.GeometryFamily, GeoArrowBufferView]¶
- class vibespatial.OwnedGeometryArray(validity: numpy.ndarray | None, tags: numpy.ndarray | None, family_row_offsets: numpy.ndarray | None, families: dict[vibespatial.geometry.buffers.GeometryFamily, FamilyGeometryBuffer], residency: vibespatial.runtime.residency.Residency = Residency.HOST, diagnostics: list[DiagnosticEvent] | None = None, runtime_history: list[vibespatial.runtime.RuntimeSelection] | None = None, geoarrow_backed: bool = False, shares_geoarrow_memory: bool = False, device_adopted: bool = False, device_state: OwnedGeometryDeviceState | None = None, device_metadata: DeviceMetadataState | None = None, _row_count: int | None = None)¶
Columnar geometry storage with optional device-resident metadata.
The three routing metadata arrays – validity, tags, and family_row_offsets – are exposed as properties. When the array is device-resident, the host numpy copies may be None internally; accessing any property lazily transfers from GPU to CPU, preserving full backward compatibility for host consumers while allowing GPU-only pipelines to avoid the D->H transfer entirely.
- families¶
- residency¶
- diagnostics: list[DiagnosticEvent] = None¶
- runtime_history: list[vibespatial.runtime.RuntimeSelection] = None¶
- geoarrow_backed = False¶
- device_adopted = False¶
- device_state = None¶
- property validity: numpy.ndarray¶
- property tags: numpy.ndarray¶
- property family_row_offsets: numpy.ndarray¶
- property row_count: int¶
- family_has_rows(family: vibespatial.geometry.buffers.GeometryFamily) bool¶
Check whether family has at least one geometry row to process.
Reads from whichever side is authoritative:
device_state when populated, host FamilyGeometryBuffer otherwise. This avoids the bug where host stubs with host_materialized=False report empty offsets even when device buffers have real data.
- move_to(target: vibespatial.runtime.residency.Residency | str, *, trigger: vibespatial.runtime.residency.TransferTrigger | str, reason: str | None = None) OwnedGeometryArray¶
- record_runtime_selection(selection: vibespatial.runtime.RuntimeSelection) None¶
- cache_bounds(bounds: numpy.ndarray) None¶
- cache_device_bounds(family: vibespatial.geometry.buffers.GeometryFamily, bounds: vibespatial.cuda._runtime.DeviceArray) None¶
- classmethod concat(arrays: list[OwnedGeometryArray]) OwnedGeometryArray¶
Concatenate multiple OwnedGeometryArrays at the buffer level.
Stays host-resident and avoids any Shapely materialization. All input arrays are ensured to have host state before concatenation. Device state is not carried over; the caller can move the result to device if needed.
- diagnostics_report() dict[str, Any]¶
- take(indices: numpy.ndarray) OwnedGeometryArray¶
Return a new OwnedGeometryArray containing only the rows at indices.
Operates entirely at the buffer level – no Shapely round-trip. When the array is DEVICE-resident or the indices are already on device (CuPy, or any object exposing __cuda_array_interface__), dispatches to device_take() to keep all gathering on GPU. Otherwise returns a HOST-resident array.
- device_take(indices) OwnedGeometryArray¶
Device-side take — all gathering stays on GPU.
Accepts numpy or CuPy indices/mask. Returns a DEVICE-resident OwnedGeometryArray with host buffers marked host_materialized=False. The host side is lazily populated by _ensure_host_state() on demand.
- to_shapely() list[object | None]¶
- to_wkb(*, hex: bool = False) list[bytes | str | None]¶
- to_geoarrow(*, sharing: BufferSharingMode | str = BufferSharingMode.COPY) MixedGeoArrowView¶
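The lazy device-to-host materialisation described for the routing metadata properties can be sketched generically. FakeDeviceArray below is a purely illustrative stand-in for a CuPy array (its .get() models the D->H copy); it is not part of the library:

```python
from __future__ import annotations

import numpy as np


class FakeDeviceArray:
    """Stand-in for a CuPy device array; .get() models the D->H copy."""

    def __init__(self, host_data: np.ndarray):
        self._data = host_data
        self.transfers = 0

    def get(self) -> np.ndarray:
        self.transfers += 1
        return self._data.copy()


class LazyMetadata:
    """Host property backed by an optional device array (sketch)."""

    def __init__(self, host: np.ndarray | None, device: FakeDeviceArray | None):
        self._host = host
        self._device = device

    @property
    def validity(self) -> np.ndarray:
        # Transfer from device only on first host access, then cache.
        if self._host is None:
            self._host = self._device.get()
        return self._host


dev = FakeDeviceArray(np.array([True, False, True]))
meta = LazyMetadata(host=None, device=dev)
assert dev.transfers == 0   # no transfer yet
v1 = meta.validity          # first access triggers the D->H copy
v2 = meta.validity          # cached; no second transfer
assert dev.transfers == 1
```

A GPU-only pipeline that never touches the host properties never pays for the transfer at all.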
- vibespatial.from_geoarrow(view: MixedGeoArrowView, *, residency: vibespatial.runtime.residency.Residency = Residency.HOST, sharing: BufferSharingMode | str = BufferSharingMode.COPY) OwnedGeometryArray¶
- vibespatial.from_shapely_geometries(geometries: list[object | None] | tuple[object | None, Ellipsis], *, residency: vibespatial.runtime.residency.Residency = Residency.HOST) OwnedGeometryArray¶
- vibespatial.from_wkb(values: list[bytes | str | None] | tuple[bytes | str | None, Ellipsis], *, on_invalid: str = 'raise', residency: vibespatial.runtime.residency.Residency = Residency.HOST) OwnedGeometryArray¶
- class vibespatial.GeoArrowBridgeBenchmark¶
- operation: str¶
- sharing: str¶
- geometry_type: str¶
- rows: int¶
- elapsed_seconds: float¶
- class vibespatial.GeoArrowCodecPlan¶
- operation: vibespatial.io.support.IOOperation¶
- selected_path: vibespatial.io.support.IOPathKind¶
- canonical_gpu: bool¶
- device_codec_available: bool¶
- zero_copy_adoption: bool¶
- lazy_materialization: bool¶
- reason: str¶
- class vibespatial.GeoParquetChunkPlan¶
- chunk_index: int¶
- row_groups: tuple[int, Ellipsis]¶
- estimated_rows: int¶
- class vibespatial.GeoParquetEngineBenchmark¶
- backend: str¶
- geometry_encoding: str¶
- rows: int¶
- chunk_rows: int | None¶
- chunk_count: int¶
- elapsed_seconds: float¶
- rows_per_second: float¶
- planning_elapsed_seconds: float = 0.0¶
- scan_elapsed_seconds: float = 0.0¶
- decode_elapsed_seconds: float = 0.0¶
- concat_elapsed_seconds: float = 0.0¶
- class vibespatial.GeoParquetEnginePlan¶
- selected_path: vibespatial.io.support.IOPathKind¶
- backend: str¶
- geometry_encoding: str | None¶
- chunk_count: int¶
- target_chunk_rows: int | None¶
- uses_row_group_pruning: bool¶
- reason: str¶
- class vibespatial.GeoParquetScanPlan¶
- selected_path: vibespatial.io.support.IOPathKind¶
- canonical_gpu: bool¶
- uses_pylibcudf: bool¶
- bbox_requested: bool¶
- metadata_summary_available: bool¶
- metadata_source: str | None¶
- uses_covering_bbox: bool¶
- uses_point_encoding_pushdown: bool¶
- row_group_pushdown: bool¶
- planner_strategy: str¶
- available_row_groups: int | None¶
- selected_row_groups: tuple[int, Ellipsis] | None¶
- decoded_row_fraction_estimate: float | None¶
- pruned_row_group_fraction: float | None¶
- reason: str¶
- class vibespatial.NativeGeometryBenchmark¶
- operation: str¶
- geometry_type: str¶
- implementation: str¶
- rows: int¶
- elapsed_seconds: float¶
- rows_per_second: float¶
- class vibespatial.WKBBridgeBenchmark¶
- operation: str¶
- geometry_type: str¶
- implementation: str¶
- rows: int¶
- fallback_rows: int¶
- elapsed_seconds: float¶
- rows_per_second: float¶
- class vibespatial.WKBBridgePlan¶
- operation: vibespatial.io.support.IOOperation¶
- selected_path: vibespatial.io.support.IOPathKind¶
- canonical_gpu: bool¶
- device_codec_available: bool¶
- reason: str¶
- vibespatial.benchmark_geoarrow_bridge(*, operation: str, geometry_type: str = 'point', rows: int = 100000, repeat: int = 20, seed: int = 0) list[GeoArrowBridgeBenchmark]¶
- vibespatial.benchmark_geoparquet_scan_engine(*, geometry_type: str = 'point', rows: int = 100000, geometry_encoding: str = 'geoarrow', chunk_rows: int | None = None, backend: str = 'cpu', repeat: int = 5, seed: int = 0) GeoParquetEngineBenchmark¶
- vibespatial.benchmark_native_geometry_codec(*, operation: str, geometry_type: str = 'point', rows: int = 100000, repeat: int = 5, seed: int = 0) list[NativeGeometryBenchmark]¶
- vibespatial.benchmark_wkb_bridge(*, operation: str, geometry_type: str = 'point', rows: int = 100000, repeat: int = 5, seed: int = 0) list[WKBBridgeBenchmark]¶
- vibespatial.decode_owned_geoarrow(view: vibespatial.geometry.owned.MixedGeoArrowView) vibespatial.geometry.owned.OwnedGeometryArray¶
- vibespatial.decode_wkb_owned(values: list[bytes | str | None] | tuple[bytes | str | None, Ellipsis], *, on_invalid: str = 'raise') vibespatial.geometry.owned.OwnedGeometryArray¶
- vibespatial.encode_owned_geoarrow(array: vibespatial.geometry.owned.OwnedGeometryArray) vibespatial.geometry.owned.MixedGeoArrowView¶
- vibespatial.encode_owned_geoarrow_array(array: vibespatial.geometry.owned.OwnedGeometryArray, *, field_name: str = 'geometry', crs: Any | None = None, interleaved: bool = True)¶
- vibespatial.encode_wkb_owned(array: vibespatial.geometry.owned.OwnedGeometryArray, *, hex: bool = False) list[bytes | str | None]¶
- vibespatial.geodataframe_from_arrow(table, *, geometry: str | None = None, to_pandas_kwargs: dict | None = None)¶
- vibespatial.geodataframe_to_arrow(df, *, index: bool | None = None, geometry_encoding: str = 'WKB', interleaved: bool = True, include_z: bool | None = None)¶
- vibespatial.geoseries_from_arrow(arr, **kwargs)¶
- vibespatial.geoseries_from_owned(array: vibespatial.geometry.owned.OwnedGeometryArray, *, name: str = 'geometry', crs: Any | None = None, interleaved: bool = True, use_device_array: bool = True, **kwargs)¶
- vibespatial.geoseries_to_arrow(series, *, geometry_encoding: str = 'WKB', interleaved: bool = True, include_z: bool | None = None)¶
- vibespatial.has_pyarrow_support() bool¶
- vibespatial.has_pylibcudf_support() bool¶
- vibespatial.plan_geoarrow_codec(operation: vibespatial.io.support.IOOperation | str) GeoArrowCodecPlan¶
- vibespatial.plan_geoparquet_engine(*, geo_metadata: dict[str, Any] | None, scan_plan: GeoParquetScanPlan, chunk_plans: tuple[GeoParquetChunkPlan, Ellipsis], target_chunk_rows: int | None) GeoParquetEnginePlan¶
- vibespatial.plan_geoparquet_scan(*, bbox: tuple[float, float, float, float] | None = None, geo_metadata: dict[str, Any] | None = None, metadata_summary: vibespatial.io.geoparquet_planner.GeoParquetMetadataSummary | None = None, planner_strategy: str = 'auto') GeoParquetScanPlan¶
- vibespatial.plan_wkb_bridge(operation: vibespatial.io.support.IOOperation | str) WKBBridgePlan¶
- vibespatial.plan_wkb_partition(values: list[bytes | str | None] | tuple[bytes | str | None, Ellipsis]) WKBPartitionPlan¶
- vibespatial.read_geoparquet(path, *, columns=None, storage_options=None, bbox=None, to_pandas_kwargs=None, **kwargs)¶
Read a GeoParquet file into a GeoDataFrame.
When PyArrow is available the reader plans row-group selection from spatial metadata, decodes WKB geometry on GPU when possible, and produces a device-resident OwnedGeometryArray without host round-trips.
Aliased as vibespatial.read_parquet().
Parameters¶
- path : str or Path
Path to the GeoParquet file.
- columns : list of str, optional
Subset of columns to read.
- storage_options : dict, optional
Storage options for fsspec-compatible filesystems.
- bbox : tuple of (minx, miny, maxx, maxy), optional
Spatial filter bounding box for row-group pruning.
- to_pandas_kwargs : dict, optional
Extra keyword arguments passed to pyarrow.Table.to_pandas().
- **kwargs
Passed through to the underlying Parquet reader.
Returns¶
GeoDataFrame
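The row-group pruning step that read_geoparquet performs from spatial metadata amounts to a vectorised rectangle-intersection test over per-group bounding boxes. A minimal sketch with NumPy (function name and shapes are illustrative, not the library's internals):

```python
import numpy as np


def prune_row_groups(xmin, ymin, xmax, ymax, bbox):
    """Return indices of row groups whose MBR intersects bbox (sketch)."""
    qxmin, qymin, qxmax, qymax = bbox
    # Standard rectangle-overlap test, vectorised over all row groups.
    hit = (xmin <= qxmax) & (xmax >= qxmin) & (ymin <= qymax) & (ymax >= qymin)
    return np.flatnonzero(hit)


# Three row groups covering x in [0, 1], [10, 11], [20, 21].
xmin = np.array([0.0, 10.0, 20.0])
xmax = xmin + 1.0
ymin = np.zeros(3)
ymax = np.ones(3)
selected = prune_row_groups(xmin, ymin, xmax, ymax, bbox=(9.5, 0.2, 10.5, 0.8))
assert selected.tolist() == [1]   # only the middle row group intersects
```

Only the selected row groups are then scanned and decoded, which is where the decoded_row_fraction_estimate in GeoParquetScanPlan comes from.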
- vibespatial.read_geoparquet_owned(path, *, columns=None, storage_options=None, bbox=None, chunk_rows: int | None = None, backend: str = 'auto', **kwargs) vibespatial.geometry.owned.OwnedGeometryArray¶
- vibespatial.write_geoparquet(df, path, *, index: bool | None = None, compression: str = 'snappy', geometry_encoding: str = 'WKB', schema_version: str | None = None, write_covering_bbox: bool = False, **kwargs) None¶
- class vibespatial.ShapefileIngestBenchmark¶
- implementation: str¶
- geometry_type: str¶
- rows: int¶
- elapsed_seconds: float¶
- rows_per_second: float¶
- class vibespatial.ShapefileIngestPlan¶
- implementation: str¶
- selected_strategy: str¶
- uses_pyogrio_container: bool¶
- uses_arrow_batch: bool¶
- uses_native_wkb_decode: bool¶
- reason: str¶
- class vibespatial.VectorFilePlan¶
- format: vibespatial.io.support.IOFormat¶
- operation: vibespatial.io.support.IOOperation¶
- selected_path: vibespatial.io.support.IOPathKind¶
- driver: str¶
- implementation: str¶
- reason: str¶
- vibespatial.benchmark_shapefile_ingest(*, geometry_type: str = 'point', rows: int = 100000, repeat: int = 5, seed: int = 0) list[ShapefileIngestBenchmark]¶
- vibespatial.plan_shapefile_ingest(*, prefer: str = 'arrow-wkb') ShapefileIngestPlan¶
- vibespatial.plan_vector_file_io(filename, *, operation: vibespatial.io.support.IOOperation | str, driver: str | None = None) VectorFilePlan¶
- vibespatial.read_shapefile_owned(source: str | pathlib.Path, *, bbox=None, columns=None, rows=None, **kwargs) ShapefileOwnedBatch¶
- vibespatial.read_vector_file(filename, bbox=None, mask=None, columns=None, rows=None, engine=None, **kwargs)¶
Read a vector file into a GeoDataFrame.
Supports Shapefile, GeoPackage, GeoJSON, and any format readable by pyogrio/fiona. For GeoJSON and Shapefile inputs the reader attempts a GPU-accelerated owned path first; other formats fall back to pyogrio.
Aliased as vibespatial.read_file().
Parameters¶
- filename : str or Path
Path to the vector file.
- bbox : tuple of (minx, miny, maxx, maxy), optional
Spatial filter bounding box.
- mask : Geometry or GeoDataFrame, optional
Spatial filter mask geometry.
- columns : list of str, optional
Subset of columns to read.
- rows : int or slice, optional
Subset of rows to read.
- engine : str, optional
Force a specific I/O engine ("pyogrio" or "fiona").
- **kwargs
Passed through to the underlying engine.
Returns¶
GeoDataFrame
- vibespatial.write_vector_file(df, filename, driver=None, schema=None, index=None, **kwargs)¶
- class vibespatial.GeoJSONIngestBenchmark¶
- implementation: str¶
- geometry_type: str¶
- rows: int¶
- elapsed_seconds: float¶
- rows_per_second: float¶
- class vibespatial.GeoJSONIngestPlan¶
- implementation: str¶
- selected_strategy: str¶
- uses_stream_tokenizer: bool¶
- uses_pylibcudf: bool¶
- uses_native_geometry_assembly: bool¶
- reason: str¶
- vibespatial.benchmark_geojson_ingest(*, geometry_type: str = 'point', rows: int = 100000, repeat: int = 5, seed: int = 0) list[GeoJSONIngestBenchmark]¶
- vibespatial.plan_geojson_ingest(*, prefer: str = 'auto') GeoJSONIngestPlan¶
- vibespatial.read_geojson_owned(source: str | pathlib.Path, *, prefer: str = 'auto') GeoJSONOwnedBatch¶
- class vibespatial.GeoParquetMetadataSummary¶
- source: str¶
- row_group_rows: numpy.ndarray¶
- xmin: numpy.ndarray¶
- ymin: numpy.ndarray¶
- xmax: numpy.ndarray¶
- ymax: numpy.ndarray¶
- property row_group_count: int¶
- property total_rows: int¶
- class vibespatial.GeoParquetPlannerBenchmark¶
- strategy: str¶
- elapsed_seconds: float¶
- selected_row_groups: int¶
- decoded_row_fraction: float¶
- pruned_row_group_fraction: float¶
- class vibespatial.GeoParquetPruneResult¶
- strategy: str¶
- selected_row_groups: tuple[int, Ellipsis]¶
- decoded_row_count: int¶
- decoded_row_fraction: float¶
- pruned_row_group_fraction: float¶
- total_row_groups: int¶
- total_rows: int¶
- metadata_source: str¶
- vibespatial.benchmark_geoparquet_planner(summary: GeoParquetMetadataSummary, bbox: BBox, *, repeat: int = 5) tuple[GeoParquetPlannerBenchmark, Ellipsis]¶
- vibespatial.build_geoparquet_metadata_summary(*, source: str, row_group_rows: list[int] | tuple[int, Ellipsis] | numpy.ndarray, xmin: list[float] | tuple[float, Ellipsis] | numpy.ndarray, ymin: list[float] | tuple[float, Ellipsis] | numpy.ndarray, xmax: list[float] | tuple[float, Ellipsis] | numpy.ndarray, ymax: list[float] | tuple[float, Ellipsis] | numpy.ndarray) GeoParquetMetadataSummary¶
- vibespatial.select_row_groups(summary: GeoParquetMetadataSummary, bbox: BBox, *, strategy: str = 'auto') GeoParquetPruneResult¶
- vibespatial.IO_SUPPORT_MATRIX: dict[IOFormat, IOSupportEntry]¶
- class vibespatial.IOFormat¶
Enum where members are also (and must be) strings
- GEOARROW = 'geoarrow'¶
- GEOPARQUET = 'geoparquet'¶
- WKB = 'wkb'¶
- GEOJSON = 'geojson'¶
- SHAPEFILE = 'shapefile'¶
- GDAL_LEGACY = 'gdal-legacy'¶
- class vibespatial.IOOperation¶
Enum where members are also (and must be) strings
- READ = 'read'¶
- WRITE = 'write'¶
- SCAN = 'scan'¶
- DECODE = 'decode'¶
- ENCODE = 'encode'¶
- class vibespatial.IOPathKind¶
Enum where members are also (and must be) strings
- GPU_NATIVE = 'gpu_native'¶
- HYBRID = 'hybrid'¶
- FALLBACK = 'fallback'¶
- class vibespatial.IOPlan¶
- operation: IOOperation¶
- selected_path: IOPathKind¶
- canonical_gpu: bool¶
- reason: str¶
- class vibespatial.IOSupportEntry¶
- default_path: IOPathKind¶
- read_path: IOPathKind¶
- write_path: IOPathKind¶
- canonical_gpu: bool¶
- reason: str¶
- vibespatial.plan_io_support(format: IOFormat | str, operation: IOOperation | str) IOPlan¶
- vibespatial.compute_geometry_bounds(geometry_array: vibespatial.geometry.owned.OwnedGeometryArray, *, dispatch_mode: vibespatial.runtime.ExecutionMode | str = ExecutionMode.CPU, precision: vibespatial.runtime.precision.PrecisionMode | str = PrecisionMode.AUTO) numpy.ndarray¶
- vibespatial.compute_morton_keys(geometry_array: vibespatial.geometry.owned.OwnedGeometryArray, *, dispatch_mode: vibespatial.runtime.ExecutionMode = ExecutionMode.CPU, precision: vibespatial.runtime.precision.PrecisionMode | str = PrecisionMode.AUTO, bits: int = 16) numpy.ndarray¶
- vibespatial.compute_offset_spans(geometry_array: vibespatial.geometry.owned.OwnedGeometryArray, *, level: str = 'geometry', dispatch_mode: vibespatial.runtime.ExecutionMode = ExecutionMode.CPU) dict[vibespatial.geometry.buffers.GeometryFamily, numpy.ndarray]¶
- vibespatial.compute_total_bounds(geometry_array: vibespatial.geometry.owned.OwnedGeometryArray, *, dispatch_mode: vibespatial.runtime.ExecutionMode | str = ExecutionMode.CPU, precision: vibespatial.runtime.precision.PrecisionMode | str = PrecisionMode.AUTO) tuple[float, float, float, float]¶
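compute_morton_keys orders geometries along a Z-order curve; for a given bits-per-axis budget the core operation is quantising each coordinate to a grid and interleaving the bits. A pure-Python sketch (illustrative only; the real kernels are vectorised):

```python
def interleave_bits(x: int, y: int, bits: int = 16) -> int:
    """Interleave the low `bits` of x and y into a Morton (Z-order) key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)        # x occupies even bit positions
        key |= ((y >> i) & 1) << (2 * i + 1)    # y occupies odd bit positions
    return key


def morton_key(x: float, y: float, extent: tuple, bits: int = 16) -> int:
    """Quantise (x, y) inside `extent` to a bits-per-axis grid, then interleave."""
    xmin, ymin, xmax, ymax = extent
    scale = (1 << bits) - 1
    qx = int((x - xmin) / (xmax - xmin) * scale)
    qy = int((y - ymin) / (ymax - ymin) * scale)
    return interleave_bits(qx, qy, bits)


assert interleave_bits(0b11, 0b00) == 0b0101
assert interleave_bits(0b00, 0b11) == 0b1010
```

Sorting by these keys keeps spatially nearby geometries adjacent in memory, which is what FlatSpatialIndex exploits.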
- class vibespatial.BinaryPredicateResult¶
- predicate: str¶
- values: numpy.ndarray¶
- row_count: int¶
- candidate_rows: numpy.ndarray¶
- coarse_true_rows: numpy.ndarray¶
- coarse_false_rows: numpy.ndarray¶
- runtime_selection: vibespatial.runtime.RuntimeSelection¶
- precision_plan: vibespatial.runtime.precision.PrecisionPlan¶
- robustness_plan: vibespatial.runtime.robustness.RobustnessPlan¶
- class vibespatial.NullBehavior¶
Enum where members are also (and must be) strings
- PROPAGATE = 'propagate'¶
- FALSE = 'false'¶
- vibespatial.benchmark_binary_predicate(predicate: str, left: PredicateInput, right: object | PredicateInput, **kwargs: Any) dict[str, int]¶
- vibespatial.evaluate_binary_predicate(predicate: str, left: PredicateInput, right: object | PredicateInput, *, dispatch_mode: vibespatial.runtime.ExecutionMode | str = ExecutionMode.AUTO, precision: vibespatial.runtime.precision.PrecisionMode | str = PrecisionMode.AUTO, null_behavior: NullBehavior | str = NullBehavior.PROPAGATE, **kwargs: Any) BinaryPredicateResult¶
- vibespatial.evaluate_geopandas_binary_predicate(predicate: str, left: numpy.ndarray | vibespatial.geometry.owned.OwnedGeometryArray, right: object | numpy.ndarray | vibespatial.geometry.owned.OwnedGeometryArray, **kwargs: Any) numpy.ndarray | None¶
- vibespatial.supports_binary_predicate(name: str) bool¶
- vibespatial.EXECUTION_MODE_ENV_VAR = 'VIBESPATIAL_EXECUTION_MODE'¶
- class vibespatial.ExecutionMode¶
Enum where members are also (and must be) strings
- AUTO = 'auto'¶
- GPU = 'gpu'¶
- CPU = 'cpu'¶
- class vibespatial.RuntimeSelection¶
- requested: ExecutionMode¶
- selected: ExecutionMode¶
- reason: str¶
- vibespatial.get_requested_mode() ExecutionMode¶
Return the session-wide requested execution mode.
Priority: explicit set_execution_mode() > env var > AUTO.
- vibespatial.has_gpu_runtime() bool¶
- vibespatial.select_runtime(requested: ExecutionMode | str = ExecutionMode.AUTO) RuntimeSelection¶
- vibespatial.set_execution_mode(mode: ExecutionMode | str | None) None¶
Override the session execution mode. Pass None to clear.
Also invalidates the adaptive runtime snapshot cache so the planner re-evaluates on the next dispatch.
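The resolution order stated for get_requested_mode() (explicit set_execution_mode() override, then the VIBESPATIAL_EXECUTION_MODE env var, then AUTO) can be sketched as follows; the module-level variable and the env parameter are illustrative scaffolding, not the library's actual internals:

```python
from __future__ import annotations

import os
from enum import Enum


class ExecutionMode(str, Enum):
    AUTO = "auto"
    GPU = "gpu"
    CPU = "cpu"


_session_mode: ExecutionMode | None = None  # stand-in for set_execution_mode() state


def get_requested_mode(env=os.environ) -> ExecutionMode:
    """Explicit session override wins, then the env var, then AUTO (sketch)."""
    if _session_mode is not None:
        return _session_mode
    raw = env.get("VIBESPATIAL_EXECUTION_MODE")
    if raw:
        return ExecutionMode(raw.lower())
    return ExecutionMode.AUTO


assert get_requested_mode(env={}) is ExecutionMode.AUTO
assert get_requested_mode(env={"VIBESPATIAL_EXECUTION_MODE": "GPU"}) is ExecutionMode.GPU
_session_mode = ExecutionMode.CPU
assert get_requested_mode(env={"VIBESPATIAL_EXECUTION_MODE": "gpu"}) is ExecutionMode.CPU
```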
- class vibespatial.AdaptivePlan¶
- runtime_selection: vibespatial.runtime._runtime.RuntimeSelection¶
- dispatch_decision: vibespatial.runtime.crossover.DispatchDecision¶
- crossover_policy: vibespatial.runtime.crossover.CrossoverPolicy¶
- precision_plan: vibespatial.runtime.precision.PrecisionPlan¶
- variant: vibespatial.runtime.kernel_registry.KernelVariantSpec | None¶
- chunk_rows: int¶
- replan_after_chunk: bool¶
- diagnostics: tuple[str, Ellipsis]¶
- class vibespatial.AdaptiveRuntime¶
- device_snapshot: DeviceSnapshot | None = None¶
- plan(*, kernel_name: str, kernel_class: vibespatial.runtime.precision.KernelClass | str, workload: WorkloadProfile, requested_mode: vibespatial.runtime._runtime.ExecutionMode | str = ExecutionMode.AUTO, requested_precision: vibespatial.runtime.precision.PrecisionMode | str = PrecisionMode.AUTO, variants: tuple[vibespatial.runtime.kernel_registry.KernelVariantSpec, Ellipsis] | None = None) AdaptivePlan¶
- class vibespatial.DeviceSnapshot¶
- backend: MonitoringBackend¶
- gpu_available: bool¶
- device_profile: vibespatial.runtime.precision.DevicePrecisionProfile¶
- sm_utilization_pct: float | None = None¶
- memory_utilization_pct: float | None = None¶
- device_name: str = 'unknown'¶
- reason: str = ''¶
- property underutilized: bool¶
- property under_memory_pressure: bool¶
- class vibespatial.MonitoringBackend¶
Enum where members are also (and must be) strings
- UNAVAILABLE = 'unavailable'¶
- NVML = 'nvml'¶
- class vibespatial.MonitoringSample¶
- sm_utilization_pct: float¶
- memory_utilization_pct: float¶
- device_name: str = 'unknown'¶
- class vibespatial.WorkloadProfile¶
- row_count: int¶
- geometry_families: tuple[str, Ellipsis] = ()¶
- mixed_geometry: bool = False¶
- current_residency: vibespatial.runtime.residency.Residency¶
- coordinate_stats: vibespatial.runtime.precision.CoordinateStats | None = None¶
- is_streaming: bool = False¶
- chunk_index: int = 0¶
- avg_vertices_per_geometry: float = 0.0¶
- vibespatial.capture_device_snapshot(*, probe: MonitoringProbe | None = None, gpu_available: bool | None = None, device_profile: vibespatial.runtime.precision.DevicePrecisionProfile | None = None) DeviceSnapshot¶
- vibespatial.get_cached_snapshot() DeviceSnapshot¶
Return a session-scoped DeviceSnapshot, creating it on first call.
- vibespatial.invalidate_snapshot_cache() None¶
Clear the cached snapshot so the next call to get_cached_snapshot() re-probes.
- vibespatial.plan_adaptive_execution(*, kernel_name: str, kernel_class: vibespatial.runtime.precision.KernelClass | str, workload: WorkloadProfile, requested_mode: vibespatial.runtime._runtime.ExecutionMode | str = ExecutionMode.AUTO, requested_precision: vibespatial.runtime.precision.PrecisionMode | str = PrecisionMode.AUTO, device_snapshot: DeviceSnapshot | None = None, variants: tuple[vibespatial.runtime.kernel_registry.KernelVariantSpec, Ellipsis] | None = None) AdaptivePlan¶
- vibespatial.plan_dispatch_selection(*, kernel_name: str, kernel_class: vibespatial.runtime.precision.KernelClass | str, row_count: int, requested_mode: vibespatial.runtime._runtime.ExecutionMode | str = ExecutionMode.AUTO, gpu_available: bool | None = None) vibespatial.runtime._runtime.RuntimeSelection¶
Thin wrapper: plan dispatch and return just the RuntimeSelection.
- vibespatial.plan_kernel_dispatch(*, kernel_name: str, kernel_class: vibespatial.runtime.precision.KernelClass | str, row_count: int, requested_mode: vibespatial.runtime._runtime.ExecutionMode | str = ExecutionMode.AUTO, requested_precision: vibespatial.runtime.precision.PrecisionMode | str = PrecisionMode.AUTO, geometry_families: tuple[str, Ellipsis] = (), mixed_geometry: bool = False, current_residency: vibespatial.runtime.residency.Residency = Residency.HOST, coordinate_stats: vibespatial.runtime.precision.CoordinateStats | None = None, is_streaming: bool = False, chunk_index: int = 0, gpu_available: bool | None = None) AdaptivePlan¶
Plan kernel dispatch with a cached device snapshot.
This is the recommended entry point for all GPU dispatch decisions. It gets (or creates) a session-scoped DeviceSnapshot, builds a WorkloadProfile, and calls plan_adaptive_execution().
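The session-scoped snapshot cache behind get_cached_snapshot() and invalidate_snapshot_cache() follows a plain memoisation pattern, sketched here with a call counter standing in for the (potentially expensive) NVML probe; all names in the body are illustrative:

```python
probe_calls = 0
_cached = None


def capture_snapshot() -> dict:
    """Stand-in for the real device probe; counts how often it runs."""
    global probe_calls
    probe_calls += 1
    return {"gpu_available": False, "reason": f"probe #{probe_calls}"}


def get_cached_snapshot() -> dict:
    global _cached
    if _cached is None:          # probe only once per session
        _cached = capture_snapshot()
    return _cached


def invalidate_snapshot_cache() -> None:
    global _cached
    _cached = None               # next get_cached_snapshot() re-probes


a = get_cached_snapshot()
b = get_cached_snapshot()
assert a is b and probe_calls == 1
invalidate_snapshot_cache()
get_cached_snapshot()
assert probe_calls == 2
```

This is why set_execution_mode() invalidates the cache: the next dispatch must re-evaluate device state under the new mode.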
- vibespatial.DEFAULT_CROSSOVER_POLICIES: dict[vibespatial.runtime.precision.KernelClass, int]¶
- class vibespatial.CrossoverPolicy¶
- kernel_name: str¶
- kernel_class: vibespatial.runtime.precision.KernelClass¶
- auto_min_rows: int¶
- reason: str¶
- class vibespatial.DispatchDecision¶
Enum where members are also (and must be) strings
- CPU = 'cpu'¶
- GPU = 'gpu'¶
- vibespatial.default_crossover_policy(kernel_name: str, kernel_class: vibespatial.runtime.precision.KernelClass | str) CrossoverPolicy¶
- vibespatial.select_dispatch_for_rows(*, requested_mode: vibespatial.runtime._runtime.ExecutionMode | str, row_count: int, policy: CrossoverPolicy, gpu_available: bool) DispatchDecision¶
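select_dispatch_for_rows resolves the per-kernel crossover: under AUTO the GPU is chosen only when it is available and the workload exceeds the policy's auto_min_rows threshold. A minimal sketch of that decision (simplified signature, not the library's):

```python
def select_dispatch(requested: str, row_count: int, auto_min_rows: int,
                    gpu_available: bool) -> str:
    """Pick 'cpu' or 'gpu' (sketch of the crossover decision)."""
    if requested == "cpu" or not gpu_available:
        return "cpu"
    if requested == "gpu":
        return "gpu"                       # explicit request, GPU present
    # AUTO: the GPU only pays off above the kernel's crossover threshold,
    # since small workloads are dominated by launch and transfer overhead.
    return "gpu" if row_count >= auto_min_rows else "cpu"


assert select_dispatch("auto", 100, 50_000, gpu_available=True) == "cpu"
assert select_dispatch("auto", 1_000_000, 50_000, gpu_available=True) == "gpu"
assert select_dispatch("gpu", 10, 50_000, gpu_available=False) == "cpu"
```

DEFAULT_CROSSOVER_POLICIES supplies the per-KernelClass auto_min_rows values.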
- vibespatial.DETERMINISM_ENV_VAR = 'VIBESPATIAL_DETERMINISM'¶
- class vibespatial.DeterminismMode¶
Enum where members are also (and must be) strings
- DEFAULT = 'default'¶
- DETERMINISTIC = 'deterministic'¶
- class vibespatial.DeterminismPlan¶
- kernel_class: vibespatial.runtime.precision.KernelClass¶
- mode: DeterminismMode¶
- guarantee: ReproducibilityGuarantee¶
- stable_output_order: bool¶
- fixed_reduction_order: bool¶
- fixed_scan_order: bool¶
- floating_atomics_allowed: bool¶
- same_device_only: bool¶
- expected_max_overhead_factor: float¶
- reason: str¶
- class vibespatial.ReproducibilityGuarantee¶
Enum where members are also (and must be) strings
- NONE = 'none'¶
- SAME_DEVICE_BITWISE = 'same-device-bitwise'¶
- vibespatial.determinism_mode_from_env() DeterminismMode¶
- vibespatial.deterministic_mode_enabled(requested: DeterminismMode | str | None = None) bool¶
- vibespatial.normalize_determinism_mode(value: DeterminismMode | str | None) DeterminismMode¶
- vibespatial.select_determinism_plan(*, kernel_class: vibespatial.runtime.precision.KernelClass, requested: DeterminismMode | str | None = None) DeterminismPlan¶
- class vibespatial.DispatchEvent¶
- surface: str¶
- operation: str¶
- requested: vibespatial.runtime._runtime.ExecutionMode¶
- selected: vibespatial.runtime._runtime.ExecutionMode¶
- implementation: str¶
- reason: str¶
- detail: str = ''¶
- to_dict() dict[str, Any]¶
- vibespatial.clear_dispatch_events() None¶
- vibespatial.get_dispatch_events(*, clear: bool = False) list[DispatchEvent]¶
- vibespatial.record_dispatch_event(*, surface: str, operation: str, implementation: str, reason: str, detail: str = '', requested: vibespatial.runtime._runtime.ExecutionMode | str = ExecutionMode.AUTO, selected: vibespatial.runtime._runtime.ExecutionMode | str = ExecutionMode.CPU) DispatchEvent¶
- vibespatial.TRACE_WARNINGS_ENV_VAR = 'VIBESPATIAL_TRACE_WARNINGS'¶
- class vibespatial.ExecutionTraceContext¶
- pipeline: str¶
- transfers: list[TraceTransfer] = []¶
- record_transfer(transfer: TraceTransfer) None¶
- summary() dict[str, Any]¶
- exception vibespatial.VibeTraceWarning¶
Base class for warnings generated by user code.
- vibespatial.execution_trace(pipeline: str)¶
- vibespatial.get_active_trace() ExecutionTraceContext | None¶
- vibespatial.STRICT_NATIVE_ENV_VAR = 'VIBESPATIAL_STRICT_NATIVE'¶
- class vibespatial.FallbackEvent¶
- surface: str¶
- requested: vibespatial.runtime._runtime.ExecutionMode¶
- selected: vibespatial.runtime._runtime.ExecutionMode¶
- reason: str¶
- detail: str = ''¶
- pipeline: str = ''¶
- d2h_transfer: bool = False¶
- to_dict() dict[str, Any]¶
- exception vibespatial.StrictNativeFallbackError¶
Unspecified run-time error.
- vibespatial.clear_fallback_events() None¶
- vibespatial.get_fallback_events(*, clear: bool = False) list[FallbackEvent]¶
- vibespatial.record_fallback_event(*, surface: str, reason: str, detail: str = '', requested: vibespatial.runtime._runtime.ExecutionMode | str = ExecutionMode.AUTO, selected: vibespatial.runtime._runtime.ExecutionMode | str = ExecutionMode.CPU, pipeline: str = '', d2h_transfer: bool = False) FallbackEvent¶
- vibespatial.strict_native_mode_enabled() bool¶
- class vibespatial.FusionPlan¶
- stages: tuple[FusionStage, Ellipsis]¶
- peak_memory_target_ratio: float¶
- reason: str¶
- class vibespatial.FusionStage¶
- steps: tuple[PipelineStep, Ellipsis]¶
- disposition: IntermediateDisposition¶
- reason: str¶
- class vibespatial.IntermediateDisposition¶
Enum where members are also (and must be) strings
- EPHEMERAL = 'ephemeral'¶
- PERSIST = 'persist'¶
- BOUNDARY = 'boundary'¶
- class vibespatial.PipelineStep¶
- name: str¶
- output_name: str¶
- output_rows_follow_input: bool = True¶
- reusable_output: bool = False¶
- materializes_host_output: bool = False¶
- requires_stable_row_order: bool = False¶
- class vibespatial.StepKind¶
Enum where members are also (and must be) strings
- GEOMETRY = 'geometry'¶
- DERIVED = 'derived'¶
- FILTER = 'filter'¶
- ORDERING = 'ordering'¶
- INDEX = 'index'¶
- MATERIALIZATION = 'materialization'¶
- RASTER = 'raster'¶
- vibespatial.default_fusible_sequences() dict[str, tuple[PipelineStep, Ellipsis]]¶
- vibespatial.plan_fusion(steps: tuple[PipelineStep, Ellipsis] | list[PipelineStep]) FusionPlan¶
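The disposition assignment that plan_fusion makes per intermediate can be sketched with a simple policy: host materialisations end a fused stage (BOUNDARY), reusable outputs PERSIST, and everything else is EPHEMERAL and eligible for fusion. This is an illustrative policy under assumed rules, not the library's actual planner:

```python
from dataclasses import dataclass


@dataclass
class Step:
    name: str
    reusable_output: bool = False
    materializes_host_output: bool = False


def plan_dispositions(steps) -> list:
    """Assign one disposition per step output (illustrative policy)."""
    out = []
    for step in steps:
        if step.materializes_host_output:
            out.append("boundary")   # D->H crossing: stage must end here
        elif step.reusable_output:
            out.append("persist")    # keep for later consumers
        else:
            out.append("ephemeral")  # fusible; never leaves device memory
    return out


pipeline = [Step("buffer"), Step("bounds", reusable_output=True),
            Step("to_shapely", materializes_host_output=True)]
assert plan_dispositions(pipeline) == ["ephemeral", "persist", "boundary"]
```

Keeping EPHEMERAL intermediates unallocated on the host is what lets a fused stage hit the plan's peak_memory_target_ratio.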
- vibespatial.NULL_BOUNDS¶
- class vibespatial.GeometryPresence¶
Enum where members are also (and must be) strings
- NULL = 'null'¶
- EMPTY = 'empty'¶
- VALUE = 'value'¶
- class vibespatial.GeometrySemantics¶
- presence: GeometryPresence¶
- geom_type: str | None = None¶
- vibespatial.classify_geometry(value: Any) GeometrySemantics¶
- vibespatial.is_null_like(value: Any) bool¶
- vibespatial.measurement_result_for_geometry(value: Any, *, kind: str) float | tuple[float, float, float, float]¶
- vibespatial.predicate_result_for_pair(left: Any, right: Any) bool | None¶
- vibespatial.unary_result_for_missing_input(value: Any) None¶
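predicate_result_for_pair returns None when either input is null-like, which is the PROPAGATE policy from NullBehavior; FALSE instead coerces the missing result. A sketch of the two policies (function name is illustrative):

```python
def apply_null_behavior(raw, left_null: bool, right_null: bool,
                        behavior: str = "propagate"):
    """Resolve a binary-predicate result for possibly-null inputs (sketch)."""
    if left_null or right_null:
        # PROPAGATE keeps the missing value; FALSE coerces it to False.
        return None if behavior == "propagate" else False
    return raw


assert apply_null_behavior(True, False, False) is True
assert apply_null_behavior(True, True, False) is None
assert apply_null_behavior(True, True, False, behavior="false") is False
```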
- vibespatial.DEFAULT_CONSUMER_PROFILE¶
- vibespatial.DEFAULT_DATACENTER_PROFILE¶
- class vibespatial.CompensationMode¶
Enum where members are also (and must be) strings
- NONE = 'none'¶
- CENTERED = 'centered'¶
- KAHAN = 'kahan'¶
- DOUBLE_SINGLE = 'double-single'¶
- class vibespatial.CoordinateStats¶
- max_abs_coord: float = 0.0¶
- span: float = 0.0¶
- property needs_centering: bool¶
- class vibespatial.DevicePrecisionProfile¶
- name: str¶
- fp64_to_fp32_ratio: float¶
- property favors_native_fp64: bool¶
- class vibespatial.KernelClass¶
Enum where members are also (and must be) strings
- COARSE = 'coarse'¶
- METRIC = 'metric'¶
- PREDICATE = 'predicate'¶
- CONSTRUCTIVE = 'constructive'¶
- class vibespatial.PrecisionMode¶
Enum where members are also (and must be) strings
- AUTO = 'auto'¶
- FP32 = 'fp32'¶
- FP64 = 'fp64'¶
- class vibespatial.PrecisionPlan¶
- storage_precision: PrecisionMode¶
- compute_precision: PrecisionMode¶
- kernel_class: KernelClass¶
- compensation: CompensationMode¶
- refinement: RefinementMode¶
- center_coordinates: bool¶
- reason: str¶
- class vibespatial.RefinementMode¶
Enum where members are also (and must be) strings
- NONE = 'none'¶
- SELECTIVE_FP64 = 'selective-fp64'¶
- EXACT = 'exact'¶
- vibespatial.normalize_precision_mode(value: PrecisionMode | str) PrecisionMode¶
- vibespatial.select_precision_plan(*, runtime_selection: vibespatial.runtime._runtime.RuntimeSelection, kernel_class: KernelClass, requested: PrecisionMode | str = PrecisionMode.AUTO, coordinate_stats: CoordinateStats | None = None, device_profile: DevicePrecisionProfile | None = None) PrecisionPlan¶
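Why CoordinateStats.needs_centering matters: coordinates far from the origin lose their fractional detail in fp32, so the planner can subtract a local centre (in fp64) before single-precision compute. A numeric demonstration of the effect, not the library's kernel:

```python
import numpy as np

# Coordinates near 1e7 with sub-metre detail: the fp32 spacing (ulp) at 1e7
# is 1.0, so casting directly rounds both values to the same float.
coords = np.array([1.0e7 + 0.25, 1.0e7 + 0.45], dtype=np.float64)

naive_span = float(np.float32(coords[1]) - np.float32(coords[0]))

# Centering: subtract the fp64 centre first, then work in fp32.
center = coords.mean()
centered = (coords - center).astype(np.float32)
centered_span = float(centered[1] - centered[0])

assert naive_span == 0.0                  # sub-metre detail lost entirely
assert abs(centered_span - 0.2) < 1e-6    # detail preserved after centering
```

This is the CompensationMode.CENTERED path; KAHAN and DOUBLE_SINGLE address accumulation error rather than representation error.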
- class vibespatial.Residency¶
Enum where members are also (and must be) strings
- HOST = 'host'¶
- DEVICE = 'device'¶
- class vibespatial.ResidencyPlan¶
- trigger: TransferTrigger¶
- transfer_required: bool¶
- visible_to_user: bool¶
- zero_copy_eligible: bool¶
- reason: str¶
- class vibespatial.TransferTrigger¶
Enum where members are also (and must be) strings
- USER_MATERIALIZATION = 'user-materialization'¶
- EXPLICIT_RUNTIME_REQUEST = 'explicit-runtime-request'¶
- UNSUPPORTED_GPU_PATH = 'unsupported-gpu-path'¶
- INTEROP_VIEW = 'interop-view'¶
- vibespatial.select_residency_plan(*, current: Residency | str, target: Residency | str, trigger: TransferTrigger | str) ResidencyPlan¶
- class vibespatial.PredicateFallback¶
Enum where members are also (and must be) strings
- NONE = 'none'¶
- SELECTIVE_FP64 = 'selective-fp64'¶
- EXPANSION_ARITHMETIC = 'expansion-arithmetic'¶
- RATIONAL_RECONSTRUCTION = 'rational-reconstruction'¶
- class vibespatial.RobustnessGuarantee¶
Enum where members are also (and must be) strings
- EXACT = 'exact'¶
- BOUNDED_ERROR = 'bounded-error'¶
- BEST_EFFORT = 'best-effort'¶
- class vibespatial.RobustnessPlan¶
- kernel_class: vibespatial.runtime.precision.KernelClass¶
- guarantee: RobustnessGuarantee¶
- predicate_fallback: PredicateFallback¶
- topology_policy: TopologyPolicy¶
- handles_nulls: bool¶
- handles_empties: bool¶
- reason: str¶
- class vibespatial.TopologyPolicy¶
Enum where members are also (and must be) strings
- PRESERVE = 'preserve'¶
- SNAP_GRID = 'snap-grid'¶
- BEST_EFFORT = 'best-effort'¶
- vibespatial.select_robustness_plan(*, kernel_class: vibespatial.runtime.precision.KernelClass, precision_plan: vibespatial.runtime.precision.PrecisionPlan, null_state: vibespatial.runtime.nulls.GeometryPresence | None = None, empty_state: vibespatial.runtime.nulls.GeometryPresence | None = None) RobustnessPlan¶
- class vibespatial.BoundsPairBenchmark¶
- dataset: str¶
- rows: int¶
- tile_size: int¶
- elapsed_seconds: float¶
- pairs_examined: int¶
- candidate_pairs: int¶
- class vibespatial.CandidatePairs¶
MBR candidate pair result with optional device-resident arrays.
When produced by the GPU path, _device_left_indices and _device_right_indices hold CuPy device arrays. The public left_indices and right_indices properties lazily materialise host (NumPy) arrays on first access, following the same pattern as FlatSpatialIndex.
- left_bounds: numpy.ndarray¶
- right_bounds: numpy.ndarray¶
- pairs_examined: int¶
- tile_size: int¶
- same_input: bool¶
- property left_indices: numpy.ndarray¶
Lazily materialise host left_indices from device (ADR-0005).
- property right_indices: numpy.ndarray¶
Lazily materialise host right_indices from device (ADR-0005).
- property device_left_indices¶
CuPy device array of left indices, or None if CPU-produced.
- property device_right_indices¶
CuPy device array of right indices, or None if CPU-produced.
- property count: int¶
- class vibespatial.FlatSpatialIndex¶
- geometry_array: vibespatial.geometry.owned.OwnedGeometryArray¶
- bounds: numpy.ndarray¶
- total_bounds: tuple[float, float, float, float]¶
- regular_grid: RegularGridRectIndex | None = None¶
- device_morton_keys: object = None¶
- device_order: object = None¶
- property order: numpy.ndarray¶
Lazily materialise host order array from device (ADR-0005).
- property morton_keys: numpy.ndarray¶
Lazily materialise host morton_keys array from device (ADR-0005).
- property size: int¶
- query_bounds(bounds: tuple[float, float, float, float]) numpy.ndarray¶
- query(other: vibespatial.geometry.owned.OwnedGeometryArray, *, tile_size: int = 256) CandidatePairs¶
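The coarse filter behind FlatSpatialIndex.query is the MBR-overlap test that produces CandidatePairs; sketched here as a brute-force broadcast (the real index processes tiles of tile_size rows rather than the full left × right grid):

```python
import numpy as np


def candidate_pairs(left_bounds: np.ndarray, right_bounds: np.ndarray) -> np.ndarray:
    """All (i, j) whose MBRs overlap; bounds are (n, 4) [xmin, ymin, xmax, ymax]."""
    lx0, ly0, lx1, ly1 = left_bounds.T
    rx0, ry0, rx1, ry1 = right_bounds.T
    # Broadcast the rectangle-overlap test across the full left x right grid.
    hit = ((lx0[:, None] <= rx1[None, :]) & (lx1[:, None] >= rx0[None, :]) &
           (ly0[:, None] <= ry1[None, :]) & (ly1[:, None] >= ry0[None, :]))
    return np.argwhere(hit)


left = np.array([[0, 0, 1, 1], [5, 5, 6, 6]], dtype=float)
right = np.array([[0.5, 0.5, 2, 2], [9, 9, 10, 10]], dtype=float)
pairs = candidate_pairs(left, right)
assert pairs.tolist() == [[0, 0]]   # only the first pair of boxes overlaps
```

Surviving pairs are candidates only; an exact predicate still has to confirm each one.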
- class vibespatial.SegmentCandidatePairs¶
Segment candidate pairs with lazy device-to-host materialization.
When produced by the GPU path, _device_* fields hold CuPy device arrays and _host_* fields are None. The public properties lazily call cp.asnumpy() on first host access, following the CandidatePairs pattern (ADR-0005).
- pairs_examined: int¶
- property left_rows: numpy.ndarray¶
Lazily materialise host left_rows from device (ADR-0005).
- property left_segments: numpy.ndarray¶
Lazily materialise host left_segments from device (ADR-0005).
- property right_rows: numpy.ndarray¶
Lazily materialise host right_rows from device (ADR-0005).
- property right_segments: numpy.ndarray¶
Lazily materialise host right_segments from device (ADR-0005).
- property device_left_rows¶
CuPy device array of left row indices, or None if CPU-produced.
- property device_left_segments¶
CuPy device array of left segment indices, or None if CPU-produced.
- property device_right_rows¶
CuPy device array of right row indices, or None if CPU-produced.
- property device_right_segments¶
CuPy device array of right segment indices, or None if CPU-produced.
- property count: int¶
- class vibespatial.SegmentFilterBenchmark¶
- rows_left: int¶
- rows_right: int¶
- naive_segment_pairs: int¶
- filtered_segment_pairs: int¶
- elapsed_seconds: float¶
- class vibespatial.SegmentMBRTable¶
Segment MBR table with optional device-resident arrays.
When produced by the GPU path, arrays are CuPy device arrays and
residency is Residency.DEVICE. The public properties row_indices, segment_indices, and bounds return the underlying arrays as-is (device or host). Use to_host() to get a copy with NumPy arrays on the host side.
- row_indices: object¶
- segment_indices: object¶
- bounds: object¶
- residency: vibespatial.runtime.residency.Residency¶
- property count: int¶
- to_host() SegmentMBRTable¶
Return a host-resident copy (NumPy arrays).
If already host-resident, returns self.
- vibespatial.benchmark_bounds_pairs(geometry_array: vibespatial.geometry.owned.OwnedGeometryArray, *, dataset: str, tile_size: int = 256) BoundsPairBenchmark¶
- vibespatial.benchmark_segment_filter(left: vibespatial.geometry.owned.OwnedGeometryArray, right: vibespatial.geometry.owned.OwnedGeometryArray, *, tile_size: int = 512) SegmentFilterBenchmark¶
- vibespatial.build_flat_spatial_index(geometry_array: vibespatial.geometry.owned.OwnedGeometryArray, *, runtime_selection: vibespatial.runtime.RuntimeSelection | None = None) FlatSpatialIndex¶
- vibespatial.extract_segment_mbrs(geometry_array: vibespatial.geometry.owned.OwnedGeometryArray) SegmentMBRTable¶
Extract per-segment MBRs from all line/polygon geometries.
Dispatches to GPU when available, falling back to CPU otherwise. The GPU path returns device-resident CuPy arrays (no D->H transfer).
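For a single linestring, the per-segment MBR computation reduces to an elementwise min/max over consecutive vertex pairs. A minimal NumPy sketch of that idea (illustrative only; the library operates over whole geometry arrays and returns a SegmentMBRTable):

```python
import numpy as np

def segment_mbrs(xs, ys):
    """Per-segment MBRs for one linestring: segment i spans vertices
    i..i+1, and its MBR is the min/max of the two endpoints.
    Returns an (n_segments, 4) array of (minx, miny, maxx, maxy)."""
    x0, x1 = xs[:-1], xs[1:]
    y0, y1 = ys[:-1], ys[1:]
    return np.column_stack([
        np.minimum(x0, x1), np.minimum(y0, y1),
        np.maximum(x0, x1), np.maximum(y0, y1),
    ])

mbrs = segment_mbrs(np.array([0.0, 2.0, 1.0]), np.array([0.0, 1.0, 3.0]))
# two segments: (0,0)-(2,1) and (2,1)-(1,3)
```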
- vibespatial.generate_bounds_pairs(left: vibespatial.geometry.owned.OwnedGeometryArray, right: vibespatial.geometry.owned.OwnedGeometryArray | None = None, *, tile_size: int = 256, include_self: bool = False) CandidatePairs¶
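The candidate filter behind generate_bounds_pairs is the standard axis-aligned MBR overlap test. A naive broadcast sketch of that test (O(n·m) memory and time; the library's tiled/sweep-sort kernels exist precisely to avoid this cost at scale):

```python
import numpy as np

def mbr_candidate_pairs(left_bounds, right_bounds):
    """All-pairs MBR overlap via broadcasting. Bounds are (n, 4) arrays
    of (minx, miny, maxx, maxy). Two boxes overlap iff each box's min
    is <= the other's max on both axes."""
    lx0, ly0, lx1, ly1 = (left_bounds[:, i, None] for i in range(4))
    rx0, ry0, rx1, ry1 = (right_bounds[None, :, i] for i in range(4))
    overlap = (lx0 <= rx1) & (rx0 <= lx1) & (ly0 <= ry1) & (ry0 <= ly1)
    return np.nonzero(overlap)  # (left_indices, right_indices)
```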
- vibespatial.generate_segment_mbr_pairs(left: vibespatial.geometry.owned.OwnedGeometryArray, right: vibespatial.geometry.owned.OwnedGeometryArray, *, tile_size: int = 512) SegmentCandidatePairs¶
Generate candidate segment pairs by MBR overlap filtering.
Dispatches to GPU when available. The GPU path uses the existing sweep-sort overlap kernel (_generate_bounds_pairs_gpu) on segment bounds, returning device-resident CuPy arrays (no eager D->H transfer).
- class vibespatial.SegmentIntersectionBenchmark¶
- rows_left: int¶
- rows_right: int¶
- candidate_pairs: int¶
- disjoint_pairs: int¶
- proper_pairs: int¶
- touch_pairs: int¶
- overlap_pairs: int¶
- ambiguous_pairs: int¶
- elapsed_seconds: float¶
- class vibespatial.SegmentIntersectionCandidates¶
- left_rows: numpy.ndarray¶
- left_segments: numpy.ndarray¶
- left_lookup: numpy.ndarray¶
- right_rows: numpy.ndarray¶
- right_segments: numpy.ndarray¶
- right_lookup: numpy.ndarray¶
- pairs_examined: int¶
- tile_size: int¶
- property count: int¶
- class vibespatial.SegmentIntersectionKind¶
Enum where members are also (and must be) ints
- DISJOINT = 0¶
- PROPER = 1¶
- TOUCH = 2¶
- OVERLAP = 3¶
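The four kinds above can be derived from signed-area (orientation) tests on the segment endpoints. A scalar sketch of one such classification, using exact integer inputs for clarity; vibespatial's kernels are vectorised and governed by the precision/robustness plans, so treat this as illustrative only:

```python
def cross(ox, oy, ax, ay, bx, by):
    """Signed area of triangle (o, a, b): >0 left turn, <0 right, 0 collinear."""
    return (ax - ox) * (by - oy) - (ay - oy) * (bx - ox)

def _on(a, b, c):
    """c (known collinear with a-b) lies within a-b's bounding box."""
    return (min(a[0], b[0]) <= c[0] <= max(a[0], b[0])
            and min(a[1], b[1]) <= c[1] <= max(a[1], b[1]))

def classify(p, q, r, s):
    """Classify segment pq vs rs: 0 DISJOINT, 1 PROPER, 2 TOUCH, 3 OVERLAP."""
    d1 = cross(*r, *s, *p)
    d2 = cross(*r, *s, *q)
    d3 = cross(*p, *q, *r)
    d4 = cross(*p, *q, *s)
    if d1 == d2 == d3 == d4 == 0:
        # Collinear: compare 1-D ranges along the dominant axis.
        axis = 0 if abs(q[0] - p[0]) >= abs(q[1] - p[1]) else 1
        a0, a1 = sorted((p[axis], q[axis]))
        b0, b1 = sorted((r[axis], s[axis]))
        lo, hi = max(a0, b0), min(a1, b1)
        if lo > hi:
            return 0  # DISJOINT
        return 2 if lo == hi else 3  # shared endpoint -> TOUCH, else OVERLAP
    if d1 * d2 < 0 and d3 * d4 < 0:
        return 1  # PROPER: each segment strictly straddles the other's line
    if (d1 == 0 and _on(r, s, p)) or (d2 == 0 and _on(r, s, q)) \
       or (d3 == 0 and _on(p, q, r)) or (d4 == 0 and _on(p, q, s)):
        return 2  # TOUCH: an endpoint lies on the other segment
    return 0  # DISJOINT
```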
- class vibespatial.SegmentIntersectionResult¶
Segment intersection results with lazy host materialization.
When produced by the GPU pipeline, all 14 result arrays live in
device_state and host NumPy arrays are lazily copied on first property access. GPU-only consumers (e.g. build_gpu_split_events) that read only device_state, candidate_pairs, count, runtime_selection, precision_plan, and robustness_plan never trigger device-to-host copies.
- candidate_pairs: int¶
- runtime_selection: vibespatial.runtime.RuntimeSelection¶
- precision_plan: vibespatial.runtime.precision.PrecisionPlan¶
- robustness_plan: vibespatial.runtime.robustness.RobustnessPlan¶
- device_state: SegmentIntersectionDeviceState | None = None¶
- property left_rows: numpy.ndarray¶
- property left_segments: numpy.ndarray¶
- property left_lookup: numpy.ndarray¶
- property right_rows: numpy.ndarray¶
- property right_segments: numpy.ndarray¶
- property right_lookup: numpy.ndarray¶
- property kinds: numpy.ndarray¶
- property point_x: numpy.ndarray¶
- property point_y: numpy.ndarray¶
- property overlap_x0: numpy.ndarray¶
- property overlap_y0: numpy.ndarray¶
- property overlap_x1: numpy.ndarray¶
- property overlap_y1: numpy.ndarray¶
- property ambiguous_rows: numpy.ndarray¶
- property count: int¶
- kind_names() list[str]¶
- class vibespatial.SegmentTable¶
- row_indices: numpy.ndarray¶
- part_indices: numpy.ndarray¶
- ring_indices: numpy.ndarray¶
- segment_indices: numpy.ndarray¶
- x0: numpy.ndarray¶
- y0: numpy.ndarray¶
- x1: numpy.ndarray¶
- y1: numpy.ndarray¶
- bounds: numpy.ndarray¶
- property count: int¶
- vibespatial.benchmark_segment_intersections(left: vibespatial.geometry.owned.OwnedGeometryArray, right: vibespatial.geometry.owned.OwnedGeometryArray, *, tile_size: int = 512, dispatch_mode: vibespatial.runtime.ExecutionMode | str = ExecutionMode.AUTO) SegmentIntersectionBenchmark¶
- vibespatial.classify_segment_intersections(left: vibespatial.geometry.owned.OwnedGeometryArray, right: vibespatial.geometry.owned.OwnedGeometryArray, *, candidate_pairs: SegmentIntersectionCandidates | None = None, tile_size: int = 512, dispatch_mode: vibespatial.runtime.ExecutionMode | str = ExecutionMode.AUTO, precision: vibespatial.runtime.precision.PrecisionMode | str = PrecisionMode.AUTO) SegmentIntersectionResult¶
Classify all segment-segment intersections between two geometry arrays.
Parameters¶
- left, rightOwnedGeometryArray
Input geometry arrays (linestring, polygon, or multi-variants).
- candidate_pairsSegmentIntersectionCandidates, optional
Pre-computed candidate pairs. If None, candidates are generated internally (GPU-native O(n log n) when GPU mode, tiled CPU otherwise).
- tile_sizeint
Tile size for CPU candidate generation (ignored in GPU mode).
- dispatch_modeExecutionMode
Force GPU, CPU, or AUTO dispatch.
- precisionPrecisionMode
Force fp32, fp64, or AUTO precision.
Returns¶
- SegmentIntersectionResult
Classification of all candidate segment pairs.
- vibespatial.extract_segments(geometry_array: vibespatial.geometry.owned.OwnedGeometryArray) SegmentTable¶
Extract segments from geometry array on CPU (legacy path).
- vibespatial.generate_segment_candidates(left: vibespatial.geometry.owned.OwnedGeometryArray, right: vibespatial.geometry.owned.OwnedGeometryArray, *, tile_size: int = 512) SegmentIntersectionCandidates¶