Image Analysis#

class payloadcomputerdroneprojekt.image_analysis.ImageAnalysis(config: dict, camera: AbstractCamera, comms: Communications)[source]#

Bases: object

Handles image analysis for the drone payload computer, including color and shape detection, object localization, and image quality assessment.

Parameters:
  • config (dict) – Configuration dictionary for image analysis parameters.

  • camera (AbstractCamera) – Camera object implementing AbstractCamera.

  • comms (Communications) – Communications object for drone telemetry.
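
A minimal construction sketch; the camera, communications, and configuration objects below are placeholders for project-specific instances, and only the constructor signature and module path come from the documented API.

    from payloadcomputerdroneprojekt.image_analysis import ImageAnalysis

    camera = ...   # any AbstractCamera implementation provided by the project
    comms = ...    # Communications instance connected to the drone
    config = {}    # image-analysis configuration (project-specific keys)

    analyzer = ImageAnalysis(config=config, camera=camera, comms=comms)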

add_lat_lon(obj: dict, rotation: List[float] | ndarray, height: float, image_size: Tuple[int, int], loc_to_global: Callable[[float, float], Any]) None[source]#

Add latitude and longitude to an object based on its local offset.

Parameters:
  • obj (dict) – Object dictionary.

  • rotation (list or np.array) – Rotation vector.

  • height (float) – Height value.

  • image_size (tuple) – Image size (height, width).

  • loc_to_global (callable) – Function to convert local to global coordinates.

Returns:

None
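
A hedged sketch of annotating a detection with a global position; analyzer is the ImageAnalysis instance from the construction sketch above, the rotation order and units are assumptions, and loc_to_global is a hypothetical converter standing in for the project's telemetry-based function.

    import numpy as np

    def loc_to_global(dx, dy):
        # Hypothetical local-to-global converter; the real project would
        # derive this from current drone telemetry.
        return (48.0 + dy * 1e-5, 11.0 + dx * 1e-5)

    frame = np.zeros((480, 640, 3), dtype=np.uint8)   # placeholder camera frame
    objects, _ = analyzer.compute_image(frame, height=12.0)
    if objects:
        analyzer.add_lat_lon(
            objects[0],
            rotation=[0.0, 0.0, 0.0],      # attitude vector (assumed order)
            height=12.0,                   # height value (assumed metres)
            image_size=frame.shape[:2],    # (height, width)
            loc_to_global=loc_to_global,
        )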

compute_image(image: ndarray, item: DataItem | None = None, height: float = 1) Tuple[List[dict], ndarray][source]#

Filter the image for the defined colors and detect objects.

Parameters:
  • image (np.array) – Input image.

  • item (DataItem or None) – Optional data item associated with the image.

  • height (float) – Minimum height for object detection.

Returns:

Tuple of (list of detected objects, shape-filtered image).

Return type:

tuple[list[dict], np.array]
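
A hedged per-frame sketch; analyzer is an ImageAnalysis instance as constructed above, and the synthetic frame merely stands in for a real camera image (channel order assumed).

    import numpy as np

    frame = np.zeros((480, 640, 3), dtype=np.uint8)   # placeholder camera frame
    objects, shape_filtered = analyzer.compute_image(frame, height=10.0)
    for obj in objects:
        print(obj)   # one dict per detected object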

detect_obj(objects: List[dict], filtered_image: Dict[str, Any], height: float = 1) None[source]#

Detect objects in a filtered image and append them to the objects list.

Parameters:
  • objects (list[dict]) – List to which detected objects are appended.

  • filtered_image (dict) – Dictionary containing the color name and the color-filtered image.

  • height (float) – Minimum height for object detection.

Returns:

None
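
Because detect_obj appends into a caller-owned list, a hedged pattern is to feed it each color-filtered dictionary returned by filter_colors; analyzer and frame are assumed as in the sketches above.

    objects = []
    color_filtered, shape_filtered = analyzer.filter_colors(frame)
    for filtered in color_filtered:     # one dict per configured color
        analyzer.detect_obj(objects, filtered, height=10.0)
    print(len(objects), "objects detected")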

filter_color(image: ndarray, color: str, shape_mask: ndarray | None = None) ndarray[source]#

Filter the image for a specific color.

Parameters:
  • image (np.array) – Input image.

  • color (str) – Color name (must be in defined colors).

  • shape_mask (np.array or None) – Optional shape mask to apply.

Returns:

Filtered image.

Return type:

np.array

Raises:

IndexError – If color is not defined.
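
Since an undefined color name raises IndexError, a hedged call can guard for it; the color name "red" is only an example and must appear in the configured colors.

    try:
        red_mask = analyzer.filter_color(frame, "red")
        red_in_shape = analyzer.filter_color(frame, "red",
                                             shape_mask=shape_filtered)
    except IndexError:
        print("color 'red' is not defined in the configuration")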

filter_colors(image: ndarray) Tuple[List[Dict[str, Any]], ndarray][source]#

Filter the image for each defined color and for the shape color.

Parameters:

image (np.array) – Input image.

Returns:

Tuple of (list of color-filtered dicts, shape-filtered image).

Return type:

tuple[list[dict], np.array]

filter_shape_color(image: ndarray) ndarray[source]#

Filter the image for the shape color.

Parameters:

image (np.array) – Input image.

Returns:

Shape-filtered image.

Return type:

np.array

find_code(obj: dict, shape_image: ndarray, height: float = 1) bool[source]#

Find code-like elements (e.g., QR-code markers) inside the object.

Parameters:
  • obj (dict) – Object dictionary.

  • shape_image (np.array) – Shape-filtered image.

  • height (float) – Minimum height for code element detection.

Returns:

True if code found, False otherwise.

Return type:

bool

get_all_obj_list() List[dict][source]#

Get a list of all detected objects with color and shape.

Returns:

List of object dictionaries.

Return type:

list[dict]

get_closest_element(image: ndarray, color: str, shape: str | None, item: DataItem | None = None, height: float = 1) dict | None[source]#

Get the closest detected object of a given color and shape.

Parameters:
  • image (np.array) – Input image.

  • color (str) – Color name.

  • shape (str or None) – Shape name (optional).

  • item (DataItem or None) – Optional data item associated with the image.

  • height (float) – Minimum height for object detection.

Returns:

Closest object dictionary or None.

Return type:

dict or None
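
A hedged sketch; the color and shape values are examples only (the shape names follow get_shape below), and analyzer and frame are assumed as before.

    closest = analyzer.get_closest_element(frame, color="red", shape="Kreis",
                                           height=10.0)
    if closest is None:
        print("no matching object in this frame")
    else:
        print("closest object:", closest)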

get_color_obj_list(color: str) List[dict][source]#

Get a list of all objects for a given color.

Parameters:

color (str) – Color name.

Returns:

List of object dictionaries.

Return type:

list[dict]

async get_current_offset_closest(color: str, shape: str, yaw_zero: bool = True, indoor: bool = False) Tuple[List[float] | None, float | None, float | None][source]#

Get the offset from the drone to the closest object of a given color and shape.

Parameters:
  • color (str) – Color name to detect.

  • shape (str) – Shape name to detect.

  • yaw_zero (bool) – If True, set yaw to zero for the calculation.

  • indoor (bool) – If True, operate in indoor mode.

Returns:

Tuple of (offset [x, y], height, yaw offset), or (None, None, None) if no matching object is found.

Return type:

tuple[list[float] | None, float | None, float | None]
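
Because this is a coroutine, a hedged sketch awaits it inside asyncio; the color and shape values are examples only.

    import asyncio

    async def offset_to_target():
        offset, height, yaw = await analyzer.get_current_offset_closest(
            color="red", shape="Kreis", yaw_zero=True)
        if offset is None:
            print("target not found")
        else:
            print("offset x/y:", offset, "height:", height, "yaw offset:", yaw)

    asyncio.run(offset_to_target())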

get_filtered_objs() Dict[str, Dict[str, List[dict]]][source]#

Get a dictionary of all filtered objects.

Returns:

Dictionary of filtered objects by color and shape.

Return type:

dict[str, dict[str, list]]
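
The returned nesting is color, then shape, then a list of objects, so a hedged iteration looks like this:

    filtered = analyzer.get_filtered_objs()
    for color, shapes in filtered.items():
        for shape, objs in shapes.items():
            print(f"{color}/{shape}: {len(objs)} object(s)")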

get_local_offset(obj: dict, rotation: List[float] | ndarray, height: float, image_size: Tuple[int, int]) ndarray[source]#

Get the local offset of an object in the drone’s coordinate system.

Parameters:
  • obj (dict) – Object dictionary.

  • rotation (list or np.array) – Rotation vector.

  • height (float) – Height value.

  • image_size (tuple) – Image size (height, width).

Returns:

Local offset [x, y, z].

Return type:

np.array
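
A hedged sketch; the object is assumed to come from compute_image, and the rotation order and units are assumptions.

    offset = analyzer.get_local_offset(
        objects[0],
        rotation=[0.0, 0.0, 0.0],      # attitude vector (assumed order)
        height=10.0,                   # height value (assumed metres)
        image_size=frame.shape[:2],    # (height, width)
    )
    print("local offset [x, y, z]:", offset)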

get_matching_objects(color: str, shape: str | None = None) List[dict][source]#

Get all matching filtered objects for a color and optional shape.

Parameters:
  • color (str) – Color name.

  • shape (str or None) – Shape name (optional).

Returns:

List of object dictionaries.

Return type:

list[dict]
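
A hedged sketch; the color and shape values are examples only.

    red_any = analyzer.get_matching_objects("red")                # any shape
    red_circles = analyzer.get_matching_objects("red", "Kreis")   # one shape
    print(len(red_any), "red objects,", len(red_circles), "of them circles")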

get_shape(obj: dict, shape_image: ndarray, height: float = 1) str | bool[source]#

Detect the shape inside the object boundaries.

Parameters:
  • obj (dict) – Object dictionary.

  • shape_image (np.array) – Shape-filtered image.

  • height (float) – Minimum height for shape detection.

Returns:

Shape name (“Dreieck” (triangle), “Rechteck” (rectangle), “Kreis” (circle)) or False if no shape is detected.

Return type:

str or bool
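
Because the return value is either a shape name or False, a hedged check distinguishes the two; the inputs are assumed to come from compute_image and filter_colors as in the sketches above.

    shape = analyzer.get_shape(objects[0], shape_filtered, height=10.0)
    if shape is False:
        print("no shape detected inside the object")
    else:
        print("detected shape:", shape)   # "Dreieck", "Rechteck" or "Kreis"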

async image_loop() None[source]#

Main logic for per-frame image analysis.

Returns:

None

static quality_of_image(image: ndarray) float[source]#

Assess the quality of an image using Laplacian variance.

Parameters:

image (np.array) – Image array.

Returns:

Laplacian variance (higher is sharper).

Return type:

float
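
As a static method it can be called without an instance; the synthetic image below is only for illustration, and the expected channel layout is an assumption.

    import numpy as np
    from payloadcomputerdroneprojekt.image_analysis import ImageAnalysis

    image = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # synthetic frame
    sharpness = ImageAnalysis.quality_of_image(image)
    print("Laplacian variance:", sharpness)   # higher means sharper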

start_cam(images_per_second: float = 1.0) bool[source]#

Start capturing and saving images asynchronously.

Parameters:

images_per_second (float) – Capture rate in frames per second.

Returns:

True if camera started successfully, False otherwise.

Return type:

bool

stop_cam() bool[source]#

Stop capturing and saving images.

Returns:

True if stopped successfully, False otherwise.

Return type:

bool
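
A hedged sketch pairing start_cam with stop_cam; the 2 fps rate and the sleep duration are arbitrary example values.

    import time

    if analyzer.start_cam(images_per_second=2.0):   # begin asynchronous capture
        time.sleep(5)                               # capture for a few seconds
        if not analyzer.stop_cam():
            print("camera did not stop cleanly")
    else:
        print("camera failed to start")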

async take_image() bool[source]#

Take a single image asynchronously.

Returns:

True if successful, False otherwise.

Return type:

bool
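
A hedged sketch awaiting a single capture inside asyncio; analyzer is assumed as in the sketches above.

    import asyncio

    async def capture_once():
        ok = await analyzer.take_image()
        print("image captured" if ok else "image capture failed")

    asyncio.run(capture_once())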