Intelligent parking spot occupancy detection

Written by Izan Leal

August 27, 2025

Table of Contents

  1. Automatic parking spot occupancy detection
  2. A closer look at our approach
  3. Conclusions and future work

Automatic parking spot occupancy detection

During a collaboration with a client in the automotive industry, we were asked to improve an AI system used to detect parked vehicles and track occupancy in real time. While the car detection module was partially functional, the definition of individual parking spots was still being done manually, a process that was time-consuming, error-prone, and impossible to scale across multiple installations.

Most commercial solutions rely on manual calibration or fixed grid assumptions, and often underperform in the presence of camera distortion or non-standard lot layouts. Pretrained models or geometric heuristics can help, but they lack generalization and often require site-specific training or human intervention.

In our case, the spots were not always rectangular, horizontal, or closed boxes. Instead, we had to work with partial lane markings: sometimes just parallel lines, and sometimes no rear boundary at all.

We saw an opportunity to combine classical Computer Vision with AI-assisted modules, building a robust proof of concept grounded in image pre-processing and geometry.


A closer look at our approach

We began development of the slot detector by thoroughly analyzing the reference images provided by the client, along with the accompanying annotation dictionary used to describe the parking environment. This initial assessment allowed us to establish several key premises that guided our approach moving forward:

Parking spots were not always closed rectangular boxes—some were open-ended—but in every case, lane markings were expected to appear on at least the lateral sides of the car. These markings typically had a strong color contrast compared to the surrounding floor surface. Additionally, due to the nature of the wide-angle cameras deployed, the images suffered from noticeable lens distortion, and the orientation of spots could vary, including horizontal or non-axis-aligned configurations.

Figure 1: Sample parking spots image.

Our first goal was to recover the actual layout of the scene by correcting for lens distortion. We used intrinsic camera parameters to apply a distortion-removal algorithm, resulting in a geometrically accurate representation of the environment. This correction was critical, as many of the subsequent operations—particularly those related to line detection and alignment—depend on preserving true perspective geometry.

Figure 2: Dewarped image.
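
As an illustration, a distortion-removal step of this kind can be implemented with OpenCV. The intrinsic matrix and distortion coefficients below are placeholders; in the real system they come from each camera's calibration data:

```python
import cv2
import numpy as np

# Placeholder intrinsics; real values come from each camera's calibration.
K = np.array([[1450.0, 0.0, 1296.0],
              [0.0, 1450.0, 972.0],
              [0.0, 0.0, 1.0]])
dist_coeffs = np.array([-0.42, 0.18, 0.0, 0.0, -0.03])  # k1, k2, p1, p2, k3

img = cv2.imread("parking_lot.png")
h, w = img.shape[:2]

# Compute a new camera matrix that keeps all source pixels in view,
# then remove the lens distortion.
new_K, _ = cv2.getOptimalNewCameraMatrix(K, dist_coeffs, (w, h), alpha=1)
undistorted = cv2.undistort(img, K, dist_coeffs, None, new_K)
```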

Once we had clean, rectified images, we enhanced the visual features of interest by applying color correction to emphasize the contrast between spot lines and the floor. We then used a Canny edge detection algorithm to identify prominent contours in the scene, revealing the rough structure of the parking lot.

Figure 3: Edges from Canny.
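
A minimal sketch of this stage, assuming CLAHE for the contrast boost (the exact color correction used in the project may differ) followed by OpenCV's Canny detector:

```python
import cv2

# Rectified frame from the previous step.
undistorted = cv2.imread("dewarped.png")
gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)

# Emphasize the contrast between painted markings and the floor surface.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(gray)

# Light blur to suppress asphalt texture before edge extraction.
blurred = cv2.GaussianBlur(enhanced, (5, 5), 0)

# Canny thresholds are illustrative and would be tuned per site.
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
```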

From this edge map, we applied a line detection algorithm based on the Hough Transform to extract linear features that likely corresponded to parking boundaries. We filtered out short segments and noise, retaining only lines of a minimum meaningful length.

Figure 4: Lines from Hough.
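
This step maps naturally onto OpenCV's probabilistic Hough transform; the thresholds below are illustrative values that would be tuned per installation:

```python
import cv2
import numpy as np

# Edge map from the previous step (recomputed here for self-containment).
gray = cv2.imread("dewarped.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, 50, 150)

# Probabilistic Hough transform returns segments as (x1, y1, x2, y2).
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=80, minLineLength=100, maxLineGap=10)

# Keep only lines of a minimum meaningful length; the value is illustrative.
MIN_LENGTH = 120
lines = []
if segments is not None:
    for x1, y1, x2, y2 in segments.reshape(-1, 4):
        if np.hypot(x2 - x1, y2 - y1) >= MIN_LENGTH:
            lines.append((x1, y1, x2, y2))
```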

To improve stability, we merged parallel and adjacent lines and joined colinear segments, further refining the result. The objective was to isolate the longest and most significant structural lines delineating individual parking spaces.

Figure 5: Filtered lines.
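
One plausible way to merge near-parallel, near-colinear segments is a greedy grouping by angle and perpendicular offset. The helper below is a hypothetical sketch, not the exact heuristic used in the project:

```python
import numpy as np

def merge_colinear(lines, angle_tol=np.deg2rad(5), dist_tol=10):
    """Greedily merge segments with similar direction and small offset."""
    merged = []
    used = [False] * len(lines)
    for i, (x1, y1, x2, y2) in enumerate(lines):
        if used[i]:
            continue
        group = [(x1, y1), (x2, y2)]
        a1 = np.arctan2(y2 - y1, x2 - x1) % np.pi
        for j in range(i + 1, len(lines)):
            if used[j]:
                continue
            u1, v1, u2, v2 = lines[j]
            a2 = np.arctan2(v2 - v1, u2 - u1) % np.pi
            # Angle difference on the half-circle (lines are undirected).
            if min(abs(a1 - a2), np.pi - abs(a1 - a2)) > angle_tol:
                continue
            # Perpendicular distance from the candidate's midpoint to line i.
            mx, my = (u1 + u2) / 2, (v1 + v2) / 2
            d = abs((y2 - y1) * mx - (x2 - x1) * my + x2 * y1 - y2 * x1)
            d /= np.hypot(x2 - x1, y2 - y1)
            if d <= dist_tol:
                group += [(u1, v1), (u2, v2)]
                used[j] = True
        # Replace the group by its two extreme endpoints along the direction.
        pts = np.array(group, dtype=float)
        direction = np.array([np.cos(a1), np.sin(a1)])
        t = pts @ direction
        merged.append((*pts[t.argmin()], *pts[t.argmax()]))
    return merged
```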

With this refined set of lines, we focused primarily on vertical boundaries, as they typically define the separations between adjacent parking spots. These vertical lines served as anchors for inferring trapezoidal regions representing parking areas, and horizontal lines were then used to further subdivide these regions into distinct parking spots.

To close these shapes, we followed a line-extension strategy that ensured the non-vertical sides of the trapezoids were fully horizontal. This allowed us to construct consistent spot regions, even in the presence of incomplete or faded markings.
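
The sketch below illustrates the line-extension idea with a hypothetical helper: given two near-vertical boundary lines, it closes the spot polygon with fully horizontal top and bottom edges:

```python
def close_trapezoid(left, right):
    """Close a spot polygon from two near-vertical boundary lines.

    Hypothetical helper: each line is (x_top, y_top, x_bottom, y_bottom);
    the top and bottom edges are made fully horizontal by extending both
    boundary lines to shared y values.
    """
    def x_at(line, y):
        # Intersect the (extended) boundary line with a horizontal line.
        x1, y1, x2, y2 = line
        if y2 == y1:
            return x1
        return x1 + (x2 - x1) * (y - y1) / (y2 - y1)

    y_top = min(left[1], right[1])       # shared horizontal top edge
    y_bottom = max(left[3], right[3])    # shared horizontal bottom edge
    return [[x_at(left, y_top), y_top],
            [x_at(right, y_top), y_top],
            [x_at(right, y_bottom), y_bottom],
            [x_at(left, y_bottom), y_bottom]]
```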

The final parking spot coordinates were then preserved in the undistorted coordinate space. However, we also implemented a reverse mapping option to return them to the original distorted image plane. This ensured compatibility with downstream components of the client’s existing software stack.
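
Assuming the standard OpenCV pinhole model (the client's camera model may differ), such a reverse mapping can be sketched by normalizing the undistorted pixels and re-projecting them through the distortion coefficients:

```python
import cv2
import numpy as np

def to_distorted(points, new_K, K, dist_coeffs):
    """Map spot corners from the undistorted plane back to the original image."""
    pts = np.asarray(points, dtype=np.float64).reshape(-1, 2)
    # Undistorted pixel -> normalized camera coordinates on the z = 1 plane.
    ones = np.ones((len(pts), 1))
    normalized = (np.linalg.inv(new_K) @ np.hstack([pts, ones]).T).T
    # Re-apply the lens distortion by projecting with the original intrinsics.
    distorted, _ = cv2.projectPoints(normalized.reshape(-1, 1, 3),
                                     np.zeros(3), np.zeros(3), K, dist_coeffs)
    return distorted.reshape(-1, 2)
```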

Figure 6: Predicted spots overlay.

Spots in original coordinates (2592x1944 image size):

Spot 0: [[987, 233], [1057, 353], [1406, 362], [1204, 237]]
Spot 1: [[1057, 353], [1266, 737], [1859, 698], [1406, 362]]
Spot 2: [[765, 235], [702, 356], [1057, 353], [987, 233]]
Spot 3: [[702, 356], [522, 751], [1275, 757], [1057, 353]]
Spot 4: [[537, 246], [381, 369], [702, 356], [765, 235]]
Spot 5: [[381, 369], [10, 723], [515, 768], [702, 356]]

By combining geometric pre-processing, classical computer vision, and smart heuristics, we developed a robust method for automatic parking spot calibration, eliminating the need for manual spot definition.



Conclusions and future work

With this proof of concept successfully completed, we see strong potential for evolving the system into a fully autonomous smart parking solution, one that requires no manual intervention at any stage, from layout calibration to vehicle monitoring. The improved detection model and automated spot definition pipeline lay the foundation for a platform that can:

  • Continuously monitor parking occupancy in real time.
  • Dynamically recalibrate spot layouts if the camera position or markings change.
  • Integrate with back-end systems to enable services like automated ticketing, digital signage, or mobile parking apps.
  • Scale across multiple locations with minimal configuration effort.

This vision aligns with smart city and mobility-as-a-service (MaaS) trends, where infrastructure is expected to adapt automatically to changing conditions and user needs.

Contact us to learn how to bring intelligent, compliant video AI into your business operations.
