Our Mission

SightLinks is dedicated to revolutionizing accessibility mapping through advanced computer vision. We transform satellite imagery into actionable accessibility data, making urban environments more navigable for wheelchair users and others with mobility needs. Our platform combines cutting-edge AI with user-friendly interfaces to bridge the gap between satellite data and real-world accessibility solutions.

Problem Statement

Individuals with mobility impairments face significant navigation challenges because most navigation support is not tailored to their specific needs. Impairments take many forms, and it is not always possible to plan around them in advance. Detecting and mapping accessibility features is crucial to improving these individuals' mobility, yet manual surveying is time- and resource-intensive, while existing automated solutions struggle with scene complexity and the need for georeferencing. This is a particular problem for organisations and charities, whose large-scale mapping needs amplify these limitations.

Our Solution

To tackle this, we developed SightLinks, a computer vision system that automates the detection and mapping of accessibility features. Our approach combines image segmentation, classification screening, and memory-management techniques to efficiently process large-scale georeferenced datasets. This reduces the load on our core YOLO detection model when locating features, without discarding relevant information or compromising accuracy.
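The screening idea above can be sketched in a few lines. This is an illustrative toy, not the actual SightLinks API: the function names (`screen_tile`, `detect_features`) and the dictionary-based tile representation are assumptions standing in for a lightweight classifier and a YOLO model.

```python
# Hypothetical sketch of the two-stage pipeline: a cheap classifier screens
# image tiles first, and only the tiles it flags are passed to the more
# expensive YOLO-style detector. All names here are illustrative.

def screen_tile(tile):
    """Stage 1 stand-in: cheap classifier flags tiles that may contain a crossing."""
    return tile.get("score", 0.0) >= 0.5

def detect_features(tile):
    """Stage 2 stand-in: expensive detector returns bounding boxes for a tile."""
    return tile.get("boxes", [])

def run_pipeline(tiles):
    detections = []
    for tile in tiles:
        if screen_tile(tile):                         # classification screening
            detections.extend(detect_features(tile))  # YOLO-based detection
    return detections

tiles = [
    {"score": 0.9, "boxes": [(10, 20, 40, 50)]},
    {"score": 0.1, "boxes": []},  # screened out, detector never runs
]
print(run_pipeline(tiles))  # [(10, 20, 40, 50)]
```

The benefit is that the detector, the dominant cost, runs only on the small fraction of tiles the screening stage considers promising.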

Achievement & Impact

A key innovation of our system is its comprehensive dataset of over 23,000 manually annotated pedestrian crossings, the largest open-source collection of its kind. This extensive dataset, combined with our modular pipeline structure, makes SightLinks easy to integrate into larger accessibility mapping systems. By automating the detection process, organisations can allocate their limited mapping resources more effectively. SightLinks currently recognises only pedestrian crossings, but it can be retrained to detect other features wherever sufficient training data is available. The system scales well and already supports the most common georeferenced formats, such as GeoTIFF and world files. With SightLinks at its foundation, detection and mapping can be reduced to capturing aerial or satellite imagery and having the relevant features detected with minimal human intervention.

Key Features

Hybrid Detection System

Two-stage detection pipeline combining efficient classification screening with precise YOLO-based object detection

Optimized Processing

Memory-efficient pipeline with image segmentation and classification screening for large-scale datasets
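One way to make large-image processing memory-efficient, as described above, is to generate tile windows lazily rather than materialising every crop at once. This is a minimal sketch under assumed defaults; the tile size and overlap values are illustrative, not SightLinks parameters.

```python
# Illustrative sketch of memory-efficient segmentation: a generator yields
# one crop window at a time, so only the tile currently being processed
# needs to be resident in memory. Tile/overlap values are assumptions.

def tile_windows(width, height, tile=256, overlap=32):
    """Yield (x, y, w, h) crop windows covering a width x height image."""
    step = tile - overlap
    for y in range(0, height, step):
        for x in range(0, width, step):
            # Clamp the window at the image border.
            yield (x, y, min(tile, width - x), min(tile, height - y))

# Example: a 512 x 512 image yields a 3 x 3 grid of overlapping windows.
windows = list(tile_windows(512, 512))
print(len(windows))  # 9
```

The overlap between adjacent windows prevents features that straddle a tile boundary from being missed by both tiles.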

Precise Georeferencing

Accurate transformation of pixel coordinates to real-world geographic positions for mapping applications
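For imagery georeferenced with a world file, the pixel-to-geographic step above is a six-parameter affine transform, which world files (.tfw, .jgw, etc.) store as six lines in the order A, D, B, E, C, F. The sketch below applies that standard transform; the sample parameter values are made up for illustration and are not tied to any SightLinks dataset.

```python
# Sketch of pixel-to-geographic conversion using the six-parameter affine
# transform stored in an ESRI world file. Sample values are illustrative.

def pixel_to_geo(col, row, a, d, b, e, c, f):
    """Apply the world-file affine transform to pixel (col, row).

    World file line order is A, D, B, E, C, F, where:
      x = A*col + B*row + C
      y = D*col + E*row + F
    A and E are the pixel sizes (E is negative for north-up images),
    D and B are rotation terms, and (C, F) is the centre of the
    upper-left pixel in map coordinates.
    """
    x = a * col + b * row + c
    y = d * col + e * row + f
    return x, y

# Example: 0.5 m/pixel, no rotation, upper-left pixel at (400000, 5600000).
x, y = pixel_to_geo(100, 200, a=0.5, d=0.0, b=0.0, e=-0.5,
                    c=400000.0, f=5600000.0)
print(x, y)  # 400050.0 5599900.0
```

GeoTIFF embeds the equivalent transform in the file's metadata, so the same mapping applies once the six coefficients are read out.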

Project Timeline

Comprehensive breakdown of project phases and milestones over 23 weeks

[Gantt chart: the tasks below were scheduled across weeks 1 through 23]
Project Introduction
Project Redefinition
Requirements & Planning
Pipeline Planning & UX Research
MVP Development
Dataset Creation
Object Detection Enhancement
Pitch Presentations
Model Optimization
Frontend Development
Backend Optimization
Developer Utilities
Backend Deployment
Client Demo & Feedback
Final Integration & Packaging
Documentation & Handoff

Project Demo

Watch our comprehensive demo showcasing SightLinks's key features and functionalities

Our Team

Kostas Demiris
Team Lead, Researcher, ML Engineer

Aiden (Yiliu) Li
Frontend Developer, Researcher, ML Engineer

Edward Tandanu
Client Liaison, Backend Developer, Data Engineer

Arif Imtiaz Khan
Data Engineer, Literature Researcher, Backend Developer