Widget Implementation

This section gives a brief overview of how our features are implemented in the frontend widget. At a high level, the widget uses an event-driven architecture with reactive front-end components that communicate with our database via our custom API. State is managed in response to user interactions and form submissions.

Frameworks and APIs Used

Our system's whole purpose is to make mapping technology more accessible. That is why we build on a variety of existing mapping technologies and frameworks, as well as established frontend design patterns, for a clean final look. Note that all required API keys should be defined in the .env.local file.

Maps and Location Services

Google Maps

Google Maps API

The main component in our widget is a Google Maps map component with functionality such as street view and satellite view. For each data point we also provide a link to Google Maps navigation for easy wayfinding. For our clustering feature (where we group nearby markers for a clearer view), we use the @googlemaps/markerclusterer library.

what3words

what3words API

Our system is integrated with what3words functionality, allowing users to get the what3words tag for any data point location, making it easy to memorise and, in an emergency, to communicate quickly. We also use the what3words API to overlay its grid on our map, which we then use to showcase spatial information about a data point.

APIs

Azure

Azure Speech Services

We use this API for our text-to-speech and speech-to-text functionality, allowing navigation to data points hands-free.

Custom Backend API

Our own API to access our database, also used to report additional accessibility features to the database.

Frameworks and UI Libraries

React

React

Used to build a responsive and scalable widget out of reusable components.

UI Libraries

We use Next.js for routing and performance, Tailwind CSS for styling, Shadcn UI for accessible components, and Lucide for intuitive icons, making our design clean.

Location and Mapping Functionality

This section describes how our widget handles displaying the map and the user's position.

User Location and Map Display

When the widget first loads, it tries to use the browser's Geolocation API to obtain the initial position and updates the user location accordingly. This happens in the following useEffect hook:

useEffect(() => {
  if (navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(
      (position) => {
        const { latitude, longitude } = position.coords;
        setUserLocation({ lat: latitude, lng: longitude });
        fetchNearbyLocations(latitude, longitude, 10);
      },
      (error) => {
        console.error("Error getting user location:", error);
        setUserLocation(DEFAULT_LOCATION);
        fetchNearbyLocations(
          DEFAULT_LOCATION.lat,
          DEFAULT_LOCATION.lng,
          10
        );
      }
    );
  } else {
    console.error("Geolocation is not supported by this browser.");
    setUserLocation(DEFAULT_LOCATION);
    fetchNearbyLocations(
      DEFAULT_LOCATION.lat,
      DEFAULT_LOCATION.lng,
      10
    );
  }
}, []);

On success, the hook updates userLocation and calls the fetchNearbyLocations function, which uses our API to fetch all data points within a 10-mile radius of the user. If geolocation is unsuccessful, the system falls back to our DEFAULT_LOCATION in central London.
The user's location is then displayed as a marker created with the Google Maps JavaScript API's Marker class, given a pulsating effect so it can be spotted easily on screen.

// Create a pulsating marker for the user's location
const userMarker = new google.maps.Marker({
  position: userLocation,
  map: mapRef.current,
  icon: {
    path: google.maps.SymbolPath.CIRCLE,
    scale: 10,
    fillColor: "#4285F4",
    fillOpacity: 1,
    strokeColor: "#FFFFFF",
    strokeWeight: 2,
  },
  title: "Your Location",
  zIndex: 1000, // Ensure user marker is on top
});

// Add pulsating effect
const pulsate = (marker) => {
  let opacity = 1;
  let increasing = false;
  
  setInterval(() => {
    if (opacity <= 0.5) {
      increasing = true;
    } else if (opacity >= 1) {
      increasing = false;
    }
    
    opacity = increasing ? opacity + 0.01 : opacity - 0.01;
    
    marker.setIcon({
      ...marker.getIcon(),
      fillOpacity: opacity,
    });
  }, 50);
};

Note that the marker is displayed on the Google Maps map component, whose own useEffect is also triggered on initialization. The map itself is, like the user marker, created through the Google Maps API via new google.maps.Map. To the left of the map component, a sidebar with accessibility features is initialized; these features are described in more detail in the sections below.
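For reference, the options object handed to new google.maps.Map can be sketched as follows. The zoom level and control flags here are illustrative assumptions, not the widget's exact values:

```javascript
// Illustrative options for the Google Maps constructor; the zoom level and
// enabled controls are assumptions, not the widget's exact configuration.
const buildMapOptions = (center) => ({
  center,                  // { lat, lng } to centre the map on
  zoom: 15,                // street-level zoom
  streetViewControl: true, // street view, mentioned above
  mapTypeControl: true,    // map / satellite toggle
});

// Used as: new google.maps.Map(mapContainer, buildMapOptions(userLocation));
```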

Data Layers Functionality

Our system organizes accessibility information into intuitive data layers (such as wheelchair services and zebra crossings) that can be toggled on the map and clicked for extra information.

Data Layers Initialization

When the useEffect that locates the user runs, it calls fetchNearbyLocations. This function retrieves the data points from our database via our API. Upon successful receipt of the data, it is stored in the nearbyLocations state variable. Before storing, we also filter the data into the respective data layers by checking whether each data point's location.data_layers field contains the zebra_crossings (or another data layer) string.

const fetchNearbyLocations = async (lat, lng, radius) => {
  setIsLoading(true);
  try {
    const response = await fetch(
      `${API_ENDPOINTS.NEARBY_LOCATIONS}?lat=${lat}&lng=${lng}&radius=${radius}`
    );
    const data = await response.json();
    
    // Process and categorize locations
    const locations = data.locations || [];
    
    // Filter locations into different categories
    const zebraCrossings = locations.filter(location =>
      location.data_layers.some(layer => layer.name === "zebra_crossings")
    );
    
    const wheelchairServices = locations.filter(location =>
      location.data_layers.some(layer => layer.name === "wheelchair_services")
    );
    
    // Update state with all locations and categorized ones
    setNearbyLocations(locations);
    setZebraCrossingLocations(zebraCrossings);
    setWheelchairServiceLocations(wheelchairServices);
    
    // If locations found, center map on first one
    if (locations.length > 0) {
      // Center map logic...
    }
  } catch (error) {
    console.error("Error fetching nearby locations:", error);
    setErrorMessage("Failed to load location data. Please try again later.");
  } finally {
    setIsLoading(false);
  }
};

When a specific data layer is toggled on, the same filtering is applied to nearbyLocations so the correct layer is displayed. The markers for the toggled-on data layer are created much like the user marker, using the Google Maps API with a custom marker icon that we designed in Figma. When clicked, a marker calls the createInfoWindowContent function, which displays more precise information about that data point (also using a what3words API request to translate that specific location into a what3words address).
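The per-layer filtering applied on toggle can be sketched as a small helper mirroring the checks in fetchNearbyLocations (the helper name filterByLayer is hypothetical):

```javascript
// Hypothetical helper mirroring the toggle-time filtering described above.
// Each location is assumed to carry a data_layers array of { name } objects.
const filterByLayer = (locations, layerName) =>
  locations.filter((location) =>
    location.data_layers.some((layer) => layer.name === layerName)
  );

// e.g. filterByLayer(nearbyLocations, "zebra_crossings")
```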

Marker Clustering

An important accessibility-related feature in our system. Its purpose is to prevent all markers being displayed on top of each other when the map is zoomed out. To avoid having to render hundreds of markers at once, we integrated the MarkerClusterer library, which groups markers into clusters as a function of how far the user has zoomed into the map.

// Create marker clusterer to group markers
if (markerClusterer) {
  markerClusterer.clearMarkers();
}

// Add all markers to the clusterer
const markers = [...zebraCrossingMarkers, ...wheelchairServiceMarkers];
markerClusterer = new MarkerClusterer({
  map: mapRef.current,
  markers: markers,
  algorithm: new SuperClusterAlgorithm({
    radius: 100,
    maxZoom: 15,
  }),
  renderer: {
    render: ({ count, position }) => {
      return new google.maps.Marker({
        position,
        label: { text: String(count), color: "#FFFFFF" },
        icon: {
          path: google.maps.SymbolPath.CIRCLE,
          fillColor: "#4285F4",
          fillOpacity: 0.8,
          strokeWeight: 1,
          strokeColor: "#FFFFFF",
          scale: Math.log10(count) * 10 + 15,
        },
        zIndex: Number(google.maps.Marker.MAX_ZINDEX) - count,
      });
    },
  },
});

Accessibility Features

We've built our widget on accessible design principles from the beginning, ensuring it can be tailored to a variety of accessibility needs. This section outlines the most important accessibility features in more depth. Our testers voted these the most important for the application: high-contrast mode, focus mode, and the speech interface.
We also briefly cover other features such as different font styles and sizes.

High-contrast theme (Dark Mode) and yellow theme (Yellow Mode)

We use a colorMode variable in our code to track the current theme, allowing three colour modes: type ColorMode = "light" | "dark" | "yellow";. Note that dark mode is what we call the black-background, yellow-text theme. The three modes can be toggled via dedicated buttons with clean icons, for example:

<button
  onClick={() => setColorMode("dark")}
  className={`flex-1 px-3 py-2 border-x ${
    colorMode === "dark" ? "bg-gray-300 text-black" : ""
  }`}
>
  <MoonIcon className="h-4 w-4 mx-auto" />
</button>

Note that the specific styles for each colour mode are defined in the mapStyles section of the code, where the original Google Maps colour schemes are manually overridden with specific values. The designs for these were found on open-source design pages online. The colour themes change not only the map style but all other UI components as well. To achieve this, each UI component has a conditional code snippet that applies different styles based on the currently selected theme, see the example below.

<div className={`p-4 space-y-4 h-full ${
  colorMode === "dark" ? "bg-gray-800 text-yellow-300 border-r-2 border-gray-700"
  : colorMode === "yellow" ? "bg-yellow-100 border-yellow-100"
  : "bg-white border-white"
}`}>
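The mapStyles entries mentioned above follow the Google Maps styled-map format. A minimal hypothetical dark-mode excerpt (the colour values here are illustrative, not the widget's actual palette):

```javascript
// Hypothetical dark-mode entries in the Google Maps styled-map format;
// the actual colour values in our mapStyles differ.
const darkMapStyle = [
  { elementType: "geometry", stylers: [{ color: "#212121" }] },
  { elementType: "labels.text.fill", stylers: [{ color: "#FFD600" }] },
  { featureType: "road", elementType: "geometry", stylers: [{ color: "#383838" }] },
];
```

Such an array would be passed to the map as part of its options (e.g. styles: darkMapStyle) when dark mode is selected.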

Voice Control Integration

Under the map component, the widget has a Siri-style microphone button. To attract attention, the button pulsates and turns red when pressed. On press, we initialise the Microsoft Cognitive Services Speech SDK with our credentials and language preferences, which gives us an initialized voice recognizer and synthesizer.

useEffect(() => {
  const subscriptionKey = process.env.NEXT_PUBLIC_AZURE_SPEECH_API_KEY;
  const serviceRegion = "uksouth";
  
  try {
    const speechConfig = SpeechSDK.SpeechConfig.fromSubscription(
      subscriptionKey,
      serviceRegion
    );
    speechConfig.speechRecognitionLanguage = "en-GB";
    speechConfig.speechSynthesisLanguage = "en-GB";
    
    const audioConfig = SpeechSDK.AudioConfig.fromDefaultMicrophoneInput();
    const recognizer = new SpeechSDK.SpeechRecognizer(speechConfig, audioConfig);
    const synthesizer = new SpeechSDK.SpeechSynthesizer(speechConfig);
    // ...

The recognizer and synthesizer objects are then used to interact with the user. First, the speak function is invoked on the synthesizer to prompt the user - speak("Please say the data layer you want to select.");. The user's voice command is then processed by the recognizer: the transcript is lowercased and saved as a string, which is checked for occurrences of any data-layer words, such as "zebra crossing", before that data layer is toggled on. This is implemented in the processSpeechCommand function. When listening for specific words, as in the listenForNavigationConfirmation function, a dedicated handler again lowercases the spoken text and searches it for an occurrence of a specific word. When waiting for the user's confirmation, for example, we accept the words "yes" and "yeah"; see the code snippet below. The whole speech functionality is implemented with these functions.

const listenForNavigationConfirmation = (destination: LatLng) => {
  //...
  recognizer.recognizeOnceAsync(
    (result) => {
      const transcript = result.text.toLowerCase();
      
      if (transcript.includes("yes") || transcript.includes("yeah")) {
        speak("Opening Google Maps navigation.");
        window.open(createDirectionsLink(destination, userLocation), "_blank");
      } else {
        speak("Navigation cancelled. The location is still visible on your map.");
    // ...
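The keyword matching inside processSpeechCommand can be sketched as follows; the keyword-to-layer table is an assumption for illustration, not the exact mapping in our code:

```javascript
// Simplified sketch of the keyword matching in processSpeechCommand.
// The keyword-to-layer table below is a hypothetical example.
const matchDataLayer = (spokenText) => {
  const transcript = spokenText.toLowerCase();
  const layers = {
    "zebra crossing": "zebra_crossings",
    "wheelchair": "wheelchair_services",
  };
  for (const [keyword, layerName] of Object.entries(layers)) {
    if (transcript.includes(keyword)) return layerName;
  }
  return null; // no recognised layer in the utterance
};
```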

Focus Mode

An accessibility feature that members of the visually impaired community told us is very important is focus mode. Focus mode dims the screen except for a small, brighter circle around the cursor. If the "focus" hovers over the map component, the component also grows in size for a magnifying-glass effect.
More specifically, we first define the stylistic behaviour of this feature in our CSS styling file, where parameters such as --focus-mode-zoom-factor are set.
After the feature is toggled on, a useEffect takes care of mouse tracking with the help of the handleMouseMove function.

const handleMouseMove = (e) => {
  // Store cursor coordinates as CSS variables
  document.documentElement.style.setProperty("--mouse-x", `${e.clientX}px`);
  document.documentElement.style.setProperty("--mouse-y", `${e.clientY}px`);
  // ...
}

The code then checks whether the mouse coordinates fall within the map container's area on screen and, if so, magnifies the map centred on the mouse position. The map is scaled up via the CSS transform property - mapElement.style.transform = `scale(var(--focus-mode-zoom-factor))`;.
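The cursor-over-map comparison reduces to a plain bounds check; a sketch, assuming rect has the shape returned by getBoundingClientRect():

```javascript
// Sketch of the check deciding whether the cursor is over the map container.
// rect matches the shape returned by mapElement.getBoundingClientRect().
const isCursorOverMap = (x, y, rect) =>
  x >= rect.left && x <= rect.right && y >= rect.top && y <= rect.bottom;

// When true, the map is scaled up:
// mapElement.style.transform = `scale(var(--focus-mode-zoom-factor))`;
```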

Other Accessibility Features

As with the themes, the widget can toggle between different font styles and sizes. The application maintains a global state, and each UI component conditionally checks whether the text should be rendered large or in a dyslexia-friendly font. The toggle buttons are, like the theme controls, implemented in the sidebar.

What3words and Grid Functionality

Our integration with what3words (an addressing system that divides the world into 3x3m squares, each with three words associated with it) allows users to easily remember any data point location, and to communicate it quickly in an emergency. The grid functionality lets users see data points displayed spatially.

What3Words Grid Implementation

We have implemented both the standard what3words 3x3m grid and an enhanced 0.5x0.5m grid for more precise spatial information (for potential smaller data points added later).
The widget provides three grid visualization modes: none, 3x3m (the standard what3words grid), and 0.5x0.5m (a high-precision grid). When a user activates the 3x3m grid toggle, we fetch the line data from the what3words API to display the grid. The 0.5x0.5m mode then subdivides each what3words square into smaller 0.5m cells for more detailed information, provided the object is stored in our database at 0.5x0.5m resolution.
When fetching the grid, we first get the current map boundaries and make an API request. Then, using the Google Maps API, we draw Polyline objects overlaid on top of the map.

const fetchAndDisplayW3WGrid = () => {
  // ...
  
  // We format bounding box for API call
  const boundingBox = `${sw.lat()},${sw.lng()},${ne.lat()},${ne.lng()}`;
  
  fetch(`https://api.what3words.com/v3/grid-section?key=${API_KEY}&bounding-box=${boundingBox}`)
    .then(response => response.json())
    .then(data => {
      // Draw grid lines on the map
      data.lines.forEach(line => {
        const polyline = new google.maps.Polyline({
// ...
}

For the high-precision 0.5x0.5m grid, we subdivide each 3x3m square into a 6×6 grid of smaller cells. When a data point marker is pressed, additional information, including the coloured grid, is shown. Because the what3words API does not support grid colouring, we built our own highlightLocationGridCells function, shown below. Again, we use a Google Maps API Rectangle object to place the cell in exactly the right place on the Map object. Note that we found a bug in this functionality, described in more detail in Evaluation - Known Bugs.

const highlightLocationGridCells = (location, color) => {
  // Create a rectangle for the grid cell
  const rectangle = new google.maps.Rectangle({
    bounds: new google.maps.LatLngBounds(
      { lat: location.bottom_left_latitude, lng: location.bottom_left_longitude },
      { lat: location.top_right_latitude, lng: location.top_right_longitude }
    ),
    map: mapRef.current,
    fillColor: color, // colour passed in by the caller
    // colors, etc...
  });
  return rectangle;
}
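The 6×6 subdivision itself can be sketched as a pure helper that splits a square's bounds into 36 cell bounds (the helper name subdivideSquare is hypothetical; the real code also draws each cell):

```javascript
// Hypothetical sketch of subdividing a what3words square's bounds into a
// 6x6 grid of 0.5m cells, each expressed as its own lat/lng bounds.
const subdivideSquare = (sw, ne, divisions = 6) => {
  const latStep = (ne.lat - sw.lat) / divisions;
  const lngStep = (ne.lng - sw.lng) / divisions;
  const cells = [];
  for (let row = 0; row < divisions; row++) {
    for (let col = 0; col < divisions; col++) {
      cells.push({
        sw: { lat: sw.lat + row * latStep, lng: sw.lng + col * lngStep },
        ne: { lat: sw.lat + (row + 1) * latStep, lng: sw.lng + (col + 1) * lngStep },
      });
    }
  }
  return cells; // 36 cells for the default 6x6 split
};
```

Each returned bounds pair is then suitable for a google.maps.Rectangle as used in highlightLocationGridCells.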

Reporting Accessibility Features

Our widget enables users to report accessibility features at specific locations, creating a community-run crowdsourced database of information. The reporting system works through a modal interface where users can select features present at a location and add additional information.

When a user reports accessibility features, the system updates the UI immediately and sends the new information, as reportData, to our API. This happens in the handleSubmitFeatures function, which is invoked when the user clicks the "Submit" button in the modal. For good UX, we also show a "Thank you" message to confirm the update to the user.

const handleSubmitFeatures = async () => {
  // Show thank you message immediately for better UX
  setShowThankYouMessage(true);

  try {
    // Create report data from user selections
    const reportData = {
      wheelchair_ramp: selectedAccessibilityFeatures.includes("wheelchairRamp"),
      sound_zebra_crossing: selectedAccessibilityFeatures.includes("soundZebraCrossing"),
      // Other features...
    };
    setNearbyLocations(prev =>
      prev.map(location => {
        if (location.id === currentReportLocationId) {
          // ... merge in the newly reported features
        }
        return location;
      })
    );

    // Send to API asynchronously
    fetch(`/v1/locations/${currentReportLocationId}/reports`, {
      method: "PATCH",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(reportData)
    });
  } catch (error) {
    // ... error handling
  }
}

To work around the report bug (Bug 1), which originates from the createInfoWindowContent function (described in more depth below), we directly modify the infoWindow to display the new features immediately; this dynamically changes the HTML in the infoWindow.
When displaying location information, we render accessibility features with intuitive icons, organised in a clean list inside our infoWindow. The information is generated from the current state of the data points in the nearbyLocations array. For locations without any reported features, we show a prompt encouraging users to be the first to report.
The infoWindow is created as follows. The location information and the accessibility-feature information for a data point are read from the internal data structure (nearbyLocations) rather than fetched from our database API, to improve efficiency. This does mean that if somebody reports accessibility features while the product is in use, we will not see the updates until the page is reloaded. In the meantime, the what3words API is also invoked (as in the grid section) to provide the what3words tag for that specific location.
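The what3words lookup mentioned above uses the public v3 convert-to-3wa endpoint; the request URL can be sketched as follows (the helper name is hypothetical, and surrounding error handling is omitted):

```javascript
// Sketch of the what3words lookup for a data point's centre, using the
// public v3 convert-to-3wa endpoint.
const buildW3WRequestUrl = (lat, lng, apiKey) =>
  `https://api.what3words.com/v3/convert-to-3wa?coordinates=${lat},${lng}&key=${apiKey}`;

// const res = await fetch(buildW3WRequestUrl(center.lat, center.lng, W3W_API_KEY));
// const { words } = await res.json(); // three-word address, e.g. "filled.count.soap"
```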

const createInfoWindowContent = (location, userLocation) => {
  // Calculate the center of the location and the distance from the user...
  const center = {
    lat: (location.bottom_left_latitude + location.top_right_latitude) / 2,
    lng: (location.bottom_left_longitude + location.top_right_longitude) / 2,
  };
  // Calculate distance from user
  // ...

  // Create HTML for accessibility features
  let accessibilityFeaturesHTML = "";
  if (location.total_reports) {
    accessibilityFeaturesHTML = `
      // set up HTML...
    `;
    
    // Add icons for each reported feature
    if (location.wheelchair_ramp_reports) 
      accessibilityFeaturesHTML += `<li>♿ Wheelchair Ramp</li>`;
    // Add other features...
  } else {
    accessibilityFeaturesHTML = `<p>No features reported yet. Be the first...</p>`;
  }

  // Return the complete HTML content
  return `...`; // full HTML for the info window
}

Database Website Implementation

This section gives a brief overview of how our features are implemented on the frontend database website. At a high level, the database website follows a component-based architecture with a reactive frontend that communicates with the database via a custom API. Users can view the database in a table format, add data from a JSON file, and download the database in JSON format, with state managed for a seamless and intuitive experience.

Frameworks and APIs Used

The system's whole purpose is to let users interact with the database easily, providing functions to download data and add data to the database. That is why we chose a framework with a well-structured component system and a design-focused styling approach, ensuring a clean, modern, and user-friendly interface. Note that all required API keys should be defined in the .env.local file.

APIs

Custom Backend API

Our own API to access our database, also used to report additional accessibility features to the database.

Frameworks and UI Libraries

React

React

A JavaScript library for building user interfaces, used to create interactive UIs. The project is set up using Create React App, which provides a fast and minimal configuration environment.

UI Libraries

Tailwind CSS is used for utility-first styling, while DaisyUI provides pre-built, customizable components for a clean and accessible design.

API Functions

This section describes the API functions used for retrieving nearby locations and uploading location data.

API Function: Fetching Nearby Locations

The fetchNearbyLocations function retrieves a list of nearby locations based on the provided latitude, longitude, and radius.

export const fetchNearbyLocations = async (
  latitude: number,
  longitude: number,
  radius: number
): Promise<any> => {
  try {
    const response = await fetch(
      `${API_ENDPOINTS.NEARBY_LOCATIONS}?lat=${latitude}&lng=${longitude}&radius=${radius}`
    );
    const data = await response.json();
    return data;
  } catch (error) {
    console.error("Error fetching nearby locations:", error);
    throw error;
  }
};

The fetchNearbyLocations function is responsible for fetching a list of locations near a given point. It takes three parameters: latitude, longitude, and radius. It uses an HTTP GET request to the API_ENDPOINTS.NEARBY_LOCATIONS endpoint, which is dynamically constructed with the provided coordinates and radius. If the request is successful, the function returns the data from the response. If there's an error during the request (such as a network issue), the function logs the error and rethrows it to be handled elsewhere in the application.

API Function: Uploading Location Data

The uploadLocationData function uploads new location data to the server.

export const uploadLocationData = async (locationData: any): Promise<any> => {
  try {
    const response = await fetch(API_ENDPOINTS.LOCATIONS, {
      method: "POST",
      headers: getHeaders(),
      body: JSON.stringify(locationData),
    });
    const data = await response.json();
    return data;
  } catch (error) {
    console.error("Error uploading location data:", error);
    throw error;
  }
};

The uploadLocationData function is used to send new location data to the server. It accepts locationData as a parameter, which contains the data to be uploaded. The function sends this data through a POST request to the API_ENDPOINTS.LOCATIONS endpoint. The request includes headers obtained from the getHeaders function, which likely contains authentication or authorization information. If the request is successful, the function returns the response data. In case of an error (such as invalid data or server issues), the function logs the error and rethrows it for further handling.
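The text only notes that getHeaders likely supplies authentication information; a minimal sketch under that assumption (the header names and key source here are hypothetical):

```javascript
// Hypothetical sketch of getHeaders; the actual header names and the source
// of the credential are assumptions based on the description above.
const getHeaders = () => ({
  "Content-Type": "application/json",
  // e.g. a key read from the environment:
  // "Authorization": `Bearer ${process.env.REACT_APP_API_KEY}`,
});
```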

Database Visualisation and Filtering Functionality

This section describes how our application handles displaying and filtering location data on our view page.

Initial Data Fetching

When the component first loads, it uses the fetchNearbyLocations function to retrieve location data. The initial fetch is configured to get locations around central London (coordinates 51.5074, -0.1278) within a 10-mile radius.

useEffect(() => {
  const fetchLocations = async () => {
    try {
      setIsLoading(true);
      // Fetch locations around central London
      const data = await fetchNearbyLocations(51.5074, -0.1278, 10);
      setLocations(data.locations || []);
      setExistingLocations(data.locations || []);
    } catch (error) {
      console.error("Error fetching locations:", error);
      setError("Failed to load locations. Please try again later.");
    } finally {
      setIsLoading(false);
    }
  };

  fetchLocations();
}, []);

The fetched locations are stored in the locations state, which serves as the primary data source for the entire filtering and display mechanism. The component initializes several key filtering states:
- minReliability and maxReliability: control the reliability score range
- selectedLayers: manage which data layers are selected
- selectedResolution: track chosen resolution levels

Advanced Filtering Mechanism

The core of the component is its sophisticated filtering logic. The filteredLocations computation applies three critical filtering criteria:
- Data Layer Matching: ensures locations include at least one selected layer
- Reliability Score Filtering: constrains locations to the specified reliability range
- Resolution Level Selection: filters locations by chosen resolution levels

const filteredLocations = useMemo(() => {
  return locations.filter((location) => {
    // Filter by data layers
    const hasSelectedLayer = selectedLayers.length === 0 ||
      location.data_layers.some(layer =>
        selectedLayers.includes(layer.name)
      );

    // Filter by reliability score
    const meetsReliabilityRange =
      location.reliability_score >= minReliability &&
      location.reliability_score <= maxReliability;

    // Filter by resolution
    const hasSelectedResolution = selectedResolution.length === 0 ||
      selectedResolution.includes(location.resolution.toString());

    return hasSelectedLayer && meetsReliabilityRange && hasSelectedResolution;
  });
}, [locations, selectedLayers, minReliability, maxReliability, selectedResolution]);

The filtering provides granular control through:
- Checkboxes for selecting data layers like wheelchair_services and zebra_crossings
- Number inputs for setting minimum and maximum reliability scores
- Resolution level selection
The component dynamically updates the displayed locations based on these filter parameters, providing a responsive and interactive user experience.

Pagination and Column Customization

To manage large datasets, the component implements:
- Initial display of 50 locations
- A "Load More" button to incrementally reveal additional locations
- Dynamic column visibility controls allowing users to show/hide specific columns like coordinates, resolution, and reliability scores

// Pagination state
const [displayCount, setDisplayCount] = useState(50);

// Column visibility state
const [showCoordinates, setShowCoordinates] = useState(true);
const [showResolution, setShowResolution] = useState(true);
const [showReliability, setShowReliability] = useState(true);

// Load more function
const handleLoadMore = () => {
  setDisplayCount(prev => prev + 50);
};

// Display only the paginated subset of filtered locations
const displayedLocations = filteredLocations.slice(0, displayCount);

Downloading Database

This section describes the data download mechanism implemented in the DownloadPage component, which allows users to download location data with various filtering options.

Data Filtering Mechanism

The filterFields function acts as a data transformation method, extracting key location attributes:
- Bottom-left and top-right coordinates
- Resolution
- Reliability score
- Data layers

const filterFields = (location) => {
  return {
    bottom_left_latitude: location.bottom_left_latitude,
    bottom_left_longitude: location.bottom_left_longitude,
    top_right_latitude: location.top_right_latitude,
    top_right_longitude: location.top_right_longitude,
    resolution: location.resolution,
    reliability_score: location.reliability_score,
    data_layers: location.data_layers.map(layer => ({
      name: layer.name,
      status: layer.status
    }))
  };
};

This function ensures that only essential and relevant information is prepared for download, simplifying the dataset while maintaining its core informative value.

Flexible Download Functionality

const downloadFile = async (type = null) => {
  setIsLoading(true);
  try {
    // Fetch locations from API
    const data = await fetchNearbyLocations(51.5074, -0.1278, 10);
    let locations = data.locations || [];

    // Filter by type if specified
    if (type === "zebra_crossings") {
      locations = locations.filter(loc =>
        loc.data_layers.some(layer => layer.name === "zebra_crossings")
      );
    } else if (type === "wheelchair_services") {
      locations = locations.filter(loc =>
        loc.data_layers.some(layer => layer.name === "wheelchair_services")
      );
    }

    // Map locations to simplified format
    const simplifiedLocations = locations.map(filterFields);

    // Create downloadable JSON file
    const jsonString = JSON.stringify(simplifiedLocations, null, 2);
    const blob = new Blob([jsonString], { type: "application/json" });
    const url = URL.createObjectURL(blob);

    // Create download link and trigger download
    const a = document.createElement("a");
    a.href = url;
    a.download = `locations${type ? `_${type}` : ""}.json`;
    document.body.appendChild(a);
    a.click();
    document.body.removeChild(a);
    URL.revokeObjectURL(url);

    setDownloadSuccess(true);
  } catch (error) {
    console.error("Error downloading file:", error);
    setDownloadError("Failed to download data. Please try again.");
  } finally {
    setIsLoading(false);
  }
};

    The downloadFile function provides a sophisticated download mechanism with three key download options: 1. Full Dataset Download 2. Zebra Crossings Data Download 3. Wheelchair Services Data Download

    The download process follows these steps:

    • Fetch locations using fetchNearbyLocations
    • Filter by type, if one is specified
    • Convert the data to JSON
    • Create a downloadable Blob object
    • Generate a temporary download link
    • Trigger the file download automatically

    Data Upload Functionality

    This section describes the data upload mechanism for adding new location data to the database using the AddDataPage component.

    JSON File Upload Process

    The handleFileUpload function implements a robust JSON file parsing mechanism with multiple layers of validation.

    const handleFileUpload = (event) => {
      const file = event.target.files[0];
      if (!file) return;
    
      const reader = new FileReader();
      reader.onload = (e) => {
        try {
          const jsonData = JSON.parse(e.target.result);
          
          // Validate data structure
          if (!jsonData || !Array.isArray(jsonData) || jsonData.length === 0) {
            setError("Invalid data format. Please upload a valid JSON array.");
            return;
          }
          
          // Check if data has required fields
          const hasRequiredFields = jsonData.every(item => 
            item.bottom_left_latitude !== undefined &&
            item.bottom_left_longitude !== undefined &&
            item.top_right_latitude !== undefined &&
            item.top_right_longitude !== undefined
          );
          
          if (!hasRequiredFields) {
            setError("JSON data is missing required coordinate fields.");
            return;
          }
          
          setUploadedJSON(jsonData);
          setError("");
        } catch (error) {
          console.error("Error parsing JSON:", error);
          setError("Failed to parse JSON file. Please check the file format.");
        }
      };
      
      reader.readAsText(file);
    };

    The handleFileUpload function is triggered when a user uploads a file. It first reads the uploaded file using a FileReader and attempts to parse its content as JSON. If the data is empty, not structured as an array, or doesn't meet the expected format, an error message is displayed to guide the user to upload a valid file. Once valid, the uploaded data is set to the component's state for further processing.
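    The structural checks themselves are language-agnostic. As a sketch, here are the same rules in Python (validate_locations is a hypothetical helper, not part of the codebase):

```python
REQUIRED_FIELDS = (
    "bottom_left_latitude",
    "bottom_left_longitude",
    "top_right_latitude",
    "top_right_longitude",
)


def validate_locations(data):
    """Mirror the widget's upload checks: a non-empty list whose items
    all carry the four bounding-box coordinate fields."""
    if not isinstance(data, list) or not data:
        return False, "Invalid data format. Please upload a valid JSON array."
    if not all(all(f in item for f in REQUIRED_FIELDS) for item in data):
        return False, "JSON data is missing required coordinate fields."
    return True, ""
```

    Returning a (valid, message) pair mirrors the widget's pattern of surfacing a human-readable error next to the failed check.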

    Duplicate Location Detection

    The isDuplicate function prevents redundant data entry by comparing new locations against existing database entries.

    const isDuplicate = (newLocation) => {
      return existingLocations.some(
        existingLocation =>
          existingLocation.bottom_left_latitude === newLocation.bottom_left_latitude &&
          existingLocation.bottom_left_longitude === newLocation.bottom_left_longitude &&
          existingLocation.top_right_latitude === newLocation.top_right_latitude &&
          existingLocation.top_right_longitude === newLocation.top_right_longitude
      );
    };

    The isDuplicate function checks whether a new location already exists in the database by comparing key location coordinates: bottom-left and top-right latitude/longitude. If any location already exists with the same coordinates, the function returns true, preventing the addition of duplicate entries to the database.

    Data Upload Mechanism

    The handleUploadData function manages the entire upload workflow.

    const handleUploadData = async () => {
      if (!uploadedJSON.length) {
        setError("No data to upload. Please upload a JSON file first.");
        return;
      }
    
      setIsUploading(true);
      setError("");
    
      try {
        // Filter out duplicates
        const uniqueLocations = uploadedJSON.filter(location => !isDuplicate(location));
        
        if (uniqueLocations.length === 0) {
          setMessage("All locations already exist in the database.");
          setIsUploading(false);
          return;
        }
        
        // Format locations for API
        const formattedLocations = uniqueLocations.map(location => ({
          ...location,
          data_layers: location.data_layers || []
        }));
        
        // Upload to server
        await uploadLocationData(formattedLocations);
        
        // Update UI
        setMessage(`Successfully uploaded ${uniqueLocations.length} new locations.`);
        setUploadedJSON([]);
        
        // Refresh existing locations list
        const data = await fetchNearbyLocations(51.5074, -0.1278, 10);
        setExistingLocations(data.locations || []);
      } catch (error) {
        console.error("Error uploading data:", error);
        setError("Failed to upload data. Please try again.");
      } finally {
        setIsUploading(false);
      }
    };

    The handleUploadData function is responsible for managing the entire data upload process. It first validates whether any data is uploaded. Then, it filters out any locations that are already present in the database using the isDuplicate function. The unique locations are then formatted and uploaded to the server. After a successful upload, the component fetches the updated list of existing locations to keep the data synchronized.

    Data Clearing Mechanism

    The handleClear function provides a simple reset mechanism.

    const handleClear = () => {
      setUploadedJSON([]);
    };

    The handleClear function allows users to reset the uploaded data by clearing the state that holds the parsed JSON. This function ensures that any data temporarily stored in the component is removed, providing a fresh state for the next operation.

    Backend Implementation

    1. System Overview

    Our backend implementation is built with modern technologies and follows best practices for scalability, maintainability, and security:

    1.1 FastAPI RESTful API

    At the core of our backend is a RESTful API implemented using Python's FastAPI framework. The API is organized into multiple routers, each managing related endpoints. The main application router links these modular routers together to create a unified API.

    from app.api.routes import locations, data_layers, api_keys, users
    
    api_router = APIRouter()
    api_router.include_router(locations.router, prefix="/locations", tags=["locations"])
    api_router.include_router(
        data_layers.router, prefix="/data-layers", tags=["data_layers"]
    )
    api_router.include_router(api_keys.router, prefix="/api_keys", tags=["api_keys"])
    api_router.include_router(users.router, prefix="/users", tags=["users"])

    Each subrouter defines endpoints following RESTful conventions, specifying routes, request types, request bodies (via Pydantic schema classes), and response models. For example:

    @router.post("/", status_code=status.HTTP_201_CREATED, response_model=OutLocationSchema)
    async def create_location(
        location: InLocationSchema,
        db: AsyncSession = Depends(get_db),
        _=Depends(get_current_user),
    ) -> OutLocationSchema:
        ...

    FastAPI allows us to specify dependencies for each endpoint, such as database connections or authentication requirements. For rigorous data validation, we use Pydantic schema models:

    class InLocationSchema(BaseSchema):
        bottom_left_latitude: float
        bottom_left_longitude: float
        top_right_latitude: float
        top_right_longitude: float
        resolution: Resolution
        reliability_score: float
        layers: List[str]
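    As a sketch of the validation this buys us (simplified model: the Resolution enum is omitted and only the coordinate, score, and layer fields are kept):

```python
from typing import List

from pydantic import BaseModel, ValidationError


class LocationSketch(BaseModel):
    """Simplified stand-in for InLocationSchema (Resolution enum omitted)."""
    bottom_left_latitude: float
    bottom_left_longitude: float
    top_right_latitude: float
    top_right_longitude: float
    reliability_score: float
    layers: List[str]


# Well-formed input is accepted; numeric strings are coerced to float
loc = LocationSketch(
    bottom_left_latitude="51.5074",
    bottom_left_longitude=-0.1278,
    top_right_latitude=51.5080,
    top_right_longitude=-0.1270,
    reliability_score=0.9,
    layers=["zebra_crossings"],
)

# Missing fields are rejected with a structured error report
errors = []
try:
    LocationSketch(bottom_left_latitude=51.5, layers=[])
except ValidationError as exc:
    errors = exc.errors()
```

    FastAPI runs this validation automatically on every request body, so malformed input is rejected with a 422 response before the endpoint code ever runs.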

    1.2 SQLAlchemy ORM with PostgreSQL

    We use PostgreSQL as our relational database, with tables defined through SQLAlchemy's Object Relational Mapping (ORM). Defining tables as Python classes gives the schema a single source of truth in code and keeps database access consistent across the backend:

    class User(Base):
        __tablename__ = "users"
    
        id = Column(UUID(as_uuid=True), primary_key=True, index=True)
        name = Column(String, nullable=False, unique=False)
        created_at = Column(DateTime, default=datetime.now)  # pass the callable, not a fixed timestamp
        updated_at = Column(DateTime, default=datetime.now, onupdate=datetime.now)
        is_admin = Column(Boolean, default=False, nullable=False)
    
        user_api_keys = relationship(
            "UserApiKey", back_populates="user", cascade="all, delete-orphan"
        )

    One of the biggest benefits of ORM is automatic relationship linking, allowing easy access to related objects through code like user.user_api_keys instead of manual SQL join statements.
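    A minimal sketch of that convenience, using an in-memory SQLite database and simplified columns (integer keys instead of UUIDs):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()


class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    user_api_keys = relationship(
        "UserApiKey", back_populates="user", cascade="all, delete-orphan"
    )


class UserApiKey(Base):
    __tablename__ = "user_api_keys"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    user_id = Column(Integer, ForeignKey("users.id"))
    user = relationship("User", back_populates="user_api_keys")


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    user = User(name="alice", user_api_keys=[UserApiKey(name="default")])
    session.add(user)
    session.commit()
    # Related rows come back as plain Python objects -- no JOIN by hand
    key_names = [k.name for k in session.get(User, user.id).user_api_keys]
```

    The cascade option also means deleting a User deletes its orphaned API keys, matching the cascade="all, delete-orphan" setting in the model above.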

    1.3 Alembic for Database Migrations

    For backwards compatibility and change tracking, we use Alembic for database migrations. Through commands like alembic revision --autogenerate -m "example message", any changes to the ORM table classes are detected and migration files are created with upgrade() and downgrade() functions:

    """
    Revision ID: cceaedb570c8
    Revises: e6c91c1ba86f
    Create Date: 2025-01-15 16:26:02.451813
    
    """
    
    from typing import Sequence, Union
    
    from alembic import op
    import sqlalchemy as sa
    
    
    # revision identifiers, used by Alembic.
    revision: str = "cceaedb570c8"
    down_revision: Union[str, None] = "e6c91c1ba86f"
    branch_labels: Union[str, Sequence[str], None] = None
    depends_on: Union[str, Sequence[str], None] = None
    
    
    def upgrade() -> None:
        # ### commands auto generated by Alembic - please adjust! ###
        op.add_column(
            "location", sa.Column("reliability_score", sa.Float(), nullable=False)
        )
        # ### end Alembic commands ###
    
    
    def downgrade() -> None:
        # ### commands auto generated by Alembic - please adjust! ###
        op.drop_column("location", "reliability_score")
        # ### end Alembic commands ###

    The command alembic upgrade head is run on production and local databases to bring the schema up to date with the migration history. The database also has a separate table called alembic_version, which stores the revision the schema is currently on.

    1.4 Repository Pattern for Data Access

    We use the Repository design pattern to manage data access from the database. Each table has its own repository class, which defines operations that can be performed on that table and isolates database interaction from the API:

    class DataLayerRepository(
        BaseRepository[InDataLayerSchema, DataLayerSchema, DataLayer]
    ):
        @property
        def _in_schema(self) -> Type[InDataLayerSchema]:
            return InDataLayerSchema
    
        @property
        def _schema(self) -> Type[DataLayerSchema]:
            return DataLayerSchema
    
        @property
        def _table(self) -> Type[DataLayer]:
            return DataLayer
    
        async def get_by_name(self, entry_name: str) -> DataLayerSchema:
            statement = select(self._table).filter_by(name=entry_name)
            result = await self._db_session.execute(statement)
            entry = result.scalars().first()
            if not entry:
                raise DoesNotExist(
                    f"{self._table.__name__} does not exist"
                )
    
            return self._schema.from_orm(entry)

    Here's an example of how it's used in the API code:

    layers_repository = DataLayerRepository(db)        
    data_layer = await layers_repository.get_by_name(data_layer_name)

    This design pattern makes the code more modular and scalable while minimizing the risk of accidental database changes.

    1.5 Docker Containerization for Deployment

    Our backend code is containerized using Docker, with a Dockerfile specifying how to construct the container. Docker Compose is used to manage multiple containers. Containerization isolates the environment in which the backend runs, making the code system-independent and allowing for easy deployment on Azure via their container registry system.

    2. Key Technical Features

    2.1 Geospatial Data Management

    A key feature of our backend is geospatial data management. We've defined our own system for converting longitude and latitude coordinates into grid systems at two resolutions (3 m × 3 m and 0.5 m × 0.5 m):

    def convert_to_3_grid(latitude, longitude):
        lat_in_meters = latitude * 111320  # 1 degree latitude = 111.32 km
        lon_in_meters = (
            longitude * 40075000 * math.cos(math.radians(latitude)) / 360
        )  # full faith in this https://stackoverflow.com/a/39540339
    
        grid_x = int(lat_in_meters / 3)
        grid_y = int(lon_in_meters / 3)
        return grid_x, grid_y
    
    
    def convert_to_05_grid(latitude, longitude):
        lat_in_meters = latitude * 111320  # 1 degree latitude ~ 111.32 km
        lon_in_meters = (
            longitude * 40075000 * math.cos(math.radians(latitude)) / 360
        )  # full faith in this https://stackoverflow.com/a/39540339
    
        grid_x = int(lat_in_meters / 0.5)
        grid_y = int(lon_in_meters / 0.5)
        return grid_x, grid_y

    The backend also identifies all grid cells that a location falls under and calculates midpoints:

    def calculate_midpoint(bottom_left, top_right):
        mid_latitude = (bottom_left[0] + top_right[0]) / 2
        mid_longitude = (bottom_left[1] + top_right[1]) / 2
        return mid_latitude, mid_longitude
    
    
    def get_covered_grid_cells(bottom_left, top_right, resolution):
        if resolution == 3.0:  # this corresponds to 3.0x3.0
            convert_to_grid = convert_to_3_grid
        elif resolution == 0.5:
            convert_to_grid = convert_to_05_grid
        else:
            raise ValueError("Invalid resolution")
    
        bottom_left_grid = convert_to_grid(bottom_left[0], bottom_left[1])
        top_right_grid = convert_to_grid(top_right[0], top_right[1])
    
        covered_cells = []
        for x in range(bottom_left_grid[0], top_right_grid[0] + 1):
            for y in range(bottom_left_grid[1], top_right_grid[1] + 1):
                covered_cells.append({"grid_x": x, "grid_y": y})
    
        return covered_cells
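    A worked example of the grid logic, with the 3 m conversion reproduced so the snippet stands alone (the resolution argument is dropped for brevity):

```python
import math


def convert_to_3_grid(latitude, longitude):
    # Same approximation as the backend: degrees -> metres, then 3 m cells
    lat_in_meters = latitude * 111320
    lon_in_meters = longitude * 40075000 * math.cos(math.radians(latitude)) / 360
    return int(lat_in_meters / 3), int(lon_in_meters / 3)


def get_covered_grid_cells(bottom_left, top_right):
    bl = convert_to_3_grid(*bottom_left)
    tr = convert_to_3_grid(*top_right)
    return [
        {"grid_x": x, "grid_y": y}
        for x in range(bl[0], tr[0] + 1)
        for y in range(bl[1], tr[1] + 1)
    ]


# A roughly 9 m x 9 m box near central London covers a small block of 3 m cells
cells = get_covered_grid_cells((51.50740, -0.12780), (51.50748, -0.12767))
```

    Each location is thus mapped to every grid cell its bounding box touches, which is what lets a data point be found from any of the cells it overlaps.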

    2.2 Proximity Search Algorithm

    To optimize API performance, we implemented a proximity search algorithm that fetches only locations within a specified radius of the user:

    async def get_nearby_locations(
            self, latitude: float, longitude: float, radius: float
        ):
            # Convert radius from miles to degrees (approximation)
            radius_in_degrees = radius / 69.0
    
            # Calculate bounding box for the search radius
            min_lat = latitude - radius_in_degrees
            max_lat = latitude + radius_in_degrees
            min_lon = longitude - radius_in_degrees
            max_lon = longitude + radius_in_degrees
    
            query = (
                select(Location)
                .options(
                    joinedload(Location.location_layers).joinedload(
                        LocationLayer.data_layer
                    )
                )
                .where(
                    or_(
                        and_(
                            Location.bottom_left_latitude >= min_lat,
                            Location.bottom_left_latitude <= max_lat,
                            Location.bottom_left_longitude >= min_lon,
                            Location.bottom_left_longitude <= max_lon,
                        ),
                        and_(
                            Location.top_right_latitude >= min_lat,
                            Location.top_right_latitude <= max_lat,
                            Location.top_right_longitude >= min_lon,
                            Location.top_right_longitude <= max_lon,
                        ),
                        and_(
                            Location.bottom_left_latitude <= min_lat,
                            Location.top_right_latitude >= max_lat,
                            Location.bottom_left_longitude <= min_lon,
                            Location.top_right_longitude >= max_lon,
                        ),
                    )
                )
            )
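    One caveat of the radius / 69.0 conversion: a degree of longitude spans fewer miles away from the equator, so at London's latitude the fixed conversion makes the box narrower than the requested radius in the east-west direction. A latitude-corrected bounding box could be sketched as follows (bounding_box is illustrative, not the code the backend currently uses):

```python
import math


def bounding_box(latitude, longitude, radius_miles):
    """Latitude-aware bounding box: one degree of longitude spans
    cos(latitude) times fewer miles than one degree of latitude."""
    lat_delta = radius_miles / 69.0
    lon_delta = radius_miles / (69.0 * math.cos(math.radians(latitude)))
    return (latitude - lat_delta, latitude + lat_delta,
            longitude - lon_delta, longitude + lon_delta)


min_lat, max_lat, min_lon, max_lon = bounding_box(51.5074, -0.1278, 10)
```

    At 51.5° N the longitude span comes out roughly 1.6 times wider in degrees than the latitude span, even though both cover the same 10-mile radius on the ground.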

    3. API Key Authentication System

    Our backend includes a robust security layer to protect endpoints from unauthorized access:

    3.1 Admin Endpoints

    We have admin-only endpoints for creating users and issuing API keys:

    @router.post("/admin/users", response_model=UserSchema)
    async def admin_create_user(
        name: str = Body(..., embed=True),
        is_admin: bool = Body(False, embed=True),
        db: AsyncSession = Depends(get_db),
        _: Any = Depends(admin_required) 
    ):
        """Admin only endpoint to create a new user"""
        
        user_repo = UserRepository(db)
        
        now = datetime.now()
        user_data = {
            "name": name,
            "is_admin": is_admin,
            "created_at": now,
            "updated_at": now
        }
        
        new_user = await user_repo.create(user_data)
        return new_user


    @router.post("/admin/users/{user_id}/api-keys", response_model=Dict[str, Any])
    async def admin_create_api_key(
        user_id: UUID,
        name: str = Body("Default API Key", embed=True),
        db: AsyncSession = Depends(get_db),
        _: Any = Depends(admin_required)  # verify admin access
    ):
        """admin only endpoint to create an API key for any user"""
        
        user_repo = UserRepository(db)
        try: 
            user = await user_repo.get_by_id(user_id)
        except DoesNotExist: 
            raise HTTPException(status_code=404, detail="User not found")
        
        # generate API key
        api_key = generate_api_key()
        hashed_key = hash_api_key(api_key)
        
        # add it to db
        api_key_repo = UserApiKeyRepository(db)
        api_key_data = {
            "name": name, 
            "user_id": user_id,
            "hashed_key": hashed_key,
            "is_active": True
        }
        
        created_key = await api_key_repo.create(api_key_data)
        
        # the only time the key will be returned
        return {
            "id": created_key.id,
            "name": created_key.name,
            "user_id": created_key.user_id,
            "api_key": api_key,  # the unhashed key - only shown once
            "created_at": created_key.created_at
        }

    3.2 Authentication Dependencies

    All endpoints use authentication dependencies to verify API keys:

    API_KEY_HEADER = APIKeyHeader(name="X-API-Key", auto_error=False) 
    
    async def get_current_user_id(
        api_key: Optional[str] = Depends(API_KEY_HEADER),
        db: AsyncSession = Depends(get_db)
    ) -> UUID:
        """
        Dependency to validate API key and return the user_id
        """
        if not api_key:
            raise HTTPException(
                status_code=status.HTTP_401_UNAUTHORIZED,
                detail="Missing API key",
                headers={"WWW-Authenticate": "ApiKey"},
            )
        
        user_id = await verify_api_key(db, api_key)
        
        if not user_id:
            raise HTTPException(
                status_code=status.HTTP_401_UNAUTHORIZED,
                detail="Invalid API key",
                headers={"WWW-Authenticate": "ApiKey"},
            )
        
        return user_id
    
    async def get_current_user(
        user_id: UUID = Depends(get_current_user_id),
        db: AsyncSession = Depends(get_db)
    ):
        """Get the current user based on API key authentication"""
        user_repo = UserRepository(db)
        user = await user_repo.get_by_id(user_id)
        
        if not user:
            raise HTTPException(
                status_code=status.HTTP_401_UNAUTHORIZED,
                detail="User not found",
                headers={"WWW-Authenticate": "ApiKey"},
            )
        
        return user

    The admin_required dependency adds an additional security layer:

    async def admin_required(user = Depends(get_current_user)):
        if not user.is_admin:
            raise HTTPException(
                status_code=status.HTTP_403_FORBIDDEN,
                detail="Admin privileges required",
            )
        return user

    3.3 API Key Service

    API key operations are isolated in a dedicated service class:

    class ApiKeyService:
        def __init__(self, db_session: AsyncSession):
            self.repository = UserApiKeyRepository(db_session)
        
        async def create_api_key(self, user_id: UUID, name: str) -> Dict[str, Any]:
            api_key = generate_api_key()
            
            hashed_key = hash_api_key(api_key)
            
            api_key_data = ApiKeyCreate(
                id=uuid.uuid4(),
                name=name,
                user_id=user_id,
                hashed_key=hashed_key
            )
            
            # Repository methods are coroutines, so they must be awaited
            db_api_key = await self.repository.create(obj_in=api_key_data)
            
            return {
                "id": db_api_key.id,
                "name": db_api_key.name,
                "api_key": api_key,  # actual key - shown only once
                "created_at": db_api_key.created_at
            }
        
        async def list_user_api_keys(self, user_id: UUID) -> list:
            return await self.repository.get_by_user_id(user_id=user_id)
        
        async def revoke_api_key(self, key_id: UUID, user_id: UUID) -> bool:
            key = await self.repository.get(id=key_id)
            
            if not key or key.user_id != user_id:
                return False
                
            await self.repository.revoke_key(key_id)
            return True

    For security, API keys are hashed before storage and verified using:

    async def verify_api_key(db: AsyncSession, api_key: str) -> Optional[UUID]:
        api_key_repo = UserApiKeyRepository(db)
        
        hashed_key = hash_api_key(api_key)
        
        api_key_obj = await api_key_repo.get_by_hashed_key(hashed_key=hashed_key)
        
        if not api_key_obj or not api_key_obj.is_active:
            return None
            
        return api_key_obj.user_id
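    The helpers generate_api_key and hash_api_key are referenced above but not shown. A plausible stdlib-only sketch (the real implementation may differ, e.g. it may use the project's passlib/bcrypt dependencies; a deterministic hash is assumed here because get_by_hashed_key looks keys up by their hash, which a salted hash would not allow):

```python
import hashlib
import secrets


def generate_api_key() -> str:
    # 32 random bytes, URL-safe encoded: unguessable and copy-pasteable
    return secrets.token_urlsafe(32)


def hash_api_key(api_key: str) -> str:
    # Deterministic hash, so the stored value can be looked up directly;
    # the plaintext key itself is never persisted
    return hashlib.sha256(api_key.encode()).hexdigest()


key = generate_api_key()
hashed = hash_api_key(key)
```

    Because only the hash is stored, a database leak does not expose usable API keys; the trade-off of a deterministic hash is that it enables the direct hashed-key lookup the verification code relies on.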

    4. Database Design

    To efficiently handle the many-to-many relationship between data layers and locations, we created a separate junction table called location_layers. This table stores relationships between location entities and data layer entities with just two foreign key fields: location_id and data_layer_id.

    We added two additional fields to this table:

    • status: An enum with possible values temporary and permanent
    • expires_at: For temporary data points like potholes or construction
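    A sketch of how that junction table might look as an ORM model (column names are taken from the description above; integer keys stand in for whatever key type the real tables use):

```python
import enum

from sqlalchemy import Column, DateTime, Enum, ForeignKey, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class LayerStatus(enum.Enum):
    temporary = "temporary"
    permanent = "permanent"


class LocationLayer(Base):
    __tablename__ = "location_layers"
    id = Column(Integer, primary_key=True)
    location_id = Column(Integer, ForeignKey("locations.id"), nullable=False)
    data_layer_id = Column(Integer, ForeignKey("data_layers.id"), nullable=False)
    status = Column(Enum(LayerStatus), default=LayerStatus.permanent)
    expires_at = Column(DateTime, nullable=True)  # e.g. potholes, construction
```

    Keeping status and expires_at on the junction row rather than on the location means the same physical spot can be permanent in one layer and temporary in another.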

    5. Miscellaneous

    5.1 Makefile

    We use a Makefile to streamline development workflows:

    backend-local: down remove-volume
    	ENV_FILE=.env.local docker-compose up --build -d
    	docker-compose exec app alembic upgrade head

    5.2 Poetry for Dependency Management

    We use Poetry to manage package dependencies in an isolated virtual environment:

    [tool.poetry.dependencies]
    python = "^3.13"
    fastapi = "^0.115.6"
    SQLAlchemy = "^2.0.36"
    uvicorn = "^0.34.0"
    asyncpg = "^0.30.0"
    alembic = "^1.14.0"
    psycopg2 = "^2.9.10"
    pydantic-settings = "^2.7.0"
    greenlet = "^3.1.1"
    fastapi-cors = "^0.0.6"
    pytest = "^8.3.5"
    pytest-asyncio = "^0.25.3"
    gunicorn = "^23.0.0"
    passlib = "^1.7.4"
    bcrypt = "^4.3.0"

    5.3 Configuration Management

    We use a configuration system to handle different environments (production, local):

    class GlobalConfig(BaseSettings):
        TITLE: str = "Data layers for accessibility"
        DESCRIPTION: str = (
            "This project will provide various outdoor object data layers for accessability purposes."
        )
    
        ENVIRONMENT: EnvironmentEnum
        DEBUG: bool = False
        TESTING: bool = False
        TIMEZONE: str = "UTC"
    
        DATABASE_URL: Optional[str] = (
            "postgresql://postgres:postgres@127.0.0.1:5432/postgres"
        )
        DB_NAME: str = "Data Layers for Accessibility"
        DB_ECHO_LOG: bool = False
    
        @property
        def async_database_url(self) -> Optional[str]:
            return (
                self.DATABASE_URL.replace("postgresql://", "postgresql+asyncpg://")
                if self.DATABASE_URL
                else self.DATABASE_URL
            )
    
        # Api V1 prefix
        API_V1_STR: str = "/v1"
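
    The driver swap in async_database_url is a plain prefix replacement. A stdlib-only sketch of the same property (ConfigSketch is illustrative; the real class is the pydantic-settings GlobalConfig above):

```python
class ConfigSketch:
    """Stdlib-only stand-in for GlobalConfig's URL handling."""

    def __init__(self, database_url=None):
        self.DATABASE_URL = database_url

    @property
    def async_database_url(self):
        # Swap in the asyncpg driver so SQLAlchemy's async engine can use it
        return (
            self.DATABASE_URL.replace("postgresql://", "postgresql+asyncpg://")
            if self.DATABASE_URL
            else self.DATABASE_URL
        )


cfg = ConfigSketch("postgresql://postgres:postgres@127.0.0.1:5432/postgres")
```

    Keeping one canonical DATABASE_URL and deriving the async variant avoids the two settings drifting apart between environments.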