This section gives a brief overview of the implementation of our features on the frontend database website. At a high level, the website follows a component-based architecture with a reactive frontend that communicates with the database via a custom API. It lets users view the database in a table format, add data by uploading a JSON file, and download the database in JSON format, aiming for a seamless and intuitive user experience.
The system's purpose is to let users interact with the database
easily, providing functions to add data to the database and download
it again. That is why we chose a framework with a well-structured
component system and a design-focused styling approach, ensuring a
clean, modern, and user-friendly interface. Note that all required API
keys should be defined in the .env.local file.
Our own API provides access to our database and is also used to report additional accessibility features to it.
A JavaScript library for building user interfaces, used to create interactive UIs. The project is set up using Create React App, which provides a fast and minimal configuration environment.
Tailwind CSS is used for utility-first styling, while DaisyUI provides pre-built, customizable components for a clean and accessible design.
This section describes the API functions used for retrieving nearby locations and uploading location data.
The fetchNearbyLocations function retrieves a
list of nearby locations based on the provided latitude, longitude,
and radius.
export const fetchNearbyLocations = async (
  latitude: number,
  longitude: number,
  radius: number
): Promise<any> => {
  try {
    const response = await fetch(
      `${API_ENDPOINTS.NEARBY_LOCATIONS}?lat=${latitude}&lng=${longitude}&radius=${radius}`
    );
    const data = await response.json();
    return data;
  } catch (error) {
    console.error("Error fetching nearby locations:", error);
    throw error;
  }
};
The fetchNearbyLocations function is
responsible for fetching a list of locations near a given point. It
takes three parameters: latitude,
longitude, and
radius. It uses an HTTP
GET request to the
API_ENDPOINTS.NEARBY_LOCATIONS endpoint,
which is dynamically constructed with the provided coordinates and
radius. If the request is successful, the function returns the data
from the response. If there's an error during the request (such as a
network issue), the function logs the error and rethrows it to be
handled elsewhere in the application.
The uploadLocationData function uploads new
location data to the server.
export const uploadLocationData = async (locationData: any): Promise<any> => {
  try {
    const response = await fetch(API_ENDPOINTS.LOCATIONS, {
      method: "POST",
      headers: getHeaders(),
      body: JSON.stringify(locationData),
    });
    const data = await response.json();
    return data;
  } catch (error) {
    console.error("Error uploading location data:", error);
    throw error;
  }
};
The uploadLocationData function is used to
send new location data to the server. It accepts
locationData as a parameter, which contains
the data to be uploaded. The function sends this data through a
POST request to the
API_ENDPOINTS.LOCATIONS endpoint. The request
includes headers obtained from the getHeaders
function, which likely contains authentication or authorization
information. If the request is successful, the function returns the
response data. In case of an error (such as invalid data or server
issues), the function logs the error and rethrows it for further
handling.
This section describes how our application handles displaying and filtering location data on our view page.
When the component first loads, it uses the
fetchNearbyLocations function to retrieve
location data. The initial fetch is configured to get locations around
central London (coordinates 51.5074, -0.1278) within a 10-mile radius.
useEffect(() => {
  const fetchLocations = async () => {
    try {
      setIsLoading(true);
      // Fetch locations around central London
      const data = await fetchNearbyLocations(51.5074, -0.1278, 10);
      setLocations(data.locations || []);
      setExistingLocations(data.locations || []);
    } catch (error) {
      console.error("Error fetching locations:", error);
      setError("Failed to load locations. Please try again later.");
    } finally {
      setIsLoading(false);
    }
  };
  fetchLocations();
}, []);
The fetched locations are stored in the locations state, which serves
as the primary data source for the entire filtering and display
mechanism. The component initializes several key filtering states:
- minReliability and maxReliability: control the reliability score range
- selectedLayers: manages which data layers are selected
- selectedResolution: tracks chosen resolution levels
The core of the component is its sophisticated filtering logic. The
filteredLocations computation applies three critical filtering criteria:
- Data Layer Matching: ensures locations include at least one selected layer
- Reliability Score Filtering: constrains locations to the specified reliability range
- Resolution Level Selection: filters locations by chosen resolution levels
const filteredLocations = useMemo(() => {
  return locations.filter((location) => {
    // Filter by data layers
    const hasSelectedLayer =
      selectedLayers.length === 0 ||
      location.data_layers.some((layer) =>
        selectedLayers.includes(layer.name)
      );
    // Filter by reliability score
    const meetsReliabilityRange =
      location.reliability_score >= minReliability &&
      location.reliability_score <= maxReliability;
    // Filter by resolution
    const hasSelectedResolution =
      selectedResolution.length === 0 ||
      selectedResolution.includes(location.resolution.toString());
    return hasSelectedLayer && meetsReliabilityRange && hasSelectedResolution;
  });
}, [locations, selectedLayers, minReliability, maxReliability, selectedResolution]);
The filtering provides granular control through:
- Checkboxes for selecting data layers like wheelchair_services and zebra_crossings
- Number inputs for setting minimum and maximum reliability scores
- Resolution level selection
The component dynamically updates the displayed locations based on
these filter parameters, providing a responsive and interactive user
experience.
To manage large datasets, the component implements:
- Initial display of 50 locations
- A "Load More" button to incrementally reveal additional locations
- Dynamic column visibility controls allowing users to show/hide specific columns like coordinates, resolution, and reliability scores
// Pagination state
const [displayCount, setDisplayCount] = useState(50);
// Column visibility state
const [showCoordinates, setShowCoordinates] = useState(true);
const [showResolution, setShowResolution] = useState(true);
const [showReliability, setShowReliability] = useState(true);
// Load more function
const handleLoadMore = () => {
  setDisplayCount((prev) => prev + 50);
};
// Display only the paginated subset of filtered locations
const displayedLocations = filteredLocations.slice(0, displayCount);
This section describes the data download mechanism implemented in the DownloadPage component, which allows users to download location data with various filtering options.
The filterFields function acts as a data transformation method,
extracting key location attributes:
- Bottom-left and top-right coordinates
- Resolution
- Reliability score
- Data layers
const filterFields = (location) => {
  return {
    bottom_left_latitude: location.bottom_left_latitude,
    bottom_left_longitude: location.bottom_left_longitude,
    top_right_latitude: location.top_right_latitude,
    top_right_longitude: location.top_right_longitude,
    resolution: location.resolution,
    reliability_score: location.reliability_score,
    data_layers: location.data_layers.map((layer) => ({
      name: layer.name,
      status: layer.status,
    })),
  };
};
This function ensures that only essential and relevant information is prepared for download, simplifying the dataset while maintaining its core informative value.
const downloadFile = async (type = null) => {
  setIsLoading(true);
  try {
    // Fetch locations from the API
    const data = await fetchNearbyLocations(51.5074, -0.1278, 10);
    let locations = data.locations || [];
    // Filter by type if specified
    if (type === "zebra_crossings") {
      locations = locations.filter((loc) =>
        loc.data_layers.some((layer) => layer.name === "zebra_crossings")
      );
    } else if (type === "wheelchair_services") {
      locations = locations.filter((loc) =>
        loc.data_layers.some((layer) => layer.name === "wheelchair_services")
      );
    }
    // Map locations to the simplified format
    const simplifiedLocations = locations.map(filterFields);
    // Create a downloadable JSON file
    const jsonString = JSON.stringify(simplifiedLocations, null, 2);
    const blob = new Blob([jsonString], { type: "application/json" });
    const url = URL.createObjectURL(blob);
    // Create a download link and trigger the download
    const a = document.createElement("a");
    a.href = url;
    a.download = `locations${type ? `_${type}` : ""}.json`;
    document.body.appendChild(a);
    a.click();
    document.body.removeChild(a);
    URL.revokeObjectURL(url);
    setDownloadSuccess(true);
  } catch (error) {
    console.error("Error downloading file:", error);
    setDownloadError("Failed to download data. Please try again.");
  } finally {
    setIsLoading(false);
  }
};
The downloadFile function provides a sophisticated download mechanism
with three key download options:
1. Full Dataset Download
2. Zebra Crossings Data Download
3. Wheelchair Services Data Download
The download process follows these critical steps:
- Fetch locations using fetchNearbyLocations
- Filter data based on the selected type (if specified)
- Convert data to JSON format
- Create a downloadable Blob object
- Generate a temporary download link
- Trigger automatic file download
This section describes the comprehensive data upload mechanism for adding new location data to the database using the AddDataPage component.
The handleFileUpload function implements a
robust JSON file parsing mechanism with multiple layers of validation.
const handleFileUpload = (event) => {
  const file = event.target.files[0];
  if (!file) return;
  const reader = new FileReader();
  reader.onload = (e) => {
    try {
      const jsonData = JSON.parse(e.target.result);
      // Validate the data structure
      if (!jsonData || !Array.isArray(jsonData) || jsonData.length === 0) {
        setError("Invalid data format. Please upload a valid JSON array.");
        return;
      }
      // Check that the data has the required fields
      const hasRequiredFields = jsonData.every(
        (item) =>
          item.bottom_left_latitude !== undefined &&
          item.bottom_left_longitude !== undefined &&
          item.top_right_latitude !== undefined &&
          item.top_right_longitude !== undefined
      );
      if (!hasRequiredFields) {
        setError("JSON data is missing required coordinate fields.");
        return;
      }
      setUploadedJSON(jsonData);
      setError("");
    } catch (error) {
      console.error("Error parsing JSON:", error);
      setError("Failed to parse JSON file. Please check the file format.");
    }
  };
  reader.readAsText(file);
};
The handleFileUpload function is triggered
when a user uploads a file. It first reads the uploaded file using a
FileReader and attempts to parse its content
as JSON. If the data is empty, not structured as an array, or doesn't
meet the expected format, an error message is displayed to guide the
user to upload a valid file. Once valid, the uploaded data is set to
the component's state for further processing.
The isDuplicate function prevents redundant
data entry by comparing new locations against existing database
entries.
const isDuplicate = (newLocation) => {
  return existingLocations.some(
    (existingLocation) =>
      existingLocation.bottom_left_latitude === newLocation.bottom_left_latitude &&
      existingLocation.bottom_left_longitude === newLocation.bottom_left_longitude &&
      existingLocation.top_right_latitude === newLocation.top_right_latitude &&
      existingLocation.top_right_longitude === newLocation.top_right_longitude
  );
};
The isDuplicate function checks whether a new
location already exists in the database by comparing key location
coordinates: bottom-left and top-right latitude/longitude. If any
location already exists with the same coordinates, the function
returns true, preventing the addition of
duplicate entries to the database.
The handleUploadData function manages the
entire upload workflow.
const handleUploadData = async () => {
  if (!uploadedJSON.length) {
    setError("No data to upload. Please upload a JSON file first.");
    return;
  }
  setIsUploading(true);
  setError("");
  try {
    // Filter out duplicates
    const uniqueLocations = uploadedJSON.filter((location) => !isDuplicate(location));
    if (uniqueLocations.length === 0) {
      setMessage("All locations already exist in the database.");
      setIsUploading(false);
      return;
    }
    // Format locations for the API
    const formattedLocations = uniqueLocations.map((location) => ({
      ...location,
      data_layers: location.data_layers || [],
    }));
    // Upload to the server
    await uploadLocationData(formattedLocations);
    // Update the UI
    setMessage(`Successfully uploaded ${uniqueLocations.length} new locations.`);
    setUploadedJSON([]);
    // Refresh the existing locations list
    const data = await fetchNearbyLocations(51.5074, -0.1278, 10);
    setExistingLocations(data.locations || []);
  } catch (error) {
    console.error("Error uploading data:", error);
    setError("Failed to upload data. Please try again.");
  } finally {
    setIsUploading(false);
  }
};
The handleUploadData function is responsible
for managing the entire data upload process. It first validates
whether any data is uploaded. Then, it filters out any locations that
are already present in the database using the
isDuplicate function. The unique locations
are then formatted and uploaded to the server. After a successful
upload, the component fetches the updated list of existing locations
to keep the data synchronized.
The handleClear function provides a simple
reset mechanism.
const handleClear = () => {
  setUploadedJSON([]);
};
The handleClear function allows users to
reset the uploaded data by clearing the state that holds the parsed
JSON. This function ensures that any data temporarily stored in the
component is removed, providing a fresh state for the next operation.
Our backend implementation is built with modern technologies and follows best practices for scalability, maintainability, and security:
At the core of our backend is a RESTful API implemented using Python's FastAPI framework. The API is organized into multiple routers, each managing related endpoints. The main application router links these modular routers together to create a unified API.
from fastapi import APIRouter
from app.api.routes import locations, data_layers, api_keys, users

api_router = APIRouter()
api_router.include_router(locations.router, prefix="/locations", tags=["locations"])
api_router.include_router(
    data_layers.router, prefix="/data-layers", tags=["data_layers"]
)
api_router.include_router(api_keys.router, prefix="/api_keys", tags=["api_keys"])
api_router.include_router(users.router, prefix="/users", tags=["users"])
Each subrouter defines endpoints following RESTful conventions, specifying routes, request types, request bodies (via Pydantic schema classes), and response models. For example:
@router.post("/", status_code=status.HTTP_201_CREATED, response_model=OutLocationSchema)
async def create_location(
    location: InLocationSchema,
    db: AsyncSession = Depends(get_db),
    _=Depends(get_current_user),
) -> OutLocationSchema:
    ...
FastAPI allows us to specify dependencies for each endpoint, such as database connections or authentication requirements. For rigorous data validation, we use Pydantic schema models:
class InLocationSchema(BaseSchema):
    bottom_left_latitude: float
    bottom_left_longitude: float
    top_right_latitude: float
    top_right_longitude: float
    resolution: Resolution
    reliability_score: float
    layers: List[str]
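Conceptually, the validation this schema triggers resembles the following framework-free sketch (illustrative only, not the project's code): payloads with missing or mistyped fields are rejected before the endpoint body runs.

```python
# Framework-free sketch of the checks Pydantic performs for InLocationSchema.
# Field names mirror the schema above; this helper itself is hypothetical.
REQUIRED_FLOATS = (
    "bottom_left_latitude", "bottom_left_longitude",
    "top_right_latitude", "top_right_longitude",
    "reliability_score",
)

def validate_in_location(payload: dict) -> dict:
    for field in REQUIRED_FLOATS:
        if field not in payload:
            raise ValueError(f"{field} is required")
        if not isinstance(payload[field], (int, float)):
            raise ValueError(f"{field} must be a number")
    if not isinstance(payload.get("layers"), list):
        raise ValueError("layers must be a list")
    return payload
```

In FastAPI, a failed validation like this is returned to the client automatically as a 422 response, so endpoint code only ever sees well-formed data.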
We use PostgreSQL as our relational database, with tables defined through SQLAlchemy's Object Relational Mapping (ORM). This gives the database a strictly defined schema, which is better suited to backend development:
class User(Base):
    __tablename__ = "users"

    id = Column(UUID(as_uuid=True), primary_key=True, index=True)
    name = Column(String, nullable=False, unique=False)
    # Pass the callable (not datetime.now()) so the timestamp is
    # evaluated per insert rather than once at import time.
    created_at = Column(DateTime, default=datetime.now)
    updated_at = Column(DateTime, default=datetime.now)
    is_admin = Column(Boolean, default=False, nullable=False)
    user_api_keys = relationship(
        "UserApiKey", back_populates="user", cascade="all, delete-orphan"
    )
One of the biggest benefits of ORM is automatic relationship linking, allowing easy access to related objects through code like user.user_api_keys instead of manual SQL join statements.
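To see what the ORM saves us, the snippet below writes out the manual join that user.user_api_keys replaces, using the standard library's sqlite3 module with deliberately simplified table shapes (the real tables have more columns):

```python
import sqlite3

# Simplified stand-ins for the users and user_api_keys tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE user_api_keys (
        id TEXT PRIMARY KEY,
        user_id TEXT REFERENCES users(id),
        name TEXT
    );
    INSERT INTO users VALUES ('u1', 'alice');
    INSERT INTO user_api_keys VALUES ('k1', 'u1', 'default'), ('k2', 'u1', 'ci');
""")

# Without an ORM, fetching a user's keys means writing this join by hand;
# with the relationship() declared above it is just `user.user_api_keys`.
rows = conn.execute(
    """
    SELECT k.id, k.name
    FROM user_api_keys k
    JOIN users u ON k.user_id = u.id
    WHERE u.id = ?
    """,
    ("u1",),
).fetchall()
```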
For change tracking and reversible schema updates, we use Alembic for database migrations. Through commands like alembic revision --autogenerate -m "example message", any changes to ORM table classes are detected and migration files are created with upgrade() and downgrade() functions:
"""
Revision ID: cceaedb570c8
Revises: e6c91c1ba86f
Create Date: 2025-01-15 16:26:02.451813
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision: str = "cceaedb570c8"
down_revision: Union[str, None] = "e6c91c1ba86f"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.add_column(
"location", sa.Column("reliability_score", sa.Float(), nullable=False)
)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.drop_column("location", "reliability_score")
# ### end Alembic commands ###
The command alembic upgrade head is run on production and local databases to ensure the current schema is up to date with the migration history. The database also has a separate table called alembic_version, which records the revision the schema is currently at.
We use the Repository design pattern to manage data access from the database. Each table has its own repository class, which defines operations that can be performed on that table and isolates database interaction from the API:
class DataLayerRepository(
    BaseRepository[InDataLayerSchema, DataLayerSchema, DataLayer]
):
    @property
    def _in_schema(self) -> Type[InDataLayerSchema]:
        return InDataLayerSchema

    @property
    def _schema(self) -> Type[DataLayerSchema]:
        return DataLayerSchema

    @property
    def _table(self) -> Type[DataLayer]:
        return DataLayer

    async def get_by_name(self, entry_name: str) -> DataLayerSchema:
        statement = select(self._table).filter_by(name=entry_name)
        result = await self._db_session.execute(statement)
        entry = result.scalars().first()
        if not entry:
            raise DoesNotExist(f"{self._table.__name__} does not exist")
        return self._schema.from_orm(entry)
Here's an example of how it's used in the API code:
layers_repository = DataLayerRepository(db)
data_layer = await layers_repository.get_by_name(data_layer_name)
This design pattern makes the code more modular and scalable while minimizing the risk of accidental database changes.
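The pattern itself is independent of SQLAlchemy; the in-memory sketch below illustrates it with plain Python (names here are illustrative, not the project's BaseRepository API): callers go through named operations and never touch the underlying storage directly.

```python
from typing import Dict

class DoesNotExist(Exception):
    """Raised when a lookup finds no matching entry."""

class InMemoryDataLayerRepository:
    """A minimal repository over an in-memory dict, standing in for a table."""

    def __init__(self) -> None:
        self._rows: Dict[str, dict] = {}

    def add(self, entry: dict) -> None:
        self._rows[entry["name"]] = entry

    def get_by_name(self, entry_name: str) -> dict:
        entry = self._rows.get(entry_name)
        if entry is None:
            raise DoesNotExist(f"DataLayer {entry_name!r} does not exist")
        return entry

# Callers depend only on the repository's interface, so the storage
# backend can change without touching API code.
repo = InMemoryDataLayerRepository()
repo.add({"name": "zebra_crossings", "status": "permanent"})
```

Swapping this dict for a database session changes only the repository's internals, which is exactly the isolation the production code relies on.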
Our backend code is containerized using Docker, with a Dockerfile specifying how to construct the container. Docker Compose is used to manage multiple containers. Containerization isolates the environment in which the backend runs, making the code system-independent and allowing for easy deployment on Azure via their container registry system.
A key feature of our backend is geospatial data management. We've defined our own system for converting longitude and latitude coordinates into grid systems of different resolutions (3mx3m and 0.5mx0.5m):
def convert_to_3_grid(latitude, longitude):
    lat_in_meters = latitude * 111320  # 1 degree latitude = 111.32 km
    lon_in_meters = (
        longitude * 40075000 * math.cos(math.radians(latitude)) / 360
    )  # see https://stackoverflow.com/a/39540339
    grid_x = int(lat_in_meters / 3)
    grid_y = int(lon_in_meters / 3)
    return grid_x, grid_y

def convert_to_05_grid(latitude, longitude):
    lat_in_meters = latitude * 111320  # 1 degree latitude ~ 111.32 km
    lon_in_meters = (
        longitude * 40075000 * math.cos(math.radians(latitude)) / 360
    )  # see https://stackoverflow.com/a/39540339
    grid_x = int(lat_in_meters / 0.5)
    grid_y = int(lon_in_meters / 0.5)
    return grid_x, grid_y
The backend also identifies all grid cells that a location falls under and calculates midpoints:
def calculate_midpoint(bottom_left, top_right):
    mid_latitude = (bottom_left[0] + top_right[0]) / 2
    mid_longitude = (bottom_left[1] + top_right[1]) / 2
    return mid_latitude, mid_longitude

def get_covered_grid_cells(bottom_left, top_right, resolution):
    if resolution == 3.0:  # this corresponds to the 3.0 x 3.0 grid
        convert_to_grid = convert_to_3_grid
    elif resolution == 0.5:
        convert_to_grid = convert_to_05_grid
    else:
        raise ValueError("Invalid resolution")
    bottom_left_grid = convert_to_grid(bottom_left[0], bottom_left[1])
    top_right_grid = convert_to_grid(top_right[0], top_right[1])
    covered_cells = []
    for x in range(bottom_left_grid[0], top_right_grid[0] + 1):
        for y in range(bottom_left_grid[1], top_right_grid[1] + 1):
            covered_cells.append({"grid_x": x, "grid_y": y})
    return covered_cells
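As a worked example of the grid logic, the snippet below (with the 3 m helpers copied from above so it runs standalone) covers a roughly 9 m by 9 m box near central London; the exact indices depend on floating-point truncation, so only structural properties are worth relying on:

```python
import math

# Copies of the 3 m grid helpers above, so this demo is self-contained.
def convert_to_3_grid(latitude, longitude):
    lat_in_meters = latitude * 111320
    lon_in_meters = longitude * 40075000 * math.cos(math.radians(latitude)) / 360
    return int(lat_in_meters / 3), int(lon_in_meters / 3)

def covered_cells_3(bottom_left, top_right):
    bl = convert_to_3_grid(*bottom_left)
    tr = convert_to_3_grid(*top_right)
    return [
        {"grid_x": x, "grid_y": y}
        for x in range(bl[0], tr[0] + 1)
        for y in range(bl[1], tr[1] + 1)
    ]

# A ~9 m x ~9 m box (illustrative coordinates near central London)
# spans roughly a 3x3 to 4x4 block of 3 m cells.
cells = covered_cells_3((51.5074, -0.1278), (51.50748, -0.12767))
```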
To optimize API performance, we implemented a proximity search algorithm that fetches only locations within a specified radius of the user:
async def get_nearby_locations(
    self, latitude: float, longitude: float, radius: float
):
    # Convert radius from miles to degrees (approximation: 1 degree ~ 69 miles)
    radius_in_degrees = radius / 69.0
    # Calculate bounding box for the search radius
    min_lat = latitude - radius_in_degrees
    max_lat = latitude + radius_in_degrees
    min_lon = longitude - radius_in_degrees
    max_lon = longitude + radius_in_degrees
    query = (
        select(Location)
        .options(
            joinedload(Location.location_layers).joinedload(
                LocationLayer.data_layer
            )
        )
        .where(
            or_(
                and_(
                    Location.bottom_left_latitude >= min_lat,
                    Location.bottom_left_latitude <= max_lat,
                    Location.bottom_left_longitude >= min_lon,
                    Location.bottom_left_longitude <= max_lon,
                ),
                and_(
                    Location.top_right_latitude >= min_lat,
                    Location.top_right_latitude <= max_lat,
                    Location.top_right_longitude >= min_lon,
                    Location.top_right_longitude <= max_lon,
                ),
                and_(
                    Location.bottom_left_latitude <= min_lat,
                    Location.top_right_latitude >= max_lat,
                    Location.bottom_left_longitude <= min_lon,
                    Location.top_right_longitude >= max_lon,
                ),
            )
        )
    )
    result = await self._db_session.execute(query)
    # joinedload against a collection requires unique() before scalars()
    return result.unique().scalars().all()
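The three OR'd conditions match a location if its bottom-left corner lies inside the search box, its top-right corner does, or the location fully contains the box. In plain Python the same predicate looks roughly like this (a sketch of the logic, not the production query):

```python
def intersects_search_box(loc: dict, latitude: float, longitude: float,
                          radius_miles: float) -> bool:
    # Same approximation as the query: one degree of latitude is ~69 miles.
    r = radius_miles / 69.0
    min_lat, max_lat = latitude - r, latitude + r
    min_lon, max_lon = longitude - r, longitude + r

    # Bottom-left corner inside the search box
    bl_inside = (min_lat <= loc["bottom_left_latitude"] <= max_lat
                 and min_lon <= loc["bottom_left_longitude"] <= max_lon)
    # Top-right corner inside the search box
    tr_inside = (min_lat <= loc["top_right_latitude"] <= max_lat
                 and min_lon <= loc["top_right_longitude"] <= max_lon)
    # Location rectangle fully contains the search box
    contains_box = (loc["bottom_left_latitude"] <= min_lat
                    and loc["top_right_latitude"] >= max_lat
                    and loc["bottom_left_longitude"] <= min_lon
                    and loc["top_right_longitude"] >= max_lon)
    return bl_inside or tr_inside or contains_box
```

Evaluating this in the database via the indexed coordinate columns, rather than fetching every row and filtering in Python, is what makes the endpoint fast.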
Our backend includes a robust security layer to protect endpoints from unauthorized access:
We have admin-only endpoints for creating users and issuing API keys:
@router.post("/admin/users", response_model=UserSchema)
async def admin_create_user(
name: str = Body(..., embed=True),
is_admin: bool = Body(False, embed=True),
db: AsyncSession = Depends(get_db),
_: Any = Depends(admin_required)
):
"""Admin only endpoint to create a new user"""
user_repo = UserRepository(db)
now = datetime.now()
user_data = {
"name": name,
"is_admin": is_admin,
"created_at": now,
"updated_at": now
}
new_user = await user_repo.create(user_data)
return new_user
@router.post("/admin/users/{user_id}/api-keys", response_model=Dict[str, Any])
async def admin_create_api_key(
user_id: UUID,
name: str = Body("Default API Key", embed=True),
db: AsyncSession = Depends(get_db),
_: Any = Depends(admin_required) # verify admin access
):
"""admin only endpoint to create an API key for any user"""
user_repo = UserRepository(db)
try:
user = await user_repo.get_by_id(user_id)
except DoesNotExist:
raise HTTPException(status_code=404, detail="User not found")
# generate API key
api_key = generate_api_key()
hashed_key = hash_api_key(api_key)
# add it to db
api_key_repo = UserApiKeyRepository(db)
api_key_data = {
"name": name,
"user_id": user_id,
"hashed_key": hashed_key,
"is_active": True
}
created_key = await api_key_repo.create(api_key_data)
# the only time the key will be returned
return {
"id": created_key.id,
"name": created_key.name,
"user_id": created_key.user_id,
"api_key": api_key, # the unhashed key - only shown once
"created_at": created_key.created_at
}
All endpoints use authentication dependencies to verify API keys:
API_KEY_HEADER = APIKeyHeader(name="X-API-Key", auto_error=False)

async def get_current_user_id(
    api_key: Optional[str] = Depends(API_KEY_HEADER),
    db: AsyncSession = Depends(get_db),
) -> UUID:
    """Dependency to validate the API key and return the user_id"""
    if not api_key:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Missing API key",
            headers={"WWW-Authenticate": "ApiKey"},
        )
    user_id = await verify_api_key(db, api_key)
    if not user_id:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Invalid API key",
            headers={"WWW-Authenticate": "ApiKey"},
        )
    return user_id
async def get_current_user(
    user_id: UUID = Depends(get_current_user_id),
    db: AsyncSession = Depends(get_db),
):
    """Get the current user based on API key authentication"""
    user_repo = UserRepository(db)
    user = await user_repo.get_by_id(user_id)
    if not user:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="User not found",
            headers={"WWW-Authenticate": "ApiKey"},
        )
    return user
The admin_required dependency adds an additional security layer:
async def admin_required(user=Depends(get_current_user)):
    if not user.is_admin:
        raise HTTPException(
            status_code=status.HTTP_403_FORBIDDEN,
            detail="Admin privileges required",
        )
    return user
API key operations are isolated in a dedicated service class:
class ApiKeyService:
    def __init__(self, db_session: AsyncSession):
        self.repository = UserApiKeyRepository(db_session)

    async def create_api_key(self, user_id: UUID, name: str) -> Dict[str, Any]:
        api_key = generate_api_key()
        hashed_key = hash_api_key(api_key)
        api_key_data = ApiKeyCreate(
            id=uuid.uuid4(),
            name=name,
            user_id=user_id,
            hashed_key=hashed_key,
        )
        db_api_key = await self.repository.create(obj_in=api_key_data)
        return {
            "id": db_api_key.id,
            "name": db_api_key.name,
            "api_key": api_key,  # actual key - shown only once
            "created_at": db_api_key.created_at,
        }

    async def list_user_api_keys(self, user_id: UUID) -> list:
        return await self.repository.get_by_user_id(user_id=user_id)

    async def revoke_api_key(self, key_id: UUID, user_id: UUID) -> bool:
        key = await self.repository.get(id=key_id)
        if not key or key.user_id != user_id:
            return False
        await self.repository.revoke_key(key_id)
        return True
For security, API keys are hashed before storage and verified using:
async def verify_api_key(db: AsyncSession, api_key: str) -> Optional[UUID]:
    api_key_repo = UserApiKeyRepository(db)
    hashed_key = hash_api_key(api_key)
    api_key_obj = await api_key_repo.get_by_hashed_key(hashed_key=hashed_key)
    if not api_key_obj or not api_key_obj.is_active:
        return None
    return api_key_obj.user_id
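The bodies of generate_api_key and hash_api_key are not shown above; one common standard-library approach consistent with the lookup-by-hash pattern (a sketch under that assumption, not necessarily the project's implementation) is a random URL-safe token hashed with SHA-256:

```python
import hashlib
import secrets

def generate_api_key() -> str:
    # 32 random bytes -> a ~43-character URL-safe token.
    return secrets.token_urlsafe(32)

def hash_api_key(api_key: str) -> str:
    # A deterministic hash, so the stored value can be matched on lookup
    # via get_by_hashed_key; the plaintext key is never persisted.
    return hashlib.sha256(api_key.encode("utf-8")).hexdigest()
```

A deterministic hash is what makes get_by_hashed_key possible; a salted scheme like bcrypt would require fetching candidate rows and verifying each one instead.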
To efficiently handle the many-to-many relationship between data layers and locations, we created a separate junction table called location_layers. This table stores relationships between location entities and data layer entities with just two foreign key fields: location_id and data_layer_id.
We added two additional fields to this table:
- status: an enum with possible values temporary and permanent
- expires_at: an expiry timestamp for temporary data points like potholes or construction
We use a Makefile to streamline development workflows:
backend-local: down remove-volume
	ENV_FILE=.env.local docker-compose up --build -d
	docker-compose exec app alembic upgrade head
We use Poetry to manage package dependencies in an isolated virtual environment:
[tool.poetry.dependencies]
python = "^3.13"
fastapi = "^0.115.6"
SQLAlchemy = "^2.0.36"
uvicorn = "^0.34.0"
asyncpg = "^0.30.0"
alembic = "^1.14.0"
psycopg2 = "^2.9.10"
pydantic-settings = "^2.7.0"
greenlet = "^3.1.1"
fastapi-cors = "^0.0.6"
pytest = "^8.3.5"
pytest-asyncio = "^0.25.3"
gunicorn = "^23.0.0"
passlib = "^1.7.4"
bcrypt = "^4.3.0"
We use a configuration system to handle different environments (production, local):
class GlobalConfig(BaseSettings):
    TITLE: str = "Data layers for accessibility"
    DESCRIPTION: str = (
        "This project will provide various outdoor object data layers for accessibility purposes."
    )
    ENVIRONMENT: EnvironmentEnum
    DEBUG: bool = False
    TESTING: bool = False
    TIMEZONE: str = "UTC"
    DATABASE_URL: Optional[str] = (
        "postgresql://postgres:postgres@127.0.0.1:5432/postgres"
    )
    DB_NAME: str = "Data Layers for Accessibility"
    DB_ECHO_LOG: bool = False

    @property
    def async_database_url(self) -> Optional[str]:
        return (
            self.DATABASE_URL.replace("postgresql://", "postgresql+asyncpg://")
            if self.DATABASE_URL
            else self.DATABASE_URL
        )

    # API v1 prefix
    API_V1_STR: str = "/v1"
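The project's settings come from pydantic-settings, which reads field values from the environment; a framework-free sketch of the same environment-driven idea using only the standard library (names here are illustrative) looks like:

```python
import os
from dataclasses import dataclass

# Stdlib-only sketch of environment-driven configuration; the real
# project uses pydantic-settings, and these names are illustrative.
@dataclass
class Config:
    environment: str
    debug: bool
    database_url: str

    @property
    def async_database_url(self) -> str:
        # Same driver rewrite as GlobalConfig.async_database_url above.
        return self.database_url.replace("postgresql://", "postgresql+asyncpg://")

def load_config() -> Config:
    env = os.getenv("ENVIRONMENT", "local")
    return Config(
        environment=env,
        debug=(env == "local"),
        database_url=os.getenv(
            "DATABASE_URL",
            "postgresql://postgres:postgres@127.0.0.1:5432/postgres",
        ),
    )
```

Because every value falls back to a default, the same code runs locally with no setup and in production with environment variables supplied by the container.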