diff --git a/RELEASE.md b/RELEASE.md
index ca510aa..e9c1027 100644
--- a/RELEASE.md
+++ b/RELEASE.md
@@ -1,5 +1,9 @@
 # Release Notes
 
+## v1.5.11
+
+- Add ALPRDemo to deployment-examples
+
 ## v1.5.10
 
 - Update SIO to r240909
diff --git a/VERSION b/VERSION
index 341724d..bbb1b25 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-v1.5.10
+v1.5.11
diff --git a/deployment-examples/ALPRDemo/README.md b/deployment-examples/ALPRDemo/README.md
new file mode 100644
index 0000000..2d9de28
--- /dev/null
+++ b/deployment-examples/ALPRDemo/README.md
@@ -0,0 +1,62 @@
+# ALPR Demo Application
+
+A full-featured demo, with SIO monitoring one or more video streams, an AMQP client consuming the data from one or more instances, and a front-end REST API making past events available for search.
+
+Contact [support@sighthound.com](mailto:support@sighthound.com) with any questions, and visit our [Developer Portal](https://dev.sighthound.com) for more information.
+
+
+## Components and containers
+
+* live555_svc
+  * Demo-specific container, serving two pre-recorded files over RTSP to emulate live video
+  * Can be disabled or removed if the SIO container configuration is modified to point to alternative live streams
+
+* analytics_svc
+  * The SIO container is the brains of the operation. It consumes live video and/or monitors a folder, and emits analytics along with recorded media.
+  * The configuration provided with the sample runs four analytics pipelines:
+    * `stream1`, `stream2` - two identical pipelines, each monitoring an RTSP stream
+    * `folderWatchUS` - a pipeline monitoring a folder and generating analytics for files deposited there, in the context of US makes/models/license plates.
+    * `folderWatchEU` - same, but using the EU context
+  * Configuration items of interest:
+    * `./config/sio-pipelines.json` - specifies the set of pipelines to run, and the configuration for each
+    * `./config/sio-box-filter.json` - specifies the box filter referenced by the pipeline configurations
+
+* rabbitmq_svc
+  * RabbitMQ broker.
+  * You can always choose whether to run a dedicated broker along with SIO, or point SIO (and the relevant clients) to your own instance, perhaps one used for other purposes.
+
+* mcp_svc
+  * MCP, or Media Control Point service, controls the access to, and life cycle of, media items (images and videos) generated by SIO.
+
+* dbclient_svc
+  * A sample client consuming analytics from the AMQP broker and saving observed license plates into a database.
+  * Source code for it is located in `./consumer`
+
+* rest_svc
+  * A sample Flask-based REST service providing access to the database `dbclient_svc` writes to, as well as web access to the folder watch functionality of SIO
+  * Source code for rest_svc is located in `./backend`
+
+* UI Client (Python)
+  * Source for it is in `./ui/python`
+  * Can (and perhaps should) be run remotely, relative to the docker-compose deployment.
+  * Make sure to install requirements with `pip3 install -r ./ui/python/requirements.txt`
+  * Then run with `cd ./ui/python && python3 ALPRDemo.py`
+
+## General
+
+Before getting started, you must copy your `sighthound-license.json` into the `./ALPRDemo/config/` folder. If you do not have a license, please contact [support@sighthound.com](mailto:support@sighthound.com).
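+For example, assuming the license file was downloaded to your home directory (adjust the source path as needed):
+
+```bash
+cp ~/sighthound-license.json ./ALPRDemo/config/
+```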
+ + +Next, open a terminal, `cd` into the `./ALPRDemo/` folder, and run the following command to start the services: + +```bash +docker compose up -d +``` + +If you have an NVIDIA GPU installed and properly configured, you can run the following command instead to enable GPU acceleration: + +```bash +SIO_DOCKER_RUNTIME=nvidia docker compose up -d +``` + + diff --git a/deployment-examples/ALPRDemo/backend/Dockerfile b/deployment-examples/ALPRDemo/backend/Dockerfile new file mode 100644 index 0000000..b96bfbf --- /dev/null +++ b/deployment-examples/ALPRDemo/backend/Dockerfile @@ -0,0 +1,14 @@ +FROM python:3.9 + +WORKDIR /usr/src/app +COPY requirements.txt /usr/src/app/ +COPY entrypoint.sh /usr/src/app/ +COPY rest.py /usr/src/app/ +RUN pip3 install -r requirements.txt + +ENV PYTHONPATH=${PYTHONPATH}:/usr/src/app:/usr/src/app/common:/usr/src/app/lib +ENV PYTHONUNBUFFERED=1 + + + +ENTRYPOINT [ "/bin/bash", "-c", "/usr/src/app/entrypoint.sh" ] diff --git a/deployment-examples/ALPRDemo/backend/entrypoint.sh b/deployment-examples/ALPRDemo/backend/entrypoint.sh new file mode 100755 index 0000000..604446f --- /dev/null +++ b/deployment-examples/ALPRDemo/backend/entrypoint.sh @@ -0,0 +1,4 @@ +ls -la /usr/src/app/ + +export FLASK_ENV=development +python3 /usr/src/app/rest.py \ No newline at end of file diff --git a/deployment-examples/ALPRDemo/backend/requirements.txt b/deployment-examples/ALPRDemo/backend/requirements.txt new file mode 100644 index 0000000..36eab1e --- /dev/null +++ b/deployment-examples/ALPRDemo/backend/requirements.txt @@ -0,0 +1,4 @@ +flask +requests +pillow +numpy \ No newline at end of file diff --git a/deployment-examples/ALPRDemo/backend/rest.py b/deployment-examples/ALPRDemo/backend/rest.py new file mode 100644 index 0000000..01e7eae --- /dev/null +++ b/deployment-examples/ALPRDemo/backend/rest.py @@ -0,0 +1,234 @@ +from flask import Flask, request, jsonify, send_file, g +import sqlite3 +import io +import os +import time +import json +import uuid 
+import traceback
+from Database import LicensePlate, LicensePlateDB
+from datetime import datetime
+from threading import Lock, local
+from MCP import MCPClient
+
+
+#========================================================================
+class CacheStore:
+    def __init__(self, factory_method, param1=None):
+        """
+        Initialize the CacheStore with a factory method and an optional parameter.
+
+        :param factory_method: A callable that generates the cached object.
+        :param param1: Optional parameter passed to the factory method.
+        """
+        self._thread_local = local()
+        self._factory_method = factory_method
+        self._param1 = param1
+
+    def get(self):
+        """
+        Retrieve the cached object, creating it using the factory method if necessary.
+        Each thread gets its own instance.
+
+        :return: The cached object.
+        """
+        if not hasattr(self._thread_local, 'cache'):
+            if self._param1 is None:
+                self._thread_local.cache = self._factory_method()
+            else:
+                self._thread_local.cache = self._factory_method(self._param1)
+        return self._thread_local.cache
+
+
+
+
+app = Flask(__name__)
+
+gDBCache = CacheStore(LicensePlateDB, os.environ.get("DB_PATH", "/data/sighthound/db/lpdb.sqlite"))
+
+
+#========================================================================
+gUploadCache = {}
+gUploadCacheMtx = Lock()
+
+def get_upload_cache_entry(id):
+    with gUploadCacheMtx:
+        return gUploadCache.get(id, None)
+
+def set_upload_cache_entry(id, value):
+    with gUploadCacheMtx:
+        gUploadCache[id] = value
+
+#========================================================================
+def convert_to_epoch(date_str, time_str):
+    # Combine date and time strings into a single datetime string
+    datetime_str = f"{date_str} {time_str}"
+
+    # Parse the combined string into a datetime object
+    dt = datetime.strptime(datetime_str, "%Y%m%d %H%M")
+
+    # Convert the datetime object to an epoch timestamp
+    epoch_timestamp = int(dt.timestamp())
+
+    return epoch_timestamp
+
+#========================================================================
+# Establish a global database connection
+def get_db():
+    return gDBCache.get()
+
+#========================================================================
+# Establish a global MCP client object
+def create_mcp():
+    # Create MCP Client
+    mcp_conf = {}
+    mcp_conf["host"] = os.environ.get("MCP_HOST", "mcp_svc")
+    mcp_conf["port"] = os.environ.get("MCP_PORT", 9097)
+    mcp_conf["username"] = os.environ.get("MCP_USERNAME", None)
+    mcp_conf["password"] = os.environ.get("MCP_PASSWORD", None)
+    mcp = MCPClient(mcp_conf)
+    return mcp
+
+gMCPCache = CacheStore(create_mcp)
+
+def get_mcp():
+    return gMCPCache.get()
+
+#========================================================================
+def plates_between_times(start_time, end_time):
+    db = get_db()
+    plates = db.get_by_time_range(start_time, end_time)
+    plates_as_dicts = [obj.to_dict() for obj in plates]
+    return jsonify(plates_as_dicts)
+
+#========================================================================
+@app.route('/plates/bytimeanddate/<startdate>/<starttime>', methods=['GET'])
+@app.route('/plates/bytimeanddate/<startdate>/<starttime>/<enddate>/<endtime>', methods=['GET'])
+def get_plates_between(startdate, starttime, enddate=None, endtime=None):
+    start_time = convert_to_epoch(startdate, starttime)
+    if enddate is not None:
+        end_time = convert_to_epoch(enddate, endtime)
+    else:
+        end_time = int(time.time())
+    return plates_between_times(start_time, end_time)
+
+#========================================================================
+@app.route('/plates/latest', methods=['GET'])
+@app.route('/plates/latest/<int:count>', methods=['GET'])
+def get_latest_plates(count=10):
+    db = get_db()
+    plates = db.get_most_recent(count)
+    plates_as_dicts = [obj.to_dict() for obj in plates]
+    return jsonify(plates_as_dicts)
+
+#========================================================================
+@app.route('/plates/search', methods=['GET'])
+@app.route('/plates/search/<startdate>/<starttime>', methods=['GET'])
+@app.route('/plates/search/<startdate>/<starttime>/<enddate>/<endtime>', methods=['GET'])
+def get_plates_matching(startdate=None, starttime=None, enddate=None, endtime=None):
+    start_time = None
+    end_time = None
+    if startdate is not None:
+        start_time = convert_to_epoch(startdate, starttime)
+    if enddate is not None:
+        end_time = convert_to_epoch(enddate, endtime)
+    else:
+        end_time = int(time.time())
+    # Translate glob-style wildcards into SQL LIKE wildcards
+    search_term = request.args.get('plate', '')
+    search_term = search_term.replace('*','%').replace('?','_')
+    db = get_db()
+    plates = db.get_by_plate_string(search_term, start_time, end_time)
+    plates_as_dicts = [obj.to_dict() for obj in plates]
+    return jsonify(plates_as_dicts)
+
+#========================================================================
+@app.route('/plates/image/<source_id>', methods=['GET'])
+def get_image(source_id):
+    image_id = request.args.get('id')
+    img_data = get_mcp().get_image(source_id, image_id, "source")
+
+    if img_data is not None:
+        return send_file(
+            io.BytesIO(img_data),
+            mimetype='image/jpeg',
+            as_attachment=False
+        )
+    else:
+        return jsonify({"error": "Image not found"}), 404
+
+
+
+#========================================================================
+# Route to upload a file
+ALLOWED_EXTENSIONS = [ 'jpeg', 'webp', 'bmp', 'jpg', 'png', 'mp4', 'mkv', 'ts' ]
+
+def allowed_file(filename):
+    return '.' in filename and filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS
+
+@app.route('/folderwatch/upload/<region>', methods=['POST'])
+def upload_file(region):
+    # Check if the post request has the file part
+    if 'file' not in request.files:
+        return jsonify({'error': 'No file part in the request'}), 400
+
+    if region != 'eu' and region != 'us':
+        return jsonify({'error': 'Invalid region'}), 400
+
+    file = request.files['file']
+
+    # If the user does not select a file
+    if file.filename == '':
+        return jsonify({'error': 'No selected file'}), 400
+
+    if not allowed_file(file.filename):
+        return jsonify({'error': 'File type not allowed'}), 400
+
+    # Generate a unique filename using UUID and retain the file extension
+    id = uuid.uuid4()
+    ext = os.path.splitext(file.filename)[1]
+    filename = f"{id}{ext}"
+    filepath = os.path.join(app.config['UPLOAD_FOLDER'], region, filename)
+
+    # Save the file
+    file.save(filepath)
+    set_upload_cache_entry(f"{id}", (region, filename))
+
+    return jsonify({'message': 'File successfully uploaded', 'filename': filename, 'id' : id}), 200
+
+#========================================================================
+# Check processing status
+@app.route('/folderwatch/status/<upload_id>', methods=['GET'])
+def upload_status(upload_id):
+    fm = get_upload_cache_entry(upload_id)
+    if fm is None:
+        print(f"Upload id {upload_id} isn't found.")
+        return jsonify({'error': 'Invalid upload id'}), 400
+
+    region, filename = fm
+    pathUploaded = os.path.join(app.config['UPLOAD_FOLDER'], region, filename)
+    if os.path.isfile(pathUploaded):
+        return jsonify({'status': 'pending' }), 200
+    pathProcessed = os.path.join(app.config['UPLOAD_FOLDER'], region, 'processed', filename + ".json")
+    if os.path.isfile(pathProcessed):
+        with open(pathProcessed, 'r') as f:
+            result_data = json.load(f)
+        return jsonify({'status': 'completed', 'result':result_data }), 200
+
+    print(f"File {pathUploaded} or {pathProcessed} isn't found.")
+    return jsonify({'error': 'File not found'}), 400
+
+#========================================================================
+if __name__ == '__main__':
+    port = int(os.getenv("REST_PORT", 5000))
+    host = os.getenv("REST_HOST", '0.0.0.0')
+    app.config['UPLOAD_FOLDER'] = os.getenv("UPLOAD_FOLDER", '/data/folder-watch-input')
+    try:
+        print(f"Running REST provider on {host}:{port}")
+        app.run(host=host, port=port, debug=True)
+    except:
+        print(f"{traceback.format_exc()}")
+        raise
diff --git a/deployment-examples/ALPRDemo/common/Database.py b/deployment-examples/ALPRDemo/common/Database.py
new file mode 100644
index 0000000..b8256df
--- /dev/null
+++ b/deployment-examples/ALPRDemo/common/Database.py
@@ -0,0 +1,162 @@
+import sqlite3
+import os
+from datetime import datetime, timedelta
+
+class LicensePlate:
+    def __init__(self, object_id, region, plate_string, detection_time, source_id, x, y, w, h, imageId):
+        self.object_id = object_id
+        self.region = region
+        self.plate_string = plate_string
+        self.detection_time = detection_time
+        self.source_id = source_id
+        self.x = x
+        self.y = y
+        self.w = w
+        self.h = h
+        self.imageId = imageId if imageId else ""
+
+    def to_dict(self):
+        return {
+            "oid" : self.object_id,
+            "string" : self.plate_string,
+            "region" : self.region,
+            "time" : self.detection_time,
+            "sourceId" : self.source_id,
+            "rect" : [ self.x, self.y, self.w, self.h ],
+            "imageId" : self.imageId
+        }
+
+class LicensePlateDB:
+    def __init__(self, db_path):
+        # Create the parent folder of the DB file if it doesn't exist
+        os.makedirs(os.path.dirname(db_path) or ".", exist_ok=True)
+        self.conn = sqlite3.connect(db_path)
+        self.create_table()
+
+    def create_table(self):
+        with self.conn:
+            self.conn.execute('''
+                CREATE TABLE IF NOT EXISTS plates (
+                    id INTEGER PRIMARY KEY AUTOINCREMENT,
+                    object_id TEXT UNIQUE,
+                    region TEXT,
+                    plate_string TEXT,
+                    detection_time INTEGER, -- Epoch time
+                    source_id TEXT,
+                    x INTEGER,
+                    y INTEGER,
+                    w INTEGER,
+                    h INTEGER,
+                    imageId TEXT
+                )
+            ''')
+
+    def add_detection(self, license_plate):
+        # Upsert keyed on the unique object_id column
+        with self.conn:
+            self.conn.execute('''
+                INSERT INTO plates (object_id, region, plate_string, detection_time, source_id, x, y, w, h, imageId)
+                VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+                ON CONFLICT(object_id)
+                DO UPDATE SET
+                    region = excluded.region,
+                    plate_string = excluded.plate_string,
+                    detection_time = excluded.detection_time,
+                    x = excluded.x,
+                    y = excluded.y,
+                    w = excluded.w,
+                    h = excluded.h,
+                    imageId = excluded.imageId;
+            ''', (
+                license_plate.object_id, license_plate.region, license_plate.plate_string,
+                int(license_plate.detection_time), license_plate.source_id,
+                license_plate.x, license_plate.y, license_plate.w, license_plate.h,
+                license_plate.imageId
+            ))
+
+    def delete_by_age(self, max_age_days):
+        cutoff_time = int((datetime.now() - timedelta(days=max_age_days)).timestamp())
+        with self.conn:
+            self.conn.execute('''
+                DELETE FROM plates WHERE detection_time < ?
+            ''', (cutoff_time,))
+
+    def get_most_recent(self, count):
+        cur = self.conn.cursor()
+        cur.execute('''
+            SELECT * FROM plates ORDER BY detection_time DESC LIMIT ?
+        ''', (count,))
+        rows = cur.fetchall()
+        return [LicensePlate(*row[1:]) for row in rows]
+
+    def get_by_time_range(self, start_time, end_time):
+        cur = self.conn.cursor()
+        cur.execute('''
+            SELECT * FROM plates
+            WHERE detection_time BETWEEN ? AND ?
+            ORDER BY detection_time DESC
+        ''', (int(start_time), int(end_time)))
+        rows = cur.fetchall()
+        return [LicensePlate(*row[1:]) for row in rows]
+
+    def get_by_plate_string(self, plate_string, start_time=None, end_time=None):
+        cur = self.conn.cursor()
+        if start_time is not None and end_time is not None:
+            cur.execute('''
+                SELECT * FROM plates
+                WHERE plate_string LIKE ? AND detection_time BETWEEN ? AND ?
+                ORDER BY detection_time DESC
+            ''', (plate_string, int(start_time), int(end_time)))
+        else:
+            cur.execute('''
+                SELECT * FROM plates WHERE plate_string LIKE ?
+            ''', (plate_string,))
+        rows = cur.fetchall()
+        return [LicensePlate(*row[1:]) for row in rows]
+
+    def close(self):
+        self.conn.close()
+
+# Usage example:
+if __name__ == '__main__':
+    db = LicensePlateDB('license_plates.db')
+
+    # Create a new LicensePlate object
+    current_time = int(datetime.now().timestamp())
+    license_plate = LicensePlate(
+        object_id='123ABC',
+        region='FL',
+        plate_string='ABC1234',
+        detection_time=current_time,
+        source_id='source-1',
+        x=50, y=100, w=200, h=300,
+        imageId="0"
+    )
+
+    # Adding a new detection
+    db.add_detection(license_plate)
+
+    # Update the detection - add_detection performs an upsert keyed on object_id
+    license_plate.region = 'CA'
+    license_plate.plate_string = 'XYZ5678'
+    db.add_detection(license_plate)
+
+    # Query for the 5 most recent detections
+    recent_detections = db.get_most_recent(5)
+    for detection in recent_detections:
+        print(vars(detection))
+
+    # Query for detections in a time range
+    start_time = int(datetime(2024, 8, 1).timestamp())
+    end_time = int(datetime(2024, 8, 20).timestamp())
+    detections_in_range = db.get_by_time_range(start_time, end_time)
+    for detection in detections_in_range:
+        print(vars(detection))
+
+    # Query for detections with a specific plate string
+    specific_detections = db.get_by_plate_string('XYZ5678')
+    for detection in specific_detections:
+        print(vars(detection))
+
+    # Delete detections older than 30 days
+    db.delete_by_age(30)
+
+    db.close()
diff --git 
a/deployment-examples/ALPRDemo/config/mcp.yml b/deployment-examples/ALPRDemo/config/mcp.yml new file mode 100644 index 0000000..dd7ac3d --- /dev/null +++ b/deployment-examples/ALPRDemo/config/mcp.yml @@ -0,0 +1,25 @@ +amqp: + enable: true + host: rabbitmq_svc + username: guest + password: guest + maxLengthSize: 100Mb +backend: + host: 0.0.0.0 + port: 9097 +sqlite: + enable: true + path: /data/sighthound/db/sqlite.db + auditLogFile: /data/sighthound/logs/mcp/sqlite.log + auditMaxMb: 100 +media: + path: /data/sighthound/media +cleaner: + enable: true + maxDiskUtilization: 90 + maxMediaSize: 500 MB + auditLogFile: /data/sighthound/logs/mcp/cleaner.log + auditMaxMb: 100 +logs: + level: info + format: human-readable diff --git a/deployment-examples/ALPRDemo/config/rabbitmq-definitions.json b/deployment-examples/ALPRDemo/config/rabbitmq-definitions.json new file mode 100644 index 0000000..af5c9e3 --- /dev/null +++ b/deployment-examples/ALPRDemo/config/rabbitmq-definitions.json @@ -0,0 +1,60 @@ +{ + "rabbit_version": "3.8.20", + "rabbitmq_version": "3.8.20", + "product_name": "RabbitMQ", + "product_version": "3.8.20", + "users": [ + { + "name": "guest", + "password_hash": "YMmuH8796TNxLiwksKPOEfaI4usEoH1lYsdYzhcYXQq5wMlD", + "hashing_algorithm": "rabbit_password_hashing_sha256", + "tags": "administrator", + "limits": {} + } + ], + "vhosts": [ + { + "name": "/" + } + ], + "permissions": [ + { + "user": "guest", + "vhost": "/", + "configure": ".", + "write": ".", + "read": ".*" + } + ], + "topic_permissions": [], + "parameters": [], + "global_parameters": [ + { + "name": "internal_cluster_id", + "value": "rabbitmq-cluster-id-C82kbG-3JTiyYVyQdMbW0g" + } + ], + "policies": [], + "queues": [], + "exchanges": [ + { + "name": "anypipe", + "vhost": "/", + "type": "topic", + "durable": true, + "auto_delete": false, + "internal": false, + "arguments": { "expires" : 60000 } + }, + { + "name": "aqueduct", + "vhost": "/", + "type": "topic", + "durable": true, + "auto_delete": false, + 
"internal": false, + "arguments": { "expires" : 60000 } + } + ], + "bindings": [] + } \ No newline at end of file diff --git a/deployment-examples/ALPRDemo/config/rabbitmq.conf b/deployment-examples/ALPRDemo/config/rabbitmq.conf new file mode 100644 index 0000000..9a4cc8a --- /dev/null +++ b/deployment-examples/ALPRDemo/config/rabbitmq.conf @@ -0,0 +1,13 @@ +default_user = guest +default_pass = guest + +listeners.tcp.default = 5672 +management.tcp.port = 15672 +web_stomp.tcp.port = 15674 + +management.load_definitions = /etc/rabbitmq/definitions.json + +# This needs to be removed when rabbitmq is exposed outside the docker environment +# in production, as it makes the UI and topics/queues accessible from the loopback guest user +disk_free_limit.absolute = 5368709120B +loopback_users.guest = false diff --git a/deployment-examples/ALPRDemo/config/sio-box-filter.json b/deployment-examples/ALPRDemo/config/sio-box-filter.json new file mode 100644 index 0000000..6e93228 --- /dev/null +++ b/deployment-examples/ALPRDemo/config/sio-box-filter.json @@ -0,0 +1,20 @@ +[ + { + "name": "plateBike_SizeFilter", + "type": "size", + "subtype": "dimension", + "max": 0, + "min": 10, + "classes": ["licenseplate", "motorbike"], + "debug": false + }, + { + "name": "vehicle_SizeFilter", + "type": "size", + "subtype": "dimension", + "max": 0, + "min": 15, + "classes": ["car", "bus", "truck"], + "debug": false + } +] diff --git a/deployment-examples/ALPRDemo/config/sio-pipelines.json b/deployment-examples/ALPRDemo/config/sio-pipelines.json new file mode 100644 index 0000000..814ac75 --- /dev/null +++ b/deployment-examples/ALPRDemo/config/sio-pipelines.json @@ -0,0 +1,93 @@ +{ + "stream1" : { + "pipeline" : "./share/pipelines/VehicleAnalytics/VehicleAnalyticsRTSP.yaml", + "restartPolicy" : "restart", + "parameters" : { + "VIDEO_IN" : "rtsp://live555_svc:554/StreetVideo1.mkv", + "boxFilterConfig" : "/config/sio-box-filter.json", + "detectionModel" : "gen7es", + "lptModel" : "gen7es", + 
"lptFilter" : "['us']", + "lptMinConfidence" : "0.5", + "sourceId" : "rtsp-stream-1", + "lptPreferAccuracyToSpeed" : "false", + "amqpHost" : "rabbitmq_svc", + "amqpPort" : "5672", + "amqpUser" : "guest", + "amqpPassword" : "guest", + "amqpExchange" : "sio", + "amqpRoutingKey" : "sio", + "amqpErrorOnFailure" : "true", + "recordTo" : "/data/media/output/video/rtsp-stream-1/", + "imageSaveDir" : "/data/media/output/image/rtsp-stream-1/", + "lptStabilizationDelay" : "10", + "useTracker" : "true", + "lptSkipCarsWithoutLPs" : "true", + "updateOnlyOnChange" : "true" + } + }, + "stream2" : { + "pipeline" : "./share/pipelines/VehicleAnalytics/VehicleAnalyticsRTSP.yaml", + "restartPolicy" : "restart", + "parameters" : { + "VIDEO_IN" : "rtsp://live555_svc:554/StreetVideo2.mkv", + "boxFilterConfig" : "/config/sio-box-filter.json", + "detectionModel" : "gen7es", + "lptModel" : "gen7es", + "lptFilter" : "['us']", + "lptMinConfidence" : "0.5", + "sourceId" : "rtsp-stream-2", + "lptPreferAccuracyToSpeed" : "false", + "amqpHost" : "rabbitmq_svc", + "amqpPort" : "5672", + "amqpUser" : "guest", + "amqpPassword" : "guest", + "amqpExchange" : "sio", + "amqpRoutingKey" : "sio", + "amqpErrorOnFailure" : "true", + "recordTo" : "/data/media/output/video/rtsp-stream-2/", + "imageSaveDir" : "/data/media/output/image/rtsp-stream-2/", + "lptStabilizationDelay" : "10", + "useTracker" : "true", + "lptSkipCarsWithoutLPs" : "true", + "updateOnlyOnChange" : "true" + } + }, + "folderWatchUS" : { + "pipeline" : "./share/pipelines/VehicleAnalytics/VehicleAnalyticsFolderWatch.yaml", + "restartPolicy" : "restart", + "parameters" : { + "boxFilterConfig" : "/config/sio-box-filter.json", + "folderPath" : "/data/folder-watch-input/us", + "folderRemoveSourceFiles" : "true", + "folderPollAgeMin" : "0", + "folderPollInterval" : "100", + "folderPollExtensions" : "[ 'jpeg', 'webp', 'bmp', 'jpg', 'png', 'mp4', 'mkv', 'ts' ]", + "detectionModel" : "gen7es", + "lptFilter" : "['us']", + "mmcFilter" : "['us']", + 
"lptMinConfidence" : "0.5", + "sourceId" : "folder-watch-us", + "lptPreferAccuracyToSpeed" : "true" + } + }, + "folderWatchEU" : { + "pipeline" : "./share/pipelines/VehicleAnalytics/VehicleAnalyticsFolderWatch.yaml", + "restartPolicy" : "restart", + "parameters" : { + "boxFilterConfig" : "/config/sio-box-filter.json", + "folderPath" : "/data/folder-watch-input/eu", + "folderRemoveSourceFiles" : "true", + "folderPollAgeMin" : "0", + "folderPollInterval" : "100", + "folderPollExtensions" : "[ 'jpeg', 'webp', 'bmp', 'jpg', 'png', 'mp4', 'mkv', 'ts' ]", + "detectionModel" : "gen7es", + "lptFilter" : "['eu']", + "mmcFilter" : "['eu']", + "lptMinConfidence" : "0.5", + "sourceId" : "folder-watch-eu", + "lptPreferAccuracyToSpeed" : "true" + } + } + +} diff --git a/deployment-examples/ALPRDemo/consumer/Dockerfile b/deployment-examples/ALPRDemo/consumer/Dockerfile new file mode 100644 index 0000000..c6e2369 --- /dev/null +++ b/deployment-examples/ALPRDemo/consumer/Dockerfile @@ -0,0 +1,15 @@ +FROM python:3.9 + +WORKDIR /usr/src/app +COPY requirements.txt /usr/src/app/ +COPY entrypoint.sh /usr/src/app/ +COPY client.py /usr/src/app/ +COPY SIO.py /usr/src/app/ +RUN pip3 install -r requirements.txt + +ENV PYTHONPATH=${PYTHONPATH}:/usr/src/app:/usr/src/app/common:/usr/src/app/lib +ENV PYTHONUNBUFFERED=1 + + + +ENTRYPOINT [ "/bin/bash", "-c", "/usr/src/app/entrypoint.sh" ] diff --git a/deployment-examples/ALPRDemo/consumer/SIO.py b/deployment-examples/ALPRDemo/consumer/SIO.py new file mode 100644 index 0000000..b14cf20 --- /dev/null +++ b/deployment-examples/ALPRDemo/consumer/SIO.py @@ -0,0 +1,88 @@ +import traceback +import time +import os +from Database import LicensePlateDB, LicensePlate + +class SIO: + def __init__(self, mcp_client, db_conf) -> None: + self.mcp_client = mcp_client + self.db_path = db_conf["path"] + self.db = None + + # ------------------------------------------------------------------------------- + # DB connection needs to be created in same thread as the 
callback + # ------------------------------------------------------------------------------- + def initDbConnection(self): + self.db = LicensePlateDB(self.db_path) + + # ------------------------------------------------------------------------------- + # Get box object + # ------------------------------------------------------------------------------- + def getBox(self, obj): + try: + boxObj = obj["box"] + box = [ boxObj["x"], boxObj["y"], boxObj["width"], boxObj["height"] ] + except: + box = [ 0, 0, 0, 0 ] + return box + + # ------------------------------------------------------------------------------- + # Get LP attributes + # ------------------------------------------------------------------------------- + def getLPInfo(self, lps, lpKey): + lp = lps[lpKey] + lpString = lp.get("attributes", {}).get("lpString", {}).get("value", {}) + lpRegion = lp.get("attributes", {}).get("lpRegion", {}).get("value", {}) + lpBox = self.getBox(lp) + + return lpString, lpRegion, lpBox + + # ------------------------------------------------------------------------------- + # Get image ID associated with current message or None + # ------------------------------------------------------------------------------- + def getFrameImageID(self, message): + mediaEvents = message.get("mediaEvents", {}) + for event in mediaEvents: + if event.get("type", None) == "image": + return event.get("msg", None) + return None + + # ------------------------------------------------------------------------------- + def parseSIOMessage(self, message): + sourceId = message.get("sourceId", "unknown") + frameTimestamp = message.get("frameTimestamp","0") + + mc = message.get("metaClasses", {}) + lps = mc.get("licensePlates", {}) + + # Get image associated with the frame (this isn't guaranteed) + imageId = self.getFrameImageID(message) + + # Process license plates + for lpKey in lps.keys(): + lpString, lpRegion, lpBox = self.getLPInfo(lps, lpKey) + self.onLicensePlate(sourceId, lpKey, frameTimestamp, lpString, 
lpRegion, lpBox, imageId)
+
+
+    # -------------------------------------------------------------------------------
+    def onLicensePlate(self, sourceId, uid, frameTimestamp, lpString, lpRegion, lpBox, imageId):
+        frameTimestampValue = int(frameTimestamp)
+        timestamp_str = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(frameTimestampValue/1000))
+
+        lp = LicensePlate(uid, lpRegion, lpString, frameTimestamp, sourceId, lpBox[0], lpBox[1], lpBox[2], lpBox[3], imageId)
+        try:
+            self.db.add_detection(lp)
+            print(f"{timestamp_str} Got LP source={sourceId}, uid={uid} time={frameTimestamp} string={lpString} region={lpRegion} box={lpBox} imageId={imageId}")
+        except Exception:
+            print(f"{timestamp_str} Failed to add to DB: source={sourceId}, uid={uid} time={frameTimestamp} string={lpString} region={lpRegion} box={lpBox}")
+            print(f"Failed to insert/update LP: {traceback.format_exc()}")
+
+
+
+    # -------------------------------------------------------------------------------
+    def callback(self, message):
+        try:
+            self.parseSIOMessage(message)
+        except Exception as e:
+            print(f"Caught exception {e} handling callback with message {str(message)}")
+            traceback.print_exc()
diff --git a/deployment-examples/ALPRDemo/consumer/client.py b/deployment-examples/ALPRDemo/consumer/client.py
new file mode 100644
index 0000000..1f6cb37
--- /dev/null
+++ b/deployment-examples/ALPRDemo/consumer/client.py
@@ -0,0 +1,47 @@
+#!/usr/bin/env python3
+import threading
+import AMQPListener as amqp
+from MCP import MCPClient
+from SIO import SIO
+import os
+
+def get_args():
+    amqp_conf, mcp_conf, db_conf = {}, {}, {}
+    amqp_conf["host"] = os.environ.get("AMQP_HOST", "rabbitmq_svc")
+    amqp_conf["port"] = os.environ.get("AMQP_PORT", 5672)
+    amqp_conf["exchange"] = os.environ.get("AMQP_EXCHANGE", "sio")
+    amqp_conf["routing_key"] = os.environ.get("AMQP_ROUTING_KEY", "#")
+    print ("AMQP configuration:", amqp_conf)
+    mcp_conf["host"] = os.environ.get("MCP_HOST", "mcp_svc")
+    mcp_conf["port"] = os.environ.get("MCP_PORT", 9097)
+    mcp_conf["username"] = os.environ.get("MCP_USERNAME", None)
+    mcp_conf["password"] = os.environ.get("MCP_PASSWORD", None)
+    print ("MCP configuration:", mcp_conf)
+    db_conf["path"] = os.environ.get("DB_PATH", "/data/sighthound/db/lpdb.sqlite")
+    return amqp_conf, mcp_conf, db_conf
+
+
+def main():
+    # Read the required configuration from the environment
+    amqp_conf, mcp_conf, db_conf = get_args()
+    # Create an AMQP listener
+    amqp_listener = amqp.AMQPListener(amqp_conf)
+    # Create MCP Client
+    mcp_client = MCPClient(mcp_conf)
+    # Create SIO Client
+    sio_client = SIO(mcp_client, db_conf)
+    # Register the callback
+    amqp_listener.set_callback(sio_client.callback)
+    def start_amqp():
+        sio_client.initDbConnection()
+        amqp_listener.start()
+    # Start the listener on its own thread
+    amqp_thread = threading.Thread(target=start_amqp)
+    amqp_thread.start()
+    amqp_thread.join()
+    print("Exiting...")
+
+
+
+if __name__ == "__main__":
+    main()
\ No newline at end of file
diff --git a/deployment-examples/ALPRDemo/consumer/entrypoint.sh b/deployment-examples/ALPRDemo/consumer/entrypoint.sh
new file mode 100755
index 0000000..17af0ac
--- /dev/null
+++ b/deployment-examples/ALPRDemo/consumer/entrypoint.sh
@@ -0,0 +1,3 @@
+ls -la /usr/src/app/
+
+python3 /usr/src/app/client.py
\ No newline at end of file
diff --git a/deployment-examples/ALPRDemo/consumer/requirements.txt b/deployment-examples/ALPRDemo/consumer/requirements.txt
new file mode 100644
index 0000000..a4a00bb
--- /dev/null
+++ b/deployment-examples/ALPRDemo/consumer/requirements.txt
@@ -0,0 +1,4 @@
+pika >= 1.3.1
+shapely >= 1.8.5
+requests >= 2.26.0
+pillow
\ No newline at end of file
diff --git a/deployment-examples/ALPRDemo/docker-compose.yml b/deployment-examples/ALPRDemo/docker-compose.yml
new file mode 100644
index 0000000..26c56ce
--- /dev/null
+++ b/deployment-examples/ALPRDemo/docker-compose.yml
@@ -0,0 +1,159 @@
+version: "3"
+services:
+
+  # ========================= 
RTSP Server providing demo videos =========================
+  # By default pipelines.json will point to streams served by this container.
+  # If you point to your own cameras or streams, this container has no other
+  # function and can be disabled.
+  live555_svc:
+    image: us-central1-docker.pkg.dev/ext-edge-analytics/docker/live555:2.0.4-examples
+    container_name: sample-live555
+    restart: unless-stopped
+    ports:
+      - "8554:554"
+    volumes:
+      # A local folder we can drop videos into and have them referenced via RTSP URI.
+      # Keep in mind, the videos need to be H264 MKV to be properly served by the server
+      - ${DEMO_VIDEO-./videos}:/mnt/data
+    networks:
+      core_sighthound:
+        aliases:
+          - live555_svc
+
+
+  # ========================= SIO ALPR Analytics =========================
+  analytics_svc:
+    image: us-central1-docker.pkg.dev/ext-edge-analytics/docker/sio:${SIO_RELEASE-r240909}${SIO_DOCKER_TAG_VARIANT}
+    restart: unless-stopped
+    environment:
+      # Location where SIO will place generated model engine files
+      - SIO_DATA_DIR=/data/.sio
+      # We need this to see output from Python extension module
+      - PYTHONUNBUFFERED=1
+    # Container runtime defaults to `runc` if SIO_DOCKER_RUNTIME not set. Use `nvidia` if GPU is installed.
+    runtime: ${SIO_DOCKER_RUNTIME-runc}
+    volumes:
+      # Read-only shared folder for data exchange with host / other containers.
+      # We'll use it for license, config files, etc.
+      - ./config:/config:ro
+      # Writable shared folder for data exchange with host
+      # We'll use it for storing the generated model files, data exchange folder, etc.
+      - ${HOME-./data}/alprdemo:/data
+    entrypoint:
+      - /sighthound/sio/bin/runPipelineSet
+      # Pipeline configuration file
+      - /config/sio-pipelines.json
+      # License at the path accessible in the container
+      - --license-path=/config/sighthound-license.json
+      # Log level (info, debug, trace)
+      - --log=${SIO_LOG_LEVEL-info}
+    depends_on:
+      # This dependency can be removed with live555 if no longer necessary.
+ - live555_svc + # This dependency can be removed if the aggregation component is changed to an external broker + - rabbitmq_svc + # We'd want the consumer of the media be up before it's being generated + - mcp_svc + networks: + core_sighthound: + aliases: + - analytics_svc + + + # ========================= AMQP Broker ========================= + rabbitmq_svc: + container_name: rabbitmq_svc + image: docker.io/rabbitmq:3.8-management + hostname: rabbitmq_svc + restart: unless-stopped + volumes: + - ./config/rabbitmq-definitions.json:/etc/rabbitmq/definitions.json + - ./config/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf + - rabbitmq_persistent_storage:/var/lib/rabbitmq + ports: + - "5672:5672" + - "15672:15672" + healthcheck: + test: rabbitmq-diagnostics check_port_connectivity + interval: 3s + timeout: 30s + retries: 3 + networks: + core_sighthound: + aliases: + - rabbitmq_svc + + + # ========================= Media Control Point ========================= + mcp_svc: + container_name: mcp_svc + image: us-central1-docker.pkg.dev/ext-edge-analytics/docker/mcp:${MCP_DOCKER_TAG-1.4.3} + restart: unless-stopped + mem_limit: 1G + mem_reservation: 512M + volumes: + - ${HOME-./data}/alprdemo/media:/data/sighthound/media:rw + - ${HOME-./data}/alprdemo/logs/mcp:/data/sighthound/logs/mcp:rw + - ${HOME-./data}/alprdemo/mcpdb:/data/sighthound/db:rw + - ./config/mcp.yml:/etc/mcpd/default.json:ro + ports: + - "9097:9097" + depends_on: + # This dependency can be removed if the aggregation component is changed to an external broker + - rabbitmq_svc + networks: + core_sighthound: + aliases: + - mcp_svc + + # ========================= Data Consumer and Persistence ========================= + dbclient_svc: + build: + context: consumer + dockerfile: Dockerfile + restart: unless-stopped + volumes: + - ../ClientLib/lib:/usr/src/app/lib:ro + - ./common:/usr/src/app/common:ro + - ${HOME-./data}/alprdemo/alprdb:/data/sighthound/db:rw + depends_on: + # This dependency can be removed with 
live555 if no longer necessary. + - live555_svc + # This dependency can be removed if the aggregation component is changed to an external broker + - rabbitmq_svc + # We'd want the consumer of the media be up before it's being generated + - mcp_svc + networks: + core_sighthound: + aliases: + - dbclient_svc + + # ========================= REST API =============================================== + rest_svc: + build: + context: backend + dockerfile: Dockerfile + restart: unless-stopped + environment: + - REST_PORT=8888 + ports: + - "8888:8888" + volumes: + - ../ClientLib/lib:/usr/src/app/lib:ro + - ./common:/usr/src/app/common:ro + - ${HOME-./data}/alprdemo/alprdb:/data/sighthound/db:rw + - ${HOME-./data}/alprdemo/folder-watch-input:/data/folder-watch-input:rw + depends_on: + # We'd want the consumer of the media be up before it's being generated + - mcp_svc + networks: + - core_sighthound + +# ========================= Common components Broker ========================= +networks: + core_sighthound: + external: false + driver: bridge + +volumes: + rabbitmq_persistent_storage: diff --git a/deployment-examples/ALPRDemo/run.sh b/deployment-examples/ALPRDemo/run.sh new file mode 100755 index 0000000..32ac3b4 --- /dev/null +++ b/deployment-examples/ALPRDemo/run.sh @@ -0,0 +1,9 @@ + +# Remove all the sample app data before running, but keep the model cache +sudo mv ~/alprdemo ~/alprdemo.old +mkdir -p ~/alprdemo +sudo mv ~/alprdemo.old/.sio ~/alprdemo/ +sudo rm -rf ~/alprdemo.old +# Run assuming nVidia runtime available. 
Running 2 live and 2 folder watch pipeline +# doesn't hold up with CPU-only inference +SIO_DOCKER_RUNTIME=nvidia docker compose up --build diff --git a/deployment-examples/ALPRDemo/ui/python/ALPRUI.py b/deployment-examples/ALPRDemo/ui/python/ALPRUI.py new file mode 100644 index 0000000..81c9f15 --- /dev/null +++ b/deployment-examples/ALPRDemo/ui/python/ALPRUI.py @@ -0,0 +1,588 @@ +import wx +import wx.aui +import wx.html2 +import requests +import urllib +import json +import io +import os +import argparse +import threading +import time +import subprocess +import cv2 +from datetime import datetime +from SIOParser import SIOParser + +def epoch_to_offset(epoch_timestamp): + return datetime.utcfromtimestamp(epoch_timestamp/1000).strftime('%H:%M:%S') + +# Function to convert epoch timestamp to string +def epoch_to_string(epoch_timestamp): + # Convert epoch timestamp to datetime object + dt = datetime.fromtimestamp(epoch_timestamp/1000) + + # Format the datetime object to a string + # Change the format string to suit your needs + return dt.strftime('%Y-%m-%d %H:%M:%S') + +class SettingsDialog(wx.Dialog): + def __init__(self, parent, settings): + super(SettingsDialog, self).__init__(parent, title="Settings") + + self.settings = settings + + sizer = wx.BoxSizer(wx.VERTICAL) + + self.ip_label = wx.StaticText(self, label="API IP:") + self.ip_text = wx.TextCtrl(self, value=self.settings["api_ip"]) + + self.port_label = wx.StaticText(self, label="API Port:") + self.port_text = wx.TextCtrl(self, value=self.settings["api_port"]) + + self.refresh_rate_label = wx.StaticText(self, label="Refresh Rate (seconds):") + self.refresh_rate_text = wx.TextCtrl(self, value=str(self.settings["refresh_rate"])) + + self.max_entries_label = wx.StaticText(self, label="Max Entries:") + self.max_entries_text = wx.TextCtrl(self, value=str(self.settings["max_entries"])) + + hbox = wx.BoxSizer(wx.HORIZONTAL) + self.save_button = wx.Button(self, label="Save") + self.cancel_button = wx.Button(self, 
label="Cancel") + + sizer.Add(self.ip_label, 0, wx.ALL, 5) + sizer.Add(self.ip_text, 0, wx.EXPAND | wx.ALL, 5) + sizer.Add(self.port_label, 0, wx.ALL, 5) + sizer.Add(self.port_text, 0, wx.EXPAND | wx.ALL, 5) + sizer.Add(self.refresh_rate_label, 0, wx.ALL, 5) + sizer.Add(self.refresh_rate_text, 0, wx.EXPAND | wx.ALL, 5) + sizer.Add(self.max_entries_label, 0, wx.ALL, 5) + sizer.Add(self.max_entries_text, 0, wx.EXPAND | wx.ALL, 5) + sizer.Add(self.save_button, 0, wx.ALIGN_CENTER | wx.ALL, 5) + sizer.Add(self.cancel_button, 0, wx.ALIGN_CENTER | wx.ALL, 5) + + + self.save_button.Bind(wx.EVT_BUTTON, self.onSave) + self.cancel_button.Bind(wx.EVT_BUTTON, self.onCancel) + + self.SetSizerAndFit(sizer) + self.Center() + + def onSave(self, event): + try: + self.settings["api_ip"] = self.ip_text.GetValue() + self.settings["api_port"] = self.port_text.GetValue() + self.settings["refresh_rate"] = int(self.refresh_rate_text.GetValue()) + self.settings["max_entries"] = int(self.max_entries_text.GetValue()) + self.EndModal(wx.ID_OK) + except ValueError: + wx.MessageBox("Please enter valid values for all fields.", "Error", wx.OK | wx.ICON_ERROR) + + def onCancel(self, event): + self.EndModal(wx.ID_CANCEL) + +class MainFrame(wx.Frame): + # ========================================================================= + def __init__(self, *args, **kw): + super(MainFrame, self).__init__(*args, **kw) + + self.settings = { + "api_ip": "10.10.10.20", + "api_port": "8888", + "refresh_rate": 10, + "max_entries": 50 + } + + self.data = None + + self.initUI() + self.Center() + self.startAutoRefresh() + self.BringToFront() # Bring the frame to the front + + # ========================================================================= + def initUI(self): + self.panel = wx.Panel(self) + + # Create the menu bar + menu_bar = wx.MenuBar() + file_menu = wx.Menu() + settings_item = file_menu.Append(wx.ID_ANY, "&Settings", "Open settings dialog") + exit_item = file_menu.Append(wx.ID_EXIT, "&Exit", "Exit 
application") + menu_bar.Append(file_menu, "&File") + self.SetMenuBar(menu_bar) + + # Bind menu events + self.Bind(wx.EVT_MENU, self.onSettings, settings_item) + self.Bind(wx.EVT_MENU, self.onExit, exit_item) + + # Create the notebook with tabs + self.notebook = wx.Notebook(self.panel) + self.current_tab = wx.Panel(self.notebook) + self.search_tab = wx.Panel(self.notebook) + self.file_tab = wx.Panel(self.notebook) + + self.notebook.AddPage(self.current_tab, "Live Feed") + self.notebook.AddPage(self.search_tab, "Search Live Feed") + self.notebook.AddPage(self.file_tab, "File Upload") + self.notebook.Bind(wx.EVT_NOTEBOOK_PAGE_CHANGED, self.onTabChanged) + + # Layout for the Current tab + current_sizer = wx.BoxSizer(wx.VERTICAL) + self.refresh_button = wx.Button(self.current_tab, label="Refresh") + current_sizer.Add(self.refresh_button, 0, wx.ALIGN_CENTER | wx.ALL, 5) + self.current_tab.SetSizer(current_sizer) + + # Layout for the Search tab + search_sizer = wx.BoxSizer(wx.VERTICAL) + self.wildcard_label = wx.StaticText(self.search_tab, label="Search value:") + self.wildcard_text = wx.TextCtrl(self.search_tab) + self.date_range_label = wx.StaticText(self.search_tab, label="Date Range (YYYYMMDD or YYYYMMDD-HHMM):") + self.date_range_start = wx.TextCtrl(self.search_tab) + self.date_range_end = wx.TextCtrl(self.search_tab) + self.search_button = wx.Button(self.search_tab, label="Search") + search_sizer.Add(self.wildcard_label, 0, wx.ALL, 5) + search_sizer.Add(self.wildcard_text, 0, wx.EXPAND | wx.ALL, 5) + search_sizer.Add(self.date_range_label, 0, wx.ALL, 5) + search_sizer.Add(self.date_range_start, 0, wx.EXPAND | wx.ALL, 5) + search_sizer.Add(self.date_range_end, 0, wx.EXPAND | wx.ALL, 5) + search_sizer.Add(self.search_button, 0, wx.ALIGN_CENTER | wx.ALL, 5) + self.search_tab.SetSizer(search_sizer) + + + # Layout for the File tab + file_sizer = wx.BoxSizer(wx.VERTICAL) + self.uploaded_files_list = wx.ListCtrl(self.file_tab, style=wx.LC_REPORT) + 
self.uploaded_files_list.InsertColumn(0, 'File Path') + self.uploaded_files_list.InsertColumn(1, 'File ID') + self.uploaded_files_list.InsertColumn(2, 'Status') + self.uploaded_files_list.Bind(wx.EVT_LIST_ITEM_SELECTED, self.onFileListItemSelected) + self.upload_button = wx.Button(self.file_tab, label="Upload ...") + self.refresh_file_button = wx.Button(self.file_tab, label="Refresh") + file_sizer.Add(self.uploaded_files_list, 0, wx.EXPAND | wx.ALL, 5) + file_button_sizer = wx.BoxSizer(wx.HORIZONTAL) + file_button_sizer.Add(self.upload_button, 0, wx.ALIGN_LEFT | wx.ALL, 5) + file_button_sizer.Add(self.refresh_file_button, 0, wx.ALIGN_LEFT | wx.ALL, 5) + file_sizer.Add(file_button_sizer) + self.file_tab.SetSizer(file_sizer) + self.uploaded_files_results = {} + + + self.list_box = self.initListCtrl(self.panel) + self.image_ctrl = wx.StaticBitmap(self.panel) + self.lp_ctrl = wx.StaticBitmap(self.panel, size=(100,60)) + + # Layout for the main panel + main_sizer = wx.BoxSizer(wx.VERTICAL) + main_sizer.Add(self.notebook, 1, wx.EXPAND) + main_sizer.Add(self.list_box, 1, wx.EXPAND | wx.ALL, 5) + main_sizer.Add(self.image_ctrl, 1, wx.EXPAND | wx.ALL, 5) + main_sizer.Add(self.lp_ctrl, 1, wx.ALIGN_CENTER | wx.ALL, 5) + self.panel.SetSizer(main_sizer) + + # Bind events + self.Bind(wx.EVT_BUTTON, self.onRefreshFile, self.refresh_file_button) + self.Bind(wx.EVT_BUTTON, self.onRefreshCurrent, self.refresh_button) + self.Bind(wx.EVT_BUTTON, self.onSearch, self.search_button) + self.Bind(wx.EVT_BUTTON, self.onUpload, self.upload_button) + + + # ========================================================================= + def onUpload(self,event): + with wx.FileDialog(self.panel, "Choose a file to upload", wildcard="*.*", + style=wx.FD_OPEN | wx.FD_FILE_MUST_EXIST) as file_dialog: + + if file_dialog.ShowModal() == wx.ID_CANCEL: + return # User cancelled the operation + + # Get the selected file path + filepath = file_dialog.GetPath() + + # Only upload each file once + item_count = 
self.uploaded_files_list.GetItemCount() + for i in range(item_count): + item_text = self.uploaded_files_list.GetItemText(i) + if item_text == filepath: + wx.MessageBox(f"File {filepath} already has been uploaded", "Error", wx.OK | wx.ICON_ERROR) + return + + # Perform the file upload (here you can adjust the URL and parameters) + try: + with open(filepath, 'rb') as f: + files = {'file': (os.path.basename(filepath), f)} + uri = f"{self.apiRoot()}/folderwatch/upload/us" + + response = requests.post(uri, files=files) + + # Check response status + if response.status_code != 200: + wx.MessageBox(f"An error occurred: {response.reason}", "Error", wx.OK | wx.ICON_ERROR) + return + + id = response.json().get('id', None) + if id is None: + wx.MessageBox(f"An error occurred: {response.reason}", "Error", wx.OK | wx.ICON_ERROR) + return + + idx = self.uploaded_files_list.GetItemCount() + self.uploaded_files_list.InsertItem(idx, filepath) + self.uploaded_files_list.SetItem(idx, 1, id) + self.uploaded_files_list.SetItem(idx, 2, "Uploaded") + + except Exception as e: + wx.MessageBox(f"An error occurred: {str(e)}", "Error", wx.OK | wx.ICON_ERROR) + + + # ========================================================================= + def onTabChanged(self,event): + self.clearSharedUIState() + + # ========================================================================= + def clearSharedUIState(self): + self.list_box.DeleteAllItems() + self.clearImage(self.image_ctrl) + self.clearImage(self.lp_ctrl) + self.panel.Layout() + + # ========================================================================= + def initListCtrl(self,parent): + ctrl = wx.ListCtrl(parent, style=wx.LC_REPORT) + ctrl.InsertColumn(0, 'Time') + ctrl.InsertColumn(1, 'Plate/State') + ctrl.InsertColumn(2, 'Source') + ctrl.InsertColumn(3, 'UID') + # Bind the single-click event + ctrl.Bind(wx.EVT_LIST_ITEM_SELECTED, self.onListItemSelected) + return ctrl + + # 
========================================================================= + def clearImage(self, ctrl): + size = ctrl.GetSize() + empty_image = wx.Image(*size) + empty_bitmap = wx.Bitmap(empty_image) + ctrl.SetBitmap(empty_bitmap) + + # ========================================================================= + def renderImage(self, ctrl, srcImage): + # Get the size of the static bitmap control + size = ctrl.GetSize() + # Calculate the aspect ratio of the original image + aspectRatio = float(srcImage.GetWidth()) / float(srcImage.GetHeight()) + # Calculate the new width and height to fit the image within the available space + newWidth = min(size[0], int(size[1] * aspectRatio)) + newHeight = min(size[1], int(newWidth / aspectRatio)) + # Scale the image to fit the size of the control + renderedImage = srcImage.Scale(newWidth, newHeight, wx.IMAGE_QUALITY_HIGH) + # Convert wx.Image to wx.Bitmap + bitmap = wx.Bitmap(renderedImage) + + # Set the wx.StaticBitmap with the loaded image + ctrl.SetBitmap(bitmap) + ctrl.Refresh() + self.panel.Layout() + + # ========================================================================= + def updateImage(self, content, type, box): + # + # Handle rendered image resize + # + + + # Check the Content-Type and initialize wx.Image accordingly + if 'image/jpeg' in type: + t = wx.BITMAP_TYPE_JPEG + elif 'image/png' in type: + t = wx.BITMAP_TYPE_PNG + elif 'image/gif' in type: + t = wx.BITMAP_TYPE_GIF + else: + print(f"CT is {type}") + t = wx.BITMAP_TYPE_JPEG + + srcImage = wx.Image(io.BytesIO(content), t) + self.renderImage(self.image_ctrl, srcImage) + + crop = srcImage.GetSubImage(box) + self.renderImage(self.lp_ctrl, crop) + + + # ========================================================================= + def onFileListItemSelected(self,event): + self.clearSharedUIState() + + list = event.GetEventObject() + index = event.GetIndex() + filename = self.uploaded_files_list.GetItemText(index) + msg = self.uploaded_files_results.get(filename, 
None) + if not msg is None: + parser = SIOParser() + parser.parseSIOResult(msg) + self.data = parser.getLPs() + self.populateListWithData(self.data, self.list_box, True) + + # ========================================================================= + def onListItemSelected(self,event): + index = event.GetIndex() + + if self.notebook.GetSelection() == 2: + self.populateLocalFrame(index) + else: + self.populateRemoteFrame(index) + + # ========================================================================= + def populateLocalFrame(self, index): + index = self.uploaded_files_list.GetFirstSelected() + if index < 0: + return + filename = self.uploaded_files_list.GetItemText(index) + + lpIdx = self.list_box.GetFirstSelected() + lp = self.data[lpIdx] + offset = lp['frameid'] + box = lp['box'] + + print(f"populateLocalFrame: lp={lp}") + frame = self.getFrame(filename, offset) + if not frame: + return + + self.renderImage(self.image_ctrl, frame) + + crop = frame.GetSubImage(box) + self.renderImage(self.lp_ctrl, crop) + + + + # ========================================================================= + def getFrame(self, filename, frame_pos): + """ + Extracts a frame from a video file at the specified offset in seconds. + + :param filename: The path to the video file. + :param offset_seconds: The time offset in seconds where the frame is extracted. + :return: The extracted frame as an image, or None if it fails. 
+ """ + + # Open the video file + video_capture = cv2.VideoCapture(filename) + + # Get frames per second (FPS) to calculate frame position + ext = os.path.splitext(filename)[1] + + # Set the video capture to the specific frame position + video_capture.set(cv2.CAP_PROP_POS_FRAMES, frame_pos) + + # Read the frame + success, npframe = video_capture.read() + + # Release the video capture object + video_capture.release() + + if success: + # Convert BGR to RGB if using OpenCV + if len(npframe.shape) == 3 and npframe.shape[2] == 3: + npframe = cv2.cvtColor(npframe, cv2.COLOR_BGR2RGB) + + height, width = npframe.shape[:2] + + # Convert the numpy array to a wx.Image + wx_image = wx.Image(width, height, npframe) + return wx_image + else: + print(f"Error: Could not read the frame at the specified time {offset_seconds} and fps {fps}.") + return None + + + # ========================================================================= + def populateRemoteFrame(self, index): + id = self.data[index]["imageId"] + src = self.data[index]["sourceId"] + box = self.data[index]["rect"] + + uri = f"{self.apiRoot()}/plates/image/{src}" + try: + response = requests.get(uri,params={'id':id}) + if response.status_code != 200: + raise Exception(f"Error {response.status_code} retrieving latest results from {uri}") + self.updateImage(response.content, response.headers['Content-Type'], box) + except Exception as e: + print(f"Error updating image contents: {e}") + self.clearImage(self.image_ctrl) + self.clearImage(self.lp_ctrl) + + # ========================================================================= + def onSettings(self, event): + dlg = SettingsDialog(self, self.settings) + if dlg.ShowModal() == wx.ID_OK: + self.updateSettings() + dlg.Destroy() + + # ========================================================================= + def onExit(self, event): + self.stopAutoRefresh() + self.Destroy() + + # ========================================================================= + def 
updateSettings(self): + self.stopAutoRefresh() + self.startAutoRefresh() + + # ========================================================================= + def startAutoRefresh(self): + self.auto_refresh = True + self.refresh_thread = threading.Thread(target=self.autoRefresh) + self.refresh_thread.start() + + # ========================================================================= + def stopAutoRefresh(self): + self.auto_refresh = False + self.refresh_thread.join() + + # ========================================================================= + def autoRefresh(self): + while self.auto_refresh: + if self.notebook.GetSelection() == 0: + wx.CallAfter(self.updateCurrentTab) + elif self.notebook.GetSelection() == 2: + wx.CallAfter(self.updateFileTab) + time.sleep(self.settings["refresh_rate"]) + + # ========================================================================= + def populateListWithData(self,data,ctrl,isoffset): + ctrl.DeleteAllItems() + index = 0 + for entry in data: + # {'oid': 'rtsp-stream-2-lp-298-1724884952043', 'rect': [203, 279, 58, 39], 'region': 'Florida', 'sourceId': 'rtsp-stream-2', 'string': 'HPP8X', 'time': 1724885848430}, + if isoffset: + dt = epoch_to_offset(int(entry['time'])) + else: + dt = epoch_to_string(int(entry['time'])) + ctrl.InsertItem(index, f"{dt}") + ctrl.SetItem(index, 1, f"{entry['string']}/{entry['region']}") + ctrl.SetItem(index, 2, f"{entry['sourceId']}") + ctrl.SetItem(index, 3, f"{entry['oid']}") + index = index + 1 + + # ========================================================================= + def apiRoot(self): + return f"http://{self.settings['api_ip']}:{self.settings['api_port']}" + + # ========================================================================= + def updateFileTab(self): + item_count = self.uploaded_files_list.GetItemCount() + for i in range(item_count): + item_file = self.uploaded_files_list.GetItemText(i) + item_id = self.uploaded_files_list.GetItemText(i,1) + item_status = 
self.uploaded_files_list.GetItemText(i,2) + + # if item_status in [ "error", "completed" ]: + # continue + + if not os.path.isfile(item_file): + self.uploaded_files_list.SetItem(i,2,"missing") + continue + + uri = f"{self.apiRoot()}/folderwatch/status/{item_id}" + + response = requests.get(uri) + + # Check response status + if response.status_code != 200: + self.uploaded_files_list.SetItem(i,2,"error") + continue + + j = response.json() + status = j.get("status", "error") + self.uploaded_files_list.SetItem(i,2,status) + + if status == "completed": + result = j.get("result","") + self.uploaded_files_results[item_file] = result + + + # ========================================================================= + def updateCurrentTab(self): + try: + response = requests.get(f"{self.apiRoot()}/plates/latest/{self.settings['max_entries']}") + if response.status_code != 200: + raise Exception(f"Error {response.status_code} retrieving latest results") + return + self.data = response.json() + self.populateListWithData(self.data, self.list_box, False) + except Exception as e: + print(f"Error updating current tab: {e}") + + # ========================================================================= + def onRefreshCurrent(self, event): + if self.notebook.GetSelection() == 0: + self.updateCurrentTab() + + # ========================================================================= + def onRefreshFile(self, event): + if self.notebook.GetSelection() == 2: + self.updateFileTab() + + # ========================================================================= + def validateDateTime(self,ctrl,name): + dt = ctrl.GetValue() + if len(dt): + dtTuple = dt.split('-') + if len(dtTuple) == 1 and len(dtTuple[0]) == 8: + dt = dtTuple[0] + "/0000" + elif len(dtTuple) == 2 and len(dtTuple[0]) == 8 and len(dtTuple[1]) == 4: + dt = dtTuple[0] + "/" + dtTuple[1] + else: + raise Exception(f"Please use YYYYMMDD-HHMM or YYYYMMDD for {name} date") + return dt + + # 
========================================================================= + def onSearch(self, event): + # Implement search functionality here + searchTerm = self.wildcard_text.GetValue() + if (len(searchTerm)): + verb = "search" + params={'plate':searchTerm} + else: + verb = "bytimeanddate" + params = {} + try: + start = self.validateDateTime(self.date_range_start,"start") + end = self.validateDateTime(self.date_range_end,"end") + except Exception as e: + wx.MessageBox(str(e), "Error", wx.OK | wx.ICON_ERROR) + return + uri = f"{self.apiRoot()}/plates/{verb}" + if start: + uri = uri + f"/{start}" + if end: + uri = uri + f"/{end}" + try: + response = requests.get(uri, params=params) + if response.status_code != 200: + raise Exception(f"Error {response.status_code} retrieving search results") + self.data = response.json() + self.populateListWithData(self.data, self.list_box, False) + except Exception as e: + wx.MessageBox(f"Error updating search tab: {e}", "Error", wx.OK | wx.ICON_ERROR) + + # ========================================================================= + def onClose(self, event): + self.stopAutoRefresh() + self.Destroy() + + # ========================================================================= + # Bring the app to front + def BringToFront(self): + if wx.Platform == "__WXMAC__": + try: + script = 'tell application "System Events" to set frontmost of process "Python" to true' + subprocess.call(['osascript', '-e', script]) + except Exception as e: + print("Error bringing window to front:", e) + + +app = wx.App(False) +frame = MainFrame(None, title="ALPR Demo", size=(800, 600)) +frame.Bind(wx.EVT_CLOSE, frame.onClose) +frame.Show() +app.MainLoop() diff --git a/deployment-examples/ALPRDemo/ui/python/SIOParser.py b/deployment-examples/ALPRDemo/ui/python/SIOParser.py new file mode 100644 index 0000000..d42034f --- /dev/null +++ b/deployment-examples/ALPRDemo/ui/python/SIOParser.py @@ -0,0 +1,64 @@ +import traceback +import time +import os + +class 
SIOParser: + def __init__(self) -> None: + self.lps = {} + + # ------------------------------------------------------------------------------- + # Get box object + # ------------------------------------------------------------------------------- + def getBox(self, obj): + try: + boxObj = obj["box"] + box = [ boxObj["x"], boxObj["y"], boxObj["width"], boxObj["height"] ] + except: + box = [ 0, 0, 0, 0 ] + return box + + # ------------------------------------------------------------------------------- + # Get LP attributes + # ------------------------------------------------------------------------------- + def getLPInfo(self, lps, lpKey): + lp = lps[lpKey] + lpString = lp.get("attributes", {}).get("lpString", {}).get("value", {}) + lpRegion = lp.get("attributes", {}).get("lpRegion", {}).get("value", {}) + lpBox = self.getBox(lp) + + return lpString, lpRegion, lpBox + + # ------------------------------------------------------------------------------- + def parseSIOResult(self, result): + frames = sorted([int(key) for key in result]) + for frame in frames: + message = result[str(frame)] + sourceId = message.get("sourceId", "unknown") + frameTimestamp = message.get("frameTimestamp","0") + + mc = message.get("metaClasses", {}) + lps = mc.get("licensePlates", {}) + + + # Process license plates + for lpKey in lps.keys(): + lpString, lpRegion, lpBox = self.getLPInfo(lps, lpKey) + if lpKey in self.lps: + if self.lps[lpKey][3] == lpString and self.lps[lpKey][2] == lpRegion: + # Nothing changed about the plate (but the box may have gotten worse, so keep an earlier one) + continue + self.onLicensePlate(sourceId, frame, lpKey, frameTimestamp, lpString, lpRegion, lpBox) + + + # ------------------------------------------------------------------------------- + def onLicensePlate(self, sourceid, frameid, uid, frameTimestamp, lpString, lpRegion, lpBox): + frameTimestampValue = int(frameTimestamp) + self.lps[uid] = ( sourceid, frameid, lpRegion, lpString, frameTimestamp, lpBox ) + + # 
------------------------------------------------------------------------------- + def getLPs(self): + res = [] + for k in self.lps: + lp = self.lps[k] + res.append( { 'time': lp[4], 'string' : lp[3], 'region' : lp[2], 'sourceId' : lp[0], 'box' : lp[5], 'oid' : k, 'frameid' : lp[1] } ) + return res diff --git a/deployment-examples/ALPRDemo/ui/python/requirements.txt b/deployment-examples/ALPRDemo/ui/python/requirements.txt new file mode 100644 index 0000000..ba09c1b --- /dev/null +++ b/deployment-examples/ALPRDemo/ui/python/requirements.txt @@ -0,0 +1,3 @@ +opencv-python +wxpython +numpy \ No newline at end of file diff --git a/deployment-examples/ClientLib/lib/MCP.py b/deployment-examples/ClientLib/lib/MCP.py index 33672ae..fe9871a 100644 --- a/deployment-examples/ClientLib/lib/MCP.py +++ b/deployment-examples/ClientLib/lib/MCP.py @@ -37,7 +37,7 @@ def get_stats(self, source_id): return self.get(url).json() # curl mcp:9097/hlsfs/source//image/ - def get_image(self, source_id, image): + def get_image(self, source_id, image, format="numpy"): url = f"http://{self.host}:{self.port}/hlsfs/source/{source_id}/image/{image}" response = self.get(url) @@ -48,6 +48,8 @@ def get_image(self, source_id, image): raise Exception("Error downloading image", response.status_code) else: # Convert image to numpy array + if format == "source": + return response.content img = Image.open(BytesIO(response.content)) arr = np.array(img) return arr @@ -149,7 +151,7 @@ def get_m3u8(self, source_id, start, end): raise Exception("Error downloading HLS:", url, ":", response.status_code) else: return response.text - + # curl mcp:9097/hlsfs/source//...m3u8 def get_m3u8_playlist(self, source_id, start, end): import m3u8 diff --git a/docs/schemas/anypipe/anypipe.html b/docs/schemas/anypipe/anypipe.html index ca56e51..8229510 100644 --- a/docs/schemas/anypipe/anypipe.html +++ b/docs/schemas/anypipe/anypipe.html @@ -1 +1 @@ - Sighthound Analytics

Sighthound Analytics

Type: object

Analytics data sent by the Sighthound video/image analysis pipeline. This data is sent based on configuration when the number of detected objects or attributes of detected objects changes, the confidence of detected objects or their attributes improves, or a configurable timeout occurs.

No Additional Properties

Type: object

Type: integer

Timestamp at which the frame corresponding to this analytics data was processed, in milliseconds since the epoch and GMT timezone.

Value must be greater or equal to 0

Type: string

A globally unique ID representing the media source, for
instance a specific video stream from a camera sensor or RTSP feed, or the input source location for images or videos.

Type: string

An ID corresponding to this frame, which may be used to
access the image corresponding to all box coordinates and object
detections represented in this object, via the Media Service API.
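
A `frameId` can be resolved to its image through the Media Service. As a sketch, the URL shape below mirrors the `get_image` helper added to `deployment-examples/ClientLib/lib/MCP.py` in this release; the host, the port (9097, the MCP port in this demo's compose file), and the IDs are illustrative:

```python
# Sketch: build the MCP image URL for a given sourceId/frameId,
# following the endpoint used by ClientLib/lib/MCP.py get_image().
# Host, port, and ID values below are examples, not fixed values.
def mcp_image_url(host, port, source_id, frame_id):
    return f"http://{host}:{port}/hlsfs/source/{source_id}/image/{frame_id}"

url = mcp_image_url("localhost", 9097, "rtsp-stream-1", "frame-0001.jpg")
print(url)
```

The returned bytes can then be fetched with any HTTP client (the demo's `MCPClient` wraps this, returning either a numpy array or the raw source bytes).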

Type: object

The dimensions (width and height) of the frame represented by frameId. Also used as the coordinate base for all bounding box coordinates.

Type: number

Width in pixels

Value must be greater or equal to 0

Type: number

Height in pixels

Value must be greater or equal to 0

Type: integer

Timestamp of the frame corresponding to this analytics data, according to the source, in milliseconds since the epoch and GMT timezone.

Value must be greater or equal to 0

Type: string

Type: object

Meta classes include objects such as vehicles, license plates, and people. These are high-level classifications.

All properties whose name matches the following regular expression must respect the following conditions

Property name regular expression: ^.*$
Type: object

A plural MetaClass name. Supported MetaClasses
include:
vehicles - Objects including cars, buses, trucks, and motorbikes.
Vehicles are objects which may potentially carry license
plates, and may include links to licensePlates.
licensePlates - Objects which are detected/classified as license plates.
people - Pedestrians or people riding skateboards, electric
scooters, wheelchairs, etc.
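
As a sketch, walking the metaClasses map to collect license plates might look like the following (the message literal is illustrative; the keys follow this schema and the access pattern mirrors `SIOParser.getLPInfo` in the demo UI):

```python
# Illustrative analytics fragment following the schema; values are made up.
message = {
    "sourceId": "rtsp-stream-1",
    "metaClasses": {
        "licensePlates": {
            "lp-1": {
                "attributes": {
                    "lpString": {"value": "HPP8X"},
                    "lpRegion": {"value": "Florida"},
                },
                "box": {"x": 203, "y": 279, "width": 58, "height": 39},
            }
        }
    },
}

# Collect (uid, plate text, region) tuples from the licensePlates metaclass.
plates = []
for uid, obj in message.get("metaClasses", {}).get("licensePlates", {}).items():
    attrs = obj.get("attributes", {})
    plates.append((uid,
                   attrs.get("lpString", {}).get("value"),
                   attrs.get("lpRegion", {}).get("value")))
print(plates)
```

The same pattern generalizes to the vehicles and people metaclasses, since property names under each metaclass are unconstrained object IDs.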

All properties whose name matches the following regular expression must respect the following conditions

Property name regular expression: ^.*$
Type: object

A unique ID representing this object, used to map
additional object properties. This ID is guaranteed unique
for each object, regardless of streamId. It will change when the object drops out of
detection/tracking.

Type: integer

The analyticsTimestamp with the highest confidence score for this object.

Value must be greater or equal to 0

Type: string

Object-specific class returned by the model. For objects of the vehicles metaclass this may include car, truck, bus, motorbike, etc., based on model capabilities.

Type: object

All properties whose name matches the following regular expression must respect the following conditions

Property name regular expression: ^.*$
Type: object

A map of attributes for this object. Not all attributes are supported for all object types. Example attributes include:
color - The color of an object
lpString - A string representing license plate text
and numbers
lpRegion - A string representing the license plate region
vehicleType - Make, model, and generation of the vehicle in a single string

No Additional Properties

Type: number

Confidence score for attribute detection, ranging from 0.0 to 1.0. A score of 1.0 indicates 100% confidence.

Value must be greater or equal to 0 and lesser or equal to 1

Type: number

Confidence score for object detection, ranging from 0.0 to 1.0. A score of 1.0 indicates 100% confidence.When included in an attribute, this score represents the
object Detection score for the parent object corresponding to the
timestamp when the attribute value was determined.

Value must be greater or equal to 0 and lesser or equal to 1

Type: boolean

Flag to indicate if the attribute is updated. True means updated, False means not updated.


A value of the attribute. The value is specific to the attribute type.

Type: object

Information about the detected vehicle, including its make, model, and generation.

Type: string

The manufacturer of the detected vehicle, e.g., 'Toyota'.

Type: string

The specific model of the detected vehicle, e.g., 'Camry'.

Type: string

The generation or variant of the detected vehicle, e.g., '2020'.

Type: string

The category to which the detected vehicle belongs, e.g., 'Sedan'.

Additional Properties of any type are allowed.

Type: object

Type: object

Debug information, subject to change
between releases. Do not use this object in an
application.

All properties whose name matches the following regular expression must respect the following conditions

Property name regular expression: ^.*$
Type: string

Type: string

An object hash which uniquely identifies this object and associated attributes. Will change when attributes change. Reserved for future use

Type: object

The bounding box containing this object, in
pixel coordinates where the top left corner of the
image is represented by pixel 0,0, corresponding to the image referenced by imageRef

No Additional Properties

Type: integer

Height of the bounding box in pixels

Value must be greater or equal to 0

Type: integer

Width of the bounding box in pixels

Value must be greater or equal to 0

Type: integer

X coordinate of the top left corner
of the bounding box.

Value must be greater or equal to 0

Type: integer

Y coordinate of the top left corner of
the bounding box

Value must be greater or equal to 0

Type: number

Confidence score for object detection, ranging from 0.0 to 1.0. A score of 1.0 indicates 100% confidence.When included in an attribute, this score represents the
object Detection score for the parent object corresponding to the
timestamp when the attribute value was determined.

Same definition as detectionScore

Type: boolean

Flag to indicate if the attribute is updated. True means updated, False means not updated.

Same definition as updated

Type: integer

The analyticsTimestamp with highest confidence score for this object.

Value must be greater or equal to 0

Type: object

A map of maps describing an event type.
- The top level map key is a name describing the event type. Supported types are presenceSensor, lineCrossingEvent, speedEvent.
- The sub level map key is a Unique ID representing the event, used to map
additional object properties. This ID is guaranteed unique
for each event for a given stream ID.

All properties whose name matches the following regular expression must respect the following conditions

Property name regular expression: ^.*$
Type: object

A name describing an event type.

All properties whose name matches the following regular expression must respect the following conditions

Property name regular expression: ^.*$
Type: array

A Unique ID representing this event

No Additional Items

Each item of this array must be:


Type: object

Describes an event where one or more objects are present in a region of interest.
The event starts when the first object enters a region of interest. Updates are sent for each change in status, with updateCount incremented for each update. When the last object exits and the region is empty, the sensor event will become immutable and will track the total amount of time at least one object was present in the region of interest. An entry of an object will start a new event and reset the updateCount to 1. Region definitons, object filtering and other items related to sensor definitions are tracked as a part of the sensorId associated with the event.

No Additional Properties

Type: string

The globally unique event ID corresponding to this event.

Type: integer

The total number of objects of a specific type detected within a region of interest, excluding those filtered out based on sensor configuration.

Value must be greater or equal to 0

Type: object

The total number of detected objects in a region grouped by metaclasses.

All properties whose name matches the following regular expression must respect the following conditions

Property name regular expression: ^.*$
Type: integer

The total number of objects detected within a region of interest grouped by metaclass. Metaclasses represent higher-level categories that objects may belong to, such as 'vehicle' or 'people,' while classes represent more specific types, such as 'car' or 'person'.

Value must be greater or equal to 0

Type: object

The total number of detected objects in a region grouped by classes.

All properties whose name matches the following regular expression must respect the following conditions

Property name regular expression: ^.*$
Type: integer

The total number of objects detected within a region of interest grouped by class. For example, if the sensor is configured to detect vehicles, this property may include counts of 'car,' 'bus,' and 'truck'.

Value must be greater or equal to 0

Type: integer

The time in milliseconds since the epoch (GMT) when the event started, or when a link was established.

Value must be greater or equal to 0

Type: integer

The cumulative number of updates sent for this sensor, starting with 1 for the initial update and incremented once for each update sent for each unique sensor event ID. An update refers to a change in the state of the sensor due to a corresponding sensor event (entry, exit, crossing, ...). For sensors which include multiple updates per sensor event (presense sensors), the updateCount will be reset to 1 to indicate the first update for a given event. For sensors (count) which only include 1 update per event, updateCount will be cumulative and count the total number of events per sensor.

Value must be greater or equal to 0

Type: integer

The time in milliseconds since the epoch (GMT) when the event ended.

Value must be greater or equal to 0

Type: object

Describes an event where one object crosses a line

No Additional Properties

Type: string

The globally unique event ID corresponding to this event.

Same definition as eventId

Type: string

The direction of an object's trajectory relative to the sensor's line, with the first point (A) as the pivot point. 'Clockwise' means the object is moving in a clockwise direction relative to the line, while 'counterclockwise' means the object is moving in a counterclockwise direction.

Type: integer

Number of clockwise crossings.

Value must be greater or equal to 0

Type: integer

Number of counterclockwise crossings.

Value must be greater or equal to 0

Type: integer

The time in milliseconds since the epoch (GMT) when the event started, or when a link was established.

Same definition as startedAt

Type: array of object
No Additional Items

Each item of this array must be:

Type: object

Type: string

Media Event type: Ex: image,video

Type: string

Message content

Type: integer

Start of Event Timestamp

Value must be greater or equal to 0

Type: integer

End of Event Timestamp

Value must be greater or equal to 0

Type: string

Message format. Ex: json, jpeg, mp4, ts...

\ No newline at end of file + Sighthound Analytics

Sighthound Analytics

Type: object

Analytics data sent by the Sighthound video/image analysis pipeline. This data is sent based on configuration when the number of detected objects or attributes of detected objects changes, the confidence of detected objects or their attributes improves, or a configurable timeout occurs.

No Additional Properties

Type: object

Type: integer

Timestamp at which the frame corresponding to this analytics data was processed, in milliseconds since the epoch (GMT).

Value must be greater or equal to 0

Type: string

A globally unique ID representing the media source, for instance a specific video stream from a camera sensor or RTSP feed, or the input source location for images or videos.

Type: string

An ID corresponding to this frame, which may be used to
access the image corresponding to all box coordinates and object
detections represented in this object, via the Media Service API.

Type: object

The dimensions (width and height) of the frame represented by frameId. Also used as the coordinate base for all bounding box coordinates.

Type: number

Width in pixels

Value must be greater or equal to 0

Type: number

Height in pixels

Value must be greater or equal to 0

Type: integer

Timestamp of the frame corresponding to this analytics data, according to the source, in milliseconds since the epoch (GMT).

Value must be greater or equal to 0
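The frame-level fields above (processed timestamp, source ID, frame ID, frame dimensions, source timestamp) can be sketched as a sample message. The JSON property names used here (`frameTimestamp`, `sourceId`, `frameId`, `frameDimensions`, `sourceTimestamp`, `w`, `h`) are illustrative assumptions inferred from the field descriptions, not confirmed by this schema extract:

```python
# Hypothetical frame-level envelope for a Sighthound Analytics message.
# All key names below are assumptions for illustration; consult the full
# JSON schema for the authoritative property names.
import json

sample_envelope = {
    "frameTimestamp": 1700000000123,   # processing time, ms since epoch (GMT)
    "sourceId": "camera-entrance-01",  # globally unique media source ID
    "frameId": "f-000123",             # used to fetch the image via the Media Service API
    "frameDimensions": {"w": 1920, "h": 1080},  # coordinate base for all boxes
    "sourceTimestamp": 1700000000100,  # frame time according to the source
}

# Per the schema, timestamps and dimensions must be >= 0.
assert sample_envelope["frameTimestamp"] >= 0
assert sample_envelope["sourceTimestamp"] >= 0
print(json.dumps(sample_envelope, indent=2))
```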

Type: string

Type: object

Meta classes include objects such as vehicles, license plates, and people. These are high-level classifications.

All properties whose name matches the following regular expression must respect the following conditions

Property name regular expression: ^.*$
Type: object

A plural MetaClass name. Supported MetaClasses
include:
vehicles - Objects including cars, buses, trucks, and motorbikes.
Vehicles are objects which may potentially include license
plates, and may include links to licensePlates.
licensePlates - Objects which are detected/classified as license plates.
people - Pedestrians, or people riding skateboards, electric
scooters, wheelchairs, etc.

All properties whose name matches the following regular expression must respect the following conditions

Property name regular expression: ^.*$
Type: object

A unique ID representing this object, used to map
additional object properties. This ID is guaranteed unique
for each object, regardless of streamId. It will change if the object drops out of
detection/tracking.

Type: integer

The analyticsTimestamp with the highest confidence score for this object.

Value must be greater or equal to 0

Type: string

Object-specific class returned by the model. For objects of the vehicles metaclass this may include car, truck, bus, motorbike, etc., based on model capabilities.

Type: object

All properties whose name matches the following regular expression must respect the following conditions

Property name regular expression: ^.*$
Type: object

A map of attributes for this object. Not all attributes are supported for all object types. Example attributes include:
color - The color of an object.
lpString - A string representing license plate text
and numbers.
lpRegion - A string representing the license plate region.
vehicleType - Make, model, and generation of the vehicle in a single string.

No Additional Properties

Type: number

Confidence score for attribute detection, ranging from 0.0 to 1.0. A score of 1.0 indicates 100% confidence.

Value must be greater or equal to 0 and lesser or equal to 1

Type: number

Confidence score for object detection, ranging from 0.0 to 1.0. A score of 1.0 indicates 100% confidence. When included in an attribute, this score represents the
object detection score for the parent object corresponding to the
timestamp when the attribute value was determined.

Value must be greater or equal to 0 and lesser or equal to 1

Type: boolean

Flag to indicate if the attribute is updated. True means updated, False means not updated.


A value of the attribute. The value is specific to the attribute type.

Type: object

Information about the detected vehicle, including its make, model, and generation.

Type: string

The manufacturer of the detected vehicle, e.g., 'Toyota'.

Type: string

The specific model of the detected vehicle, e.g., 'Camry'.

Type: string

The generation or variant of the detected vehicle, e.g., '2020'.

Type: string

The category to which the detected vehicle belongs, e.g., 'Sedan'.

Additional Properties of any type are allowed.

Type: object

Type: object

Debug information, subject to change
between releases. Do not use this object in an
application.

All properties whose name matches the following regular expression must respect the following conditions

Property name regular expression: ^.*$
Type: string

Type: string

An object hash which uniquely identifies this object and its associated attributes. It will change when attributes change. Reserved for future use.

Type: object

The bounding box containing this object, in
pixel coordinates, where the top left corner of the
image is pixel (0,0). Coordinates correspond to the image referenced by imageRef.

No Additional Properties

Type: integer

Height of the bounding box in pixels

Value must be greater or equal to 0

Type: integer

Width of the bounding box in pixels

Value must be greater or equal to 0

Type: integer

X coordinate of the top left corner
of the bounding box.

Value must be greater or equal to 0

Type: integer

Y coordinate of the top left corner of
the bounding box

Value must be greater or equal to 0
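The four box fields above (top-left x/y plus width/height, all non-negative pixels) convert to corner coordinates with simple arithmetic. The key names `x`, `y`, `width`, `height` follow the field descriptions but are assumptions as JSON property names:

```python
# Hedged helper for the bounding box described above: x/y give the top-left
# corner in pixels with the image origin (0,0) at top left.
# Key names are assumptions; verify them against the full schema.
def box_corners(box: dict) -> tuple[int, int, int, int]:
    """Return (left, top, right, bottom) pixel coordinates for a box."""
    left, top = box["x"], box["y"]
    return left, top, left + box["width"], top + box["height"]

b = {"x": 100, "y": 40, "width": 250, "height": 120}
assert all(v >= 0 for v in b.values())  # schema: every field must be >= 0
print(box_corners(b))  # -> (100, 40, 350, 160)
```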

Type: number

Confidence score for object detection, ranging from 0.0 to 1.0. A score of 1.0 indicates 100% confidence. When included in an attribute, this score represents the
object detection score for the parent object corresponding to the
timestamp when the attribute value was determined.

Same definition as detectionScore

Type: boolean

Flag to indicate if the attribute is updated. True means updated, False means not updated.

Same definition as updated

Type: integer

The analyticsTimestamp with the highest confidence score for this object.

Value must be greater or equal to 0

Type: object

A map of maps describing event types.
- The top-level map key is a name describing the event type. Supported types are presenceSensor, lineCrossingEvent, and speedEvent.
- The sub-level map key is a unique ID representing the event, used to map
additional object properties. This ID is guaranteed unique
for each event for a given stream ID.

All properties whose name matches the following regular expression must respect the following conditions

Property name regular expression: ^.*$
Type: object

A name describing an event type.

All properties whose name matches the following regular expression must respect the following conditions

Property name regular expression: ^.*$
Type: array

A Unique ID representing this event

No Additional Items

Each item of this array must be:


Type: object

Describes an event where one or more objects are present in a region of interest.
The event starts when the first object enters a region of interest. Updates are sent for each change in status, with updateCount incremented for each update. When the last object exits and the region is empty, the sensor event becomes immutable and tracks the total amount of time at least one object was present in the region of interest. A subsequent object entry will start a new event and reset the updateCount to 1. Region definitions, object filtering, and other items related to sensor definitions are tracked as part of the sensorId associated with the event.

No Additional Properties

Type: string

The globally unique event ID corresponding to this event.

Type: integer

The total number of objects of a specific type detected within a region of interest, excluding those filtered out based on sensor configuration.

Value must be greater or equal to 0

Type: object

The total number of detected objects in a region grouped by metaclasses.

All properties whose name matches the following regular expression must respect the following conditions

Property name regular expression: ^.*$
Type: integer

The total number of objects detected within a region of interest grouped by metaclass. Metaclasses represent higher-level categories that objects may belong to, such as 'vehicles' or 'people', while classes represent more specific types, such as 'car' or 'person'.

Value must be greater or equal to 0

Type: object

The total number of detected objects in a region grouped by classes.

All properties whose name matches the following regular expression must respect the following conditions

Property name regular expression: ^.*$
Type: integer

The total number of objects detected within a region of interest grouped by class. For example, if the sensor is configured to detect vehicles, this property may include counts of 'car,' 'bus,' and 'truck'.

Value must be greater or equal to 0
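The relationship between the per-class and per-metaclass counts above can be illustrated by deriving one from the other. The class-to-metaclass mapping used here is an assumption for the sketch, not something this schema defines:

```python
# Illustrative aggregation: derive metaclass totals from per-class counts.
# The CLASS_TO_META mapping is an assumption for this sketch.
CLASS_TO_META = {"car": "vehicles", "bus": "vehicles", "truck": "vehicles",
                 "motorbike": "vehicles", "person": "people"}

def metaclass_counts(class_counts: dict) -> dict:
    """Sum per-class counts into their higher-level metaclass buckets."""
    out: dict = {}
    for cls, n in class_counts.items():
        meta = CLASS_TO_META.get(cls, "unknown")
        out[meta] = out.get(meta, 0) + n
    return out

print(metaclass_counts({"car": 3, "truck": 1, "person": 2}))
# -> {'vehicles': 4, 'people': 2}
```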

Type: integer

The time in milliseconds since the epoch (GMT) when the event started, or when a link was established.

Value must be greater or equal to 0

Type: integer

The cumulative number of updates sent for this sensor, starting with 1 for the initial update and incremented once for each update sent for each unique sensor event ID. An update refers to a change in the state of the sensor due to a corresponding sensor event (entry, exit, crossing, ...). For sensors which include multiple updates per sensor event (presence sensors), the updateCount resets to 1 to indicate the first update of a new event. For sensors which include only one update per event (count sensors), updateCount is cumulative and counts the total number of events per sensor.

Value must be greater or equal to 0

Type: integer

The time in milliseconds since the epoch (GMT) when the event ended.

Value must be greater or equal to 0

Type: object

Describes an event where one object crosses a line.

No Additional Properties

Type: string

The globally unique event ID corresponding to this event.

Same definition as eventId

Type: string

The direction of an object's trajectory relative to the sensor's line, with the first point (A) as the pivot point. 'Clockwise' means the object is moving in a clockwise direction relative to the line, while 'counterclockwise' means the object is moving in a counterclockwise direction.

Type: integer

Number of clockwise crossings.

Value must be greater or equal to 0

Type: integer

Number of counterclockwise crossings.

Value must be greater or equal to 0

Type: integer

The time in milliseconds since the epoch (GMT) when the event started, or when a link was established.

Same definition as startedAt
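The clockwise and counterclockwise crossing counts above support simple net-flow accounting. The key names (`clockwiseCount`, `counterclockwiseCount`) and the mapping of clockwise to "in" are illustrative assumptions, not defined by this schema:

```python
# Sketch of net-flow accounting built on the line-crossing counts above.
# Key names and the clockwise-means-"in" convention are assumptions.
def net_flow(event: dict) -> int:
    """Positive when more objects crossed clockwise ("in") than out."""
    return event.get("clockwiseCount", 0) - event.get("counterclockwiseCount", 0)

crossing_evt = {"clockwiseCount": 12, "counterclockwiseCount": 9,
                "startedAt": 1700000000000}
print(net_flow(crossing_evt))  # -> 3
```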

Type: array of object
No Additional Items

Each item of this array must be:

Type: object

Type: string

Media event type. Ex: image, video

Type: string

Message content

Type: integer

Start of Event Timestamp

Value must be greater or equal to 0

Type: integer

End of Event Timestamp

Value must be greater or equal to 0

Type: string

Message format. Ex: json, jpeg, mp4, ts...
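Each media event item above carries a type (image/video), message content, start/end timestamps, and a format (json, jpeg, mp4, ts, ...). A minimal consumer might summarize items like this; the key names (`type`, `format`, `startTimestamp`, `endTimestamp`) are illustrative assumptions:

```python
# Hedged summarizer for the media event items described above.
# Key names are assumptions; check the full schema for authoritative names.
def describe_media_event(item: dict) -> str:
    """Render a one-line summary of a media event item."""
    span = item["endTimestamp"] - item["startTimestamp"]
    return f"{item['type']} ({item['format']}), {span} ms"

item = {"type": "video", "format": "mp4",
        "startTimestamp": 1700000000000, "endTimestamp": 1700000005000}
print(describe_media_event(item))  # -> video (mp4), 5000 ms
```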
