
Commit

v1.5.6
actions-user committed Mar 22, 2024
1 parent b7617bd commit 4ccd8e5
Showing 19 changed files with 183 additions and 64 deletions.
18 changes: 15 additions & 3 deletions README.md
@@ -250,18 +250,30 @@ For more advanced options visit [VehicleAnalytics Documentation](https://dev.sig
## Changing Docker env variables
If you need to modify the `.env` file of a service, you can either `./scripts/sh-services edit all` or create a new `.env file` like this
The `.env` [file](https://docs.docker.com/compose/environment-variables/set-environment-variables/#substitute-with-an-env-file) is generated by the sh-services script at runtime. To modify the environment via CLI, you can either run `./scripts/sh-services edit <service>`, or add a user-specific file with a `.env` extension under `<service>/conf/`, for example `<service>/conf/user.env` or `<service>/conf/0009-debug.env`:
```bash
echo "SIO_DOCKER_TAG=r221202" > sio/conf/0009-debug.env
echo "MY_VARIABLE=24" > sio/conf/user.env
echo "SIO_DOCKER_TAG=r240318" > sio/conf/0009-debug.env
```
and then update the services:
And then update the services (create the .env file for docker-compose) by running:
```bash
./scripts/sh-services merge all
```
### Modifying the SIO release version
If you need to change the SIO release version, execute `./scripts/sh-services edit sio`, select `Edit service (.env)`, set the `SIO_DOCKER_TAG` variable to the value you need, and save the file.
This creates a `sio/conf/0001-edit.env` file containing your edits while keeping `sio/conf/default.env` intact.
The merged contents of `default.env` and `0001-edit.env` are then written to `sio/.env`, with `0001-edit.env` ranking higher than the default file. The ranking follows UNIX sort order of the file names: the character `0` in `0001-edit.env` sorts before the `d` in `default`.
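For example, the same override can be produced non-interactively; a minimal sketch that assumes `sio/conf/default.env` already defines `SIO_DOCKER_TAG`:
```bash
# Write the override to a file that sorts ahead of default.env
echo "SIO_DOCKER_TAG=r240318" > sio/conf/0001-edit.env

# Regenerate the merged .env files consumed by docker-compose
./scripts/sh-services merge all

# Inspect the merged result
grep SIO_DOCKER_TAG sio/.env
```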
## Deployment
5 changes: 5 additions & 0 deletions RELEASE.md
@@ -1,5 +1,10 @@
# Release Notes

## v1.5.6
- Update SIO to r240318
- Update .env editing for better understanding (add banners and more)
- Remove SIO images when disk is full

## v1.5.5
- Initial version of on-demand analytics sample
- Sighthound REST API Gateway - Docker Compose Updates
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
v1.5.5
v1.5.6
2 changes: 1 addition & 1 deletion configurations/camera.conf
@@ -7,5 +7,5 @@ select_example sio camera
up live555
up rabbitmq
up mcp
test_rtsp_stream rtsp://sh-camera-rtsp:8555/live 5
test_rtsp_stream rtsp://localhost:8555/live 5
restart sio
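The `test_rtsp_stream` checks above now target `localhost` instead of the container host names. As a rough manual equivalent (assuming `ffprobe` is available on the host; the exact behaviour of `test_rtsp_stream` is not shown in this diff), the same endpoint can be probed with:
```bash
# Probe the RTSP stream used by camera.conf and print its stream info.
ffprobe -v error -rtsp_transport tcp -i rtsp://localhost:8555/live -show_streams
```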
2 changes: 1 addition & 1 deletion configurations/countSensors.conf
@@ -8,5 +8,5 @@ select_example sio count-sensor-nomedia
up live555
up rabbitmq
up mcp
test_rtsp_stream rtsp://live555/StreetVideo1.mkv 5
test_rtsp_stream rtsp://localhost/StreetVideo1.mkv 5
restart sio
2 changes: 1 addition & 1 deletion configurations/fakeRTSP-nomedia.conf
@@ -8,5 +8,5 @@ select_example sio live555-nomedia
up live555
up rabbitmq
up mcp
test_rtsp_stream rtsp://live555/StreetVideo1.mkv 5
test_rtsp_stream rtsp://localhost/StreetVideo1.mkv 5
restart sio
2 changes: 1 addition & 1 deletion configurations/fakeRTSP.conf
@@ -7,5 +7,5 @@ select_example sio live555
up live555
up rabbitmq
up mcp
test_rtsp_stream rtsp://live555/StreetVideo1.mkv 5
test_rtsp_stream rtsp://localhost/StreetVideo1.mkv 5
restart sio
2 changes: 1 addition & 1 deletion configurations/selectFileRTSP.conf
@@ -8,5 +8,5 @@ select_example sio file-rtsp
up live555
up rabbitmq
up mcp
test_rtsp_stream rtsp://live555/data/my-video.mkv 5
test_rtsp_stream rtsp://localhost/data/my-video.mkv 5
restart sio
8 changes: 4 additions & 4 deletions deployment-examples/SIOOnDemandAnalytics/docker-compose.yml
@@ -1,4 +1,4 @@
version: "3"
version: "2.3"
services:

# By default pipelines.json will point to streams served by this container.
@@ -13,11 +13,11 @@ services:

# The actual analytics container
analytics:
image: us-central1-docker.pkg.dev/ext-edge-analytics/docker/sio:${SIO_RELEASE-r240117}${SIO_DOCKER_TAG_VARIANT-}
image: us-central1-docker.pkg.dev/ext-edge-analytics/docker/sio:${SIO_RELEASE-r240318}${SIO_DOCKER_TAG_VARIANT-}
restart: unless-stopped
environment:
# Location where SIO will place generated model engine files
- SIO_DATA_DIR=/data/sio-cache
- SIO_DATA_DIR=/data/.sio
- PYTHONUNBUFFERED=1
# Container runtime defaults to `runc` if SIO_DOCKER_RUNTIME not set. Use `nvidia` if GPU is installed.
runtime: ${SIO_DOCKER_RUNTIME-runc}
@@ -27,7 +27,7 @@ services:
- ./config:/config:ro
# Writable shared folder for data exchange with host
# We'll use it for storing the generated model files, data exchange folder, etc.
- ./data:/data
- ${HOME-./data}:/data
# Shared memory-backed folder for data exchange with other containers
- runvol:/tmp/runvol
entrypoint:
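The image tag, container runtime, and data mount above all use Docker Compose's `${VAR-default}` substitution, so the defaults can be overridden from the shell at startup; a sketch with illustrative values:
```bash
# Override the compose defaults shown above when starting the stack.
# SIO_RELEASE falls back to r240318 and SIO_DOCKER_RUNTIME to runc when unset;
# HOME selects the host directory mounted at /data (falls back to ./data).
SIO_RELEASE=r240318 SIO_DOCKER_RUNTIME=nvidia docker compose up -d
```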
@@ -2,11 +2,11 @@ version: "3"
services:

analytics:
image: us-central1-docker.pkg.dev/ext-edge-analytics/docker/sio:${SIO_RELEASE-r240117}${SIO_DOCKER_TAG_VARIANT-}
image: us-central1-docker.pkg.dev/ext-edge-analytics/docker/sio:${SIO_RELEASE-r240318}${SIO_DOCKER_TAG_VARIANT-}
restart: unless-stopped
environment:
# Location where SIO will place generated model engine files
- SIO_DATA_DIR=/data/sio-cache
- SIO_DATA_DIR=/data/.sio
# Container runtime defaults to `runc` if SIO_DOCKER_RUNTIME not set. Use `nvidia` if GPU is installed.
runtime: ${SIO_DOCKER_RUNTIME-runc}
volumes:
@@ -15,7 +15,7 @@ services:
- ./config:/config:ro
# Writable shared folder for data exchange with host
# We'll use it for storing the generated model files, data exchange folder, etc.
- ./data:/data
- ${HOME-./data}:/data
# Shared memory-backed folder for data exchange with other containers
- run_vol:/tmp/inputFiles
entrypoint:
@@ -40,7 +40,7 @@ services:
# Overrides default config
- ./config/gateway/service.json:/cloudvx/config/local.json:ro
# Writable shared folder for data exchange with host
- ./data:/data
- ${HOME-./data}:/data
# Shared memory-backed folder for data exchange with other containers
- run_vol:/tmp/inputFiles
depends_on:
@@ -13,11 +13,11 @@ services:


analytics:
image: us-central1-docker.pkg.dev/ext-edge-analytics/docker/sio:${SIO_RELEASE-r231204}${SIO_DOCKER_TAG_VARIANT}
image: us-central1-docker.pkg.dev/ext-edge-analytics/docker/sio:${SIO_RELEASE-r240318}${SIO_DOCKER_TAG_VARIANT}
restart: unless-stopped
environment:
# Location where SIO will place generated model engine files
- SIO_DATA_DIR=/data/sio-cache
- SIO_DATA_DIR=/data/.sio
# We need this to see output from Python extension module
- PYTHONUNBUFFERED=1
# Container runtime defaults to `runc` if SIO_DOCKER_RUNTIME not set. Use `nvidia` if GPU is installed.
@@ -28,7 +28,7 @@ services:
- ./config:/config:ro
# Writable shared folder for data exchange with host
# We'll use it for storing the generated model files, data exchange folder, etc.
- ./data:/data
- ${HOME-./data}:/data
entrypoint:
- /sighthound/sio/bin/runPipelineSet
# Pipeline configuration file
@@ -2,11 +2,11 @@ version: "2.3"
services:

analytics:
image: us-central1-docker.pkg.dev/ext-edge-analytics/docker/sio:${SIO_RELEASE-r231204}${SIO_DOCKER_TAG_VARIANT}
image: us-central1-docker.pkg.dev/ext-edge-analytics/docker/sio:${SIO_RELEASE-r240318}${SIO_DOCKER_TAG_VARIANT}
restart: unless-stopped
environment:
# Location where SIO will place generated model engine files
- SIO_DATA_DIR=/data/sio-cache
- SIO_DATA_DIR=/data/.sio
# We need this to see output from Python extension module
- PYTHONUNBUFFERED=1
# Container runtime defaults to `runc` if SIO_DOCKER_RUNTIME not set. Use `nvidia` if GPU is installed.
@@ -17,7 +17,7 @@ services:
- ./config:/config:ro
# Writable shared folder for data exchange with host
# We'll use it for storing the generated model files, data exchange folder, etc.
- ./data:/data
- ${HOME-./data}:/data
entrypoint:
- /sighthound/sio/bin/runPipelineSet
# Pipeline configuration file
6 changes: 3 additions & 3 deletions deployment-examples/VideoStreamsConsumer/docker-compose.yml
@@ -31,12 +31,12 @@ services:

# The SIO analytics container, consuming the streams and analyzing them
analytics_svc:
image: us-central1-docker.pkg.dev/ext-edge-analytics/docker/sio:${SIO_RELEASE-r231204}${SIO_DOCKER_TAG_VARIANT}
image: us-central1-docker.pkg.dev/ext-edge-analytics/docker/sio:${SIO_RELEASE-r240318}${SIO_DOCKER_TAG_VARIANT}
container_name: sample-sio
restart: unless-stopped
environment:
# Location where SIO will place generated model engine files
- SIO_DATA_DIR=/data/sio-cache
- SIO_DATA_DIR=/data/.sio
# Container runtime defaults to `runc` if SIO_DOCKER_RUNTIME not set. Use `nvidia` if GPU is installed.
runtime: ${SIO_DOCKER_RUNTIME-runc}
volumes:
@@ -45,7 +45,7 @@ services:
- ./config:/config:ro
# Writable shared folder for data exchange with host / other containers.
# We'll use it for storing the generated model files, data exchange folder, etc.
- ./data:/data
- ${HOME-./data}:/data
entrypoint:
- /sighthound/sio/bin/runPipelineSet
# Pipeline configuration file
12 changes: 6 additions & 6 deletions deployment-examples/VideoStreamsRecorder/docker-compose.yml
@@ -38,11 +38,11 @@ services:
mem_reservation: 512M
volumes:
# Location of recorded media; should match that specified for SIO's pipeline configuration
- ./data/media:/data/sighthound/media:rw
- ${HOME-./data}/media:/data/sighthound/media:rw
# Location for MCP logs
- ./data/logs/mcp:/data/sighthound/logs/mcp:rw
- ${HOME-./data}/logs/mcp:/data/sighthound/logs/mcp:rw
# Location of MCP database
- ./data/mcp/db:/data/sighthound/db:rw
- ${HOME-./data}/mcp/db:/data/sighthound/db:rw
# MCP configuration
- ./config/mcp/mcp.yml:/etc/mcpd/default.json:ro
ports:
@@ -52,12 +52,12 @@

# The SIO analytics container, consuming the streams and analyzing them
analytics_svc:
image: us-central1-docker.pkg.dev/ext-edge-analytics/docker/sio:${SIO_RELEASE-r231204}${SIO_DOCKER_TAG_VARIANT}
image: us-central1-docker.pkg.dev/ext-edge-analytics/docker/sio:${SIO_RELEASE-r240318}${SIO_DOCKER_TAG_VARIANT}
container_name: sample-sio
restart: unless-stopped
environment:
# Location where SIO will place generated model engine files
- SIO_DATA_DIR=/data/sio-cache
- SIO_DATA_DIR=/data/.sio
# Container runtime defaults to `runc` if SIO_DOCKER_RUNTIME not set. Use `nvidia` if GPU is installed.
runtime: ${SIO_DOCKER_RUNTIME-runc}
volumes:
@@ -66,7 +66,7 @@ services:
- ./config:/config:ro
# Writable shared folder for data exchange with host / other containers.
# We'll use it for storing the generated model files, data exchange folder, etc.
- ./data:/data
- ${HOME-./data}:/data
entrypoint:
- /sighthound/sio/bin/runPipelineSet
# Pipeline configuration file
2 changes: 1 addition & 1 deletion docs/schemas/anypipe/anypipe.html

Large diffs are not rendered by default.

23 changes: 17 additions & 6 deletions examples/MCPEvents/MCPEvents.py
@@ -10,6 +10,7 @@
import datetime
from pathlib import Path
import m3u8
from cachetools import TTLCache

class MCPEvents:
def get_args(self, args):
@@ -42,6 +43,8 @@ def __init__(self, args):
self.current_event_seg = {}
# A dict of lists of completed event segments by sourceId, waiting to be written to disk when video is available
self.completed_event_seg = {}
# A dict of TTLCache objects representing media events, which expire automatically when not used.
self.video_cache = {}
# Group events into a single segment when separated
# by less than this number of milliseconds. Only valid when use_events is not specified.
self.group_events_separation_ms = 5*1000
@@ -103,6 +106,12 @@ def event_segment_complete(self, source, event_segment):
video_name = filepath_ts.relative_to(filepath_ts.parent.parent)
print(f"Downloading {video_name}")
self.mcp_client.download_video(source, video_name, filepath_ts)
if source in self.video_cache and str(video_name) in self.video_cache[source]:
event_segment.videos.append(self.video_cache[source][str(video_name)])
else:
print(f"Could not find {video_name} in video cache for {source}")


vidfile = dirpath / Path(f"{filename_base}.m3u8")
print(f"Writing {vidfile}")
with open(vidfile, "w") as file:
@@ -122,14 +131,16 @@ def handle_media_event_callback(self, media_event, sourceId):
# If the media event is a video_file_closed event, add it to the current event segment
# for the source ID, or to the completed event segments if it's already completed
if type == "video_file_closed":
if sourceId in self.current_event_seg:
event_seg = self.current_event_seg[sourceId]
event_seg.videos.append(media_event)
if not sourceId in self.video_cache:
self.video_cache[sourceId] = TTLCache(maxsize=100, ttl=60*2)
self.video_cache[sourceId][msg] = media_event
completed_event_segments = self.completed_event_seg.get(sourceId, [])
for event_seg in completed_event_segments:
event_seg.videos.append(media_event)
self.event_segment_complete(sourceId, event_seg)
self.completed_event_seg[sourceId].remove(event_seg)
try:
self.event_segment_complete(sourceId, event_seg)
finally:
# Always complete the event segment, even if we couldn't download vids
self.completed_event_seg[sourceId].remove(event_seg)



32 changes: 27 additions & 5 deletions examples/lib/MCP.py
@@ -2,6 +2,7 @@
from PIL import Image
import numpy as np
from io import BytesIO
from requests.adapters import HTTPAdapter, Retry

class MCPClient:
def __init__(self, conf):
@@ -14,12 +15,33 @@ def __init__(self, conf):
else:
print(f"Connecting to mcp://{self.host}:{self.port}")

def get(self, url):
# See https://www.peterbe.com/plog/best-practice-with-retries-with-requests
def requests_retry_session(
retries=2,
backoff_factor=0.3,
status_forcelist=(500, 502, 504),
session=None,):

session = session or requests.Session()
retry = Retry(
total=retries,
read=retries,
connect=retries,
backoff_factor=backoff_factor,
status_forcelist=status_forcelist,
)
adapter = HTTPAdapter(max_retries=retry)
session.mount('http://', adapter)
session.mount('https://', adapter)
return session

def get(self, url, timeout=5):
if self.user and self.password:
auth = (self.user, self.password)
else:
auth = None
response = requests.get(url, auth=auth)
# Retry periodic timeouts, abort any response over 5 seconds
response = self.requests_retry_session().get(url, auth=auth, timeout=timeout)

if response.status_code == 401:
raise Exception("Unauthorized")
@@ -55,7 +77,7 @@ def get_image(self, source_id, image):
# curl mcp:9097/hlsfs/source/<source_id>/segment/<video>
def download_video(self, source_id, video, filepath):
url = f"http://{self.host}:{self.port}/hlsfs/source/{source_id}/segment/{video}"
response = self.get(url)
response = self.get(url, timeout=5*60)

if response.status_code != 200:
if response.status_code == 404:
Expand All @@ -70,7 +92,7 @@ def download_video(self, source_id, video, filepath):
# curl mcp:9097/hlsfs/source/<source_id>/segment/<segment>
def get_segment(self, source_id, segment):
url = f"http://{self.host}:{self.port}/hlsfs/source/{source_id}/segment/{segment}"
response = self.get(url)
response = self.get(url, timeout=5*60)

if response.status_code != 200:
if response.status_code == 404:
@@ -149,7 +171,7 @@ def get_m3u8(self, source_id, start, end):
raise Exception("Error downloading HLS:", url, ":", response.status_code)
else:
return response.text

# curl mcp:9097/hlsfs/source/<source_id>/<start>..<end>.m3u8
def get_m3u8_playlist(self, source_id, start, end):
import m3u8
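For reference, the segment download wrapped by `download_video()` can be exercised directly with `curl`; in this sketch the host, port, path, and credentials are placeholders taken from the comments above, and the retry/timeout values mirror the Python defaults:
```bash
# Rough curl equivalent of MCPClient.download_video(): fetch one recorded
# segment, retrying transient failures and capping total time at 5 minutes.
curl --retry 2 --max-time 300 -u "user:password" \
  -o segment.ts "http://mcp:9097/hlsfs/source/<source_id>/segment/<video>"
```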