S&I is a leading building management service provider in Korea. S&I has deployed the Scenera PaaS to enable real-time monitoring of events within S&I-managed locations using cameras located in the buildings. The solution is capable of detecting events such as intrusion, people falling, people fighting, loitering, and left-behind items, as well as analyzing X-ray images of bags for unauthorized devices. Real-time notifications of these events are sent to security personnel, who can then respond to them.
Visit Microsoft Customer Stories for more information on “Achieving Edge-to-Cloud Processing for DX with AI Capabilities in Facility Management”.
Processing of camera feeds is computationally intensive. Without edge processing on the premises, the bandwidth required to deliver many live video feeds to the cloud becomes expensive. Alternatively, on-premises edge computing can be applied to the live video feeds, but this is computationally intensive and requires a significant deployment of hardware to handle all the camera feeds.
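To make the bandwidth trade-off concrete, the following back-of-envelope sketch compares streaming encoded video to the cloud against sending only per-event metadata from an on-premises edge. All numbers here are illustrative assumptions, not figures from the deployment:

```python
# Illustrative comparison (all constants are assumptions, not measured
# values): full video uplink vs. event-only uplink for a camera fleet.

CAMERAS = 100                   # assumed number of cameras on site
VIDEO_KBPS = 2_000              # assumed H.264 bitrate per 1080p feed (kbit/s)
EVENTS_PER_MIN = 2              # assumed detections per camera per minute
EVENT_BYTES = 50_000            # assumed image crop + JSON metadata per event

# Uplink needed to ship every encoded feed to the cloud.
video_mbps = CAMERAS * VIDEO_KBPS / 1_000

# Uplink needed when only detection events leave the premises.
event_mbps = CAMERAS * EVENTS_PER_MIN * EVENT_BYTES * 8 / 60 / 1_000_000

print(f"full video uplink:  {video_mbps:.1f} Mbps")
print(f"event-only uplink: {event_mbps:.2f} Mbps")
print(f"reduction factor:  {video_mbps / event_mbps:.0f}x")
```

The exact reduction depends on camera count, bitrate, and event rate, but the structure of the calculation shows why shipping events instead of video changes the cost picture by orders of magnitude.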
The question arises whether the processing load can be distributed between the cloud and the on-premises edge, so that lightweight processing at the edge limits the traffic from the edge to the cloud. A lightweight algorithm is far less certain in its results than an intensive one. This can be compensated for by running an intensive algorithm in the cloud that filters out the less certain results generated by the lightweight algorithm.
Scenera, TNM, and Microsoft worked together to deploy an architecture that distributes the processing of people counting. This entailed running constrained, less accurate algorithms on the edge in the customer’s premises, which enables either lower-cost hardware to be used or a larger number of cameras to be handled by the edge processing hardware.
The results from the constrained algorithm contain a large number of errors. To overcome this, results with lower certainty are forwarded to a network edge cluster running a more robust, accurate algorithm, which confirms whether the constrained algorithm correctly detected a person.
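The cascaded routing described above can be sketched as follows. The class, function, and threshold names are illustrative only, not the actual EVS implementation, and the heavy model is stubbed out:

```python
# Sketch of the cascaded inference pattern: trust high-confidence edge
# results, escalate low-confidence ones to an accurate model. Names and
# the 0.8 threshold are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: str
    confidence: float   # score from the lightweight edge model
    is_person: bool

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff; tuned per deployment

def heavy_model_confirms(det: Detection) -> bool:
    """Placeholder for the accurate model on the network edge cluster.

    A real system would re-run inference on the forwarded image crop;
    this stub simply returns the ground-truth flag for illustration.
    """
    return det.is_person

def route(detections):
    """Accept high-confidence edge results directly; escalate the rest."""
    confirmed = []
    for det in detections:
        if det.confidence >= CONFIDENCE_THRESHOLD:
            confirmed.append(det)      # trust the lightweight edge result
        elif heavy_model_confirms(det):
            confirmed.append(det)      # heavy model confirmed the detection
    return confirmed
```

Because only the low-confidence fraction of detections reaches the heavy model, the expensive inference runs on a small share of the traffic, which is what lets the cascade cut heavy-model usage while keeping overall accuracy high.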
The benefit of this approach is that it reduces on-premises hardware costs and the bandwidth of video sent to the cloud, while maintaining the highest possible accuracy of the results.
We deployed and measured the solution in the S&I facility, applying it to four camera feeds counting people entering turnstiles. We assessed accuracy by comparing the results generated from the cameras against events we were able to verify by visual inspection. Based on these measurements, accuracy exceeded 95% on a relatively small sample of 100 images.
- Compared to a solution that shipped the encoded video to the cloud, the Edge Video System’s (EVS) split architecture transferred 22X less data to the cloud.
- EVS’s on-prem edge usage was minimal and CPU-only; no GPU was used on the on-prem edge. The cascaded pipeline also reduced the need for the heavy, GPU-based AI model by two-thirds.
- The overall latency of the solution was a few hundred milliseconds.
The distributed architecture that was implemented demonstrated a significant reduction in network bandwidth and edge hardware costs, while maintaining a high degree of accuracy with low latency.
With the initial testing and installations of the system at LGE’s buildings complete, S&I is ready to commercialize EVS in its building management application globally.
This article was published on Microsoft Customer Stories on November 1st, 2022.