How we implemented security for our Smart House
With the requirements outlined in my previous article, we set out to implement the first set of them for our next-gen smart house project.
Before diving into the implementation details, I should mention that we used an incremental approach to prioritize our requirements and manage the risk of going over budget. Although we planned for the full set of requirements from the outset, the implementation was carefully structured to allow partial, incremental delivery. This meant the project could be defunded at any stage, should other investments or plans take priority, without losing all the value already delivered.
The highest priority was given to the security requirements, and the following is how we implemented them.
Continuous Surveillance with AI & Amazon Rekognition:
Geofencing with AWS Location Services:
Motorized Doors/Windows with AWS IoT Core:
Intruder Detection with Guard Dogs:
Disclaimer: This solution component might raise legal and ethical concerns in your area. In some jurisdictions, owning and using guard dogs for security purposes is highly regulated and requires proper training and handling. Consult local regulations and professional security providers before implementing this.
We used the level 3 alarm triggered by Amazon Rekognition, combined with explicit owner authorization through the mobile application, to initiate our most robust threat-response procedure: unleashing the guard dogs and playing the audio cue with which they are trained to engage intruders and ward them off the property.
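The gating described above, a level 3 alarm plus an explicit owner approval, can be sketched as a small decision function. This is an illustrative sketch: `AlarmEvent`, the action names, and the fallback behavior for lower levels are my own placeholders, not the actual implementation.

```python
from dataclasses import dataclass

@dataclass
class AlarmEvent:
    level: int              # anomaly level from the Rekognition analysis (1-3)
    owner_authorized: bool  # explicit approval given through the mobile app

def threat_response(event: AlarmEvent) -> list:
    """Map an alarm event to response actions. Only a level 3 alarm
    combined with explicit owner authorization triggers the full
    procedure, including the guard-dog response."""
    if event.level == 3 and event.owner_authorized:
        return ["notify_owner", "sound_alarm", "release_dogs", "play_engage_audio"]
    if event.level >= 2:
        return ["notify_owner", "sound_alarm"]
    return ["notify_owner"]
```

Keeping the human-in-the-loop check in one place makes it easy to audit that the most drastic response can never fire automatically.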
Going back to how we process the camera feeds and send them to the cloud, we generally had two options:
Many camera devices come with their own proprietary solutions for processing and uploading their feeds. But I wanted a solution that could work with any camera, handle security centrally, and avoid long-lived keys.
Therefore, we decided to connect all camera feeds to a central hub acting as a gateway, from which video streams are forwarded to Kinesis Video Streams (KVS) using the KVS Producer library.
The central hub itself is connected to AWS IoT Core for access management and receives temporary credentials that allow access to the KVS PutMedia action.
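The permissions attached to those temporary credentials can be kept to the minimum the KVS producer actually needs. The sketch below builds such a least-privilege policy document; the stream ARN is a placeholder, and this is an illustration of the idea rather than our exact policy.

```python
import json

def kvs_producer_policy(stream_arn: str) -> str:
    """Least-privilege IAM policy for the central hub's temporary
    credentials: only the actions the KVS producer needs to push media
    to one specific stream."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "kinesisvideo:DescribeStream",
                "kinesisvideo:GetDataEndpoint",
                "kinesisvideo:PutMedia",
            ],
            "Resource": stream_arn,
        }],
    }
    return json.dumps(policy)
```

Scoping the `Resource` to a single stream ARN means a compromised hub cannot write to, or even enumerate, any other stream in the account.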
This solution worked perfectly! But we also recognized that the central hub constitutes a single point of failure, which is a significant risk to the security system.
To mitigate that risk, we let users assign priority labels to the security cameras. If a failure of the on-premises central hub is detected, an EC2 instance automatically starts in the cloud as a failover substitute. This instance sits in a private subnet, with a Security Group attached that blocks all outside access. Camera feeds are forwarded to it through a Network Load Balancer that accepts connections only from our predefined IP address and port, relaying them to the back-end EC2 instance, which is configured with the same capabilities as the on-premises hub. All other connections to the instance are blocked, including shell connections; when needed, shell access is possible through AWS Systems Manager Session Manager.
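The failure-detection step can be as simple as a heartbeat timeout. The sketch below shows the idea; the endpoint names and the 30-second timeout are illustrative assumptions, not values from our deployment.

```python
import time
from typing import Optional

class FailoverMonitor:
    """Tracks heartbeats from the on-premises central hub. If no
    heartbeat arrives within the timeout, camera feeds should be
    redirected to the cloud failover endpoint."""

    def __init__(self, timeout_s: float = 30.0):
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()

    def heartbeat(self) -> None:
        """Called whenever the on-premises hub checks in."""
        self.last_heartbeat = time.monotonic()

    def active_target(self, now: Optional[float] = None) -> str:
        """Return the endpoint the cameras should currently stream to."""
        now = time.monotonic() if now is None else now
        if now - self.last_heartbeat > self.timeout_s:
            return "cloud-failover-nlb"      # placeholder NLB endpoint name
        return "on-prem-central-hub"
```

In practice the heartbeat could ride over the existing MQTT connection to AWS IoT Core, so no extra channel is needed just for liveness.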
The difference here is that the raw feeds from the cameras may be larger than what is normally sent to the KVS endpoint, so we need to implement QoS (Quality of Service) at the network level to prioritize the feeds from cameras with higher priority labels. This keeps the security system functional if the on-premises central hub fails.
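The prioritization can be modeled as a greedy admission decision: feeds are admitted in descending priority-label order until the failover path's bandwidth budget is spent. The record fields and numbers below are illustrative, and real QoS would additionally be enforced at the network layer (e.g., traffic shaping), not only by admission.

```python
def select_feeds(cameras: list, bandwidth_kbps: int) -> list:
    """Greedy QoS admission: admit camera feeds from highest to lowest
    priority label until the available bandwidth is exhausted.
    Each camera is a dict with 'id', 'priority', and 'bitrate_kbps'."""
    admitted = []
    remaining = bandwidth_kbps
    for cam in sorted(cameras, key=lambda c: c["priority"], reverse=True):
        if cam["bitrate_kbps"] <= remaining:
            admitted.append(cam["id"])
            remaining -= cam["bitrate_kbps"]
    return admitted
```

A refinement would be to degrade low-priority feeds (lower resolution or frame rate) rather than drop them outright, which ties into the trade-offs discussed next.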
Cost vs. Security Trade-offs
We initially considered using an EC2 instance as the primary central hub, which would have simplified the setup compared to maintaining an on-premises client requiring OS hardening and regular updates. However, sending raw video feeds to the cloud from the EC2 instance would have been cost-prohibitive due to bandwidth consumption.
To achieve a balance between cost and functionality, we decided to use the EC2 instance as a failover for the on-premises client. This client can reduce video resolution and frame rates based on real-time needs, significantly reducing bandwidth usage when compared to sending raw feeds. Additionally, this approach allows for future upgrades with edge processing capabilities on the client itself, further minimizing latency and internet bandwidth consumption.
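The client's ability to reduce resolution and frame rate based on real-time needs amounts to picking the richest encoding profile that fits the measured uplink. The profiles and bitrates below are illustrative assumptions, not our actual encoder settings.

```python
# Encoding profiles from cheapest to richest; bitrates are illustrative.
PROFILES = [
    ("480p@10fps", 700),     # approximate cost in kbps
    ("720p@15fps", 1800),
    ("1080p@30fps", 4500),
]

def choose_profile(available_kbps: int) -> str:
    """Pick the richest profile that fits the available uplink bandwidth.
    Falls back to the cheapest profile when even that does not fit,
    so the feed degrades rather than drops."""
    best = PROFILES[0][0]
    for name, cost in PROFILES:
        if cost <= available_kbps:
            best = name
    return best
```

Running this on the hub, close to the cameras, is what keeps the cloud bandwidth bill bounded compared to shipping raw feeds.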
Data Privacy
Video data from KVS is fed directly to Amazon Rekognition for processing. Data in transit, from the cameras to the central hub and from the central hub to the KVS service, is encrypted.
When an anomaly is detected, a copy of the video feed is stored on S3 for future forensic review. The retention period for each anomaly level (1, 2, and 3) is configurable by the user; nonetheless, we also decided to enforce an S3 lifecycle rule that automatically deletes objects older than 3 months. The user can also define how much footage before and after an anomaly should be recorded to S3.
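The per-level retention with a hard 90-day cap maps naturally onto S3 lifecycle rules. The sketch below builds such a configuration; it assumes clips are stored under `level-1/`, `level-2/`, and `level-3/` key prefixes, which is my own naming convention for illustration.

```python
def forensic_lifecycle_rules(retention_days_by_level: dict) -> dict:
    """Build an S3 lifecycle configuration that expires forensic clips
    per anomaly level, capping every user-chosen retention at 90 days.
    Assumes clips are keyed under 'level-<n>/' prefixes."""
    rules = []
    for level, days in retention_days_by_level.items():
        rules.append({
            "ID": f"expire-level-{level}",
            "Filter": {"Prefix": f"level-{level}/"},
            "Status": "Enabled",
            "Expiration": {"Days": min(days, 90)},  # enforce the 3-month cap
        })
    return {"Rules": rules}
```

The resulting dict has the shape accepted by the S3 `PutBucketLifecycleConfiguration` API, so it could be passed to a boto3 client as the `LifecycleConfiguration` argument.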
All the data stored on S3 is encrypted at rest using AWS KMS keys.
Scalability
While this solution is specifically designed for a single house, it can be easily expanded to cover much larger properties, such as industrial complexes or multi-unit buildings. The system allows us to define separate zones for each unit or section of a complex. Cameras and other devices (smart gas and water regulators, electricity switches, etc.) can be tagged to fall under their own designated zones. Each zone can then have its own lock-down and welcome procedures defined, catering to its specific requirements.
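Zone-scoped procedures reduce to filtering the tagged device inventory and emitting per-device commands. The device records and command names below are illustrative placeholders for whatever the real registry and command set look like.

```python
def lockdown_actions(devices: list, zone: str) -> list:
    """Return (device_id, command) pairs for a zone's lock-down
    procedure. Each device is a dict with 'id', 'type', and 'zone';
    the command mapping is illustrative."""
    commands = {
        "camera": "record_continuous",
        "gas_regulator": "shut_off",
        "water_regulator": "shut_off",
        "electric_switch": "power_down",
    }
    return [(d["id"], commands[d["type"]])
            for d in devices if d["zone"] == zone]
```

Because the commands are derived from tags rather than hard-coded device lists, adding a new unit to a multi-unit building is just a matter of tagging its devices with the new zone.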
Importantly, even with zone-based configurations, core functionalities like video processing and anomaly detection remain centralized. This ensures consistent security measures and simplifies overall management. For very large deployments, data storage strategies might require adjustments, such as partitioning data based on zones.
Future-proofing
Our design choices were made with a focus on future-proofing the system. We leverage industry standards (e.g., MQTT) for IoT messaging, which avoids vendor lock-in and simplifies integration with future devices. Additionally, our central hub preprocesses video feeds before sending them to Kinesis Video Streams (KVS) using the KVS producer library. This approach allows for flexibility in future upgrades, as the central hub can be modified without impacting the core functionality of video processing and storage.
We've also implemented an asynchronous, decentralized, and event-based architecture. This means different parts of the system operate independently and communicate through events, which offers several advantages, such as loose coupling between components, independent scaling, and resilience to partial failures.
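To illustrate the decoupling, here is a minimal in-process publish/subscribe bus. In our actual system the events travel over MQTT through AWS IoT Core rather than in-process; this sketch only shows the pattern, with hypothetical topic names.

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe bus: publishers and subscribers know
    only topic names, never each other, which is the loose coupling
    the architecture relies on."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload) -> None:
        for handler in self._subscribers[topic]:
            handler(payload)
```

With this shape, the anomaly detector simply publishes to a topic like `anomaly.level3`, and the alarm, notification, and recording components each subscribe independently; any of them can be replaced without touching the others.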
Conclusion
That's how we implemented the security requirements for our smart house. We hope this explanation provided valuable insights into the design considerations and trade-offs involved. In the comments below, share what security features you prioritize most in a smart home.
Stay tuned for the next article, where we'll delve into how we implemented our safety requirements, including fire management, gas and water leak management for a truly safe and smart home environment.