How we implemented security for our Smart House

With the requirements outlined in my previous article, we set out to implement the first set of them for our next-gen smart house project.

Before diving deep into the implementation details, I should mention that we used an incremental approach to prioritize our requirements and manage the risk of going over budget during implementation. This means that although we planned for the full set of requirements from the outset, the implementation was carefully structured to allow partial, incremental delivery. This approach allowed us to defund the project at any stage, should other investments or plans need to take priority, without losing all the value intended from this investment.

The highest priority was given to the security requirements, and the following is how we implemented them.


Continuous Surveillance with AI & Amazon Rekognition:

  1. IoT Cameras: We installed high-definition cameras throughout the home, both indoors and outdoors. These cameras continuously stream video feeds to a central processing hub. I'll explain in a bit how we transfer the data from there to the cloud for processing.
  2. Amazon Rekognition: This AI service analyzes the video feeds for anomalies. We defined anomaly parameters such as people entering restricted areas, unusual movement patterns, or objects appearing/disappearing unexpectedly.
  3. Alerts and Actions: When Rekognition detects an anomaly, it sends a push notification to an AWS SNS topic through Rekognition Streaming Events. Depending on the severity of the anomaly, various procedures subscribed to this topic react to it. In our case, we defined three levels of severity. Level 1: log the anomaly for possible future analysis, with no other immediate action. Level 2: log the anomaly and also send an alert message to the owner's phone. Level 3: log the anomaly, alert the owner's phone, and also activate the lockdown procedure (explained below; a sketch of this routing follows the list).
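
To make the routing concrete, here is a minimal sketch of the Lambda function we subscribe to that SNS topic. The anomaly-to-severity mapping, the message fields, and the topic names are illustrative assumptions rather than the exact values from our deployment:

```python
"""Minimal sketch of the SNS-subscribed Lambda that routes anomalies by severity.
The anomaly categories, severity mapping, and topic names are illustrative."""
import json
import boto3

iot = boto3.client("iot-data")
sns = boto3.client("sns")

# Assumed mapping from our anomaly categories to the three severity levels.
SEVERITY = {
    "unexpected_object": 1,
    "unusual_movement": 2,
    "restricted_area_entry": 3,
}

OWNER_ALERT_TOPIC_ARN = "arn:aws:sns:eu-west-1:123456789012:owner-alerts"  # placeholder

def handler(event, context):
    for record in event["Records"]:
        message = json.loads(record["Sns"]["Message"])
        anomaly = message.get("anomalyType", "unexpected_object")  # assumed field name
        level = SEVERITY.get(anomaly, 1)

        # Level 1+: log the anomaly for later analysis (ends up in CloudWatch Logs).
        print(json.dumps({"severity": level, "anomaly": message}))

        if level >= 2:
            # Level 2+: push an alert to the owner's phone via a second SNS topic.
            sns.publish(TopicArn=OWNER_ALERT_TOPIC_ARN,
                        Message=f"Severity {level} anomaly detected: {anomaly}")

        if level >= 3:
            # Level 3: publish a lockdown command over MQTT via AWS IoT Core.
            iot.publish(topic="home/procedures/lockdown", qos=1,
                        payload=json.dumps({"reason": anomaly}))
```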

Geofencing with AWS Location Services:

  1. Occupant Smartphones: There are a number of good geofencing apps already available that could be used for this purpose. However, we decided to add this functionality to our own custom smart home management app. Either way, the functionality remains much the same, with minor differences. Our app provides three modes. The GPS mode uses GPS location data to detect when an occupant enters or leaves the house perimeter, while the WiFi mode uses WiFi connect and disconnect events for the same purpose. The third mode combines both, prioritizing the WiFi connection when the phone's WiFi is on and otherwise relying on GPS data. It's also worth mentioning that occupants can use a manually entered password at the entrance to explicitly sign in or out of the house. In that case, a notification is sent to the occupant's phone and the geofencing feature is disabled until they manually re-enable it. This safeguards against difficulties caused by a phone or network malfunction.
  2. AWS Location Service: We used AWS Location Service with our mobile app to define a geofence around our property for the GPS mode.
  3. Automated Lockdown/Welcome: When an occupant's phone enters the geofence (approaching home), the system initiates the "welcome procedure" - unlocking doors, adjusting lights, adjusting the room temperature, etc. (a pre-programmed routine). Conversely, if no occupant's phone is within the geofence for a configurable amount of time, the lockdown procedure is initiated - shutting and locking doors and windows, stopping the flow of gas and water through the pipes, disconnecting non-continuous electricity nodes, etc. (the geofence event handling is sketched right after this list).
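
For the GPS mode, Amazon Location Service can emit geofence ENTER/EXIT events to EventBridge, and a small handler decides whether to start the welcome routine or the vacancy countdown. The sketch below assumes hypothetical topic names, and the event field names are taken from the documented event shape but should be treated as assumptions:

```python
"""Rough sketch of the Lambda that reacts to Amazon Location Service geofence
events delivered through EventBridge. Field names and topics are assumptions."""
import json
import boto3

iot = boto3.client("iot-data")

def handler(event, context):
    detail = event.get("detail", {})
    event_type = detail.get("EventType")   # "ENTER" or "EXIT" (assumed field name)
    device_id = detail.get("DeviceId")

    if event_type == "ENTER":
        # An occupant's phone entered the geofence: run the welcome routine.
        iot.publish(topic="home/procedures/welcome", qos=1,
                    payload=json.dumps({"device": device_id}))
    elif event_type == "EXIT":
        # Start (or reset) the configurable vacancy timer; lockdown is only
        # triggered once no occupant device remains inside the geofence.
        iot.publish(topic="home/procedures/vacancy-check", qos=1,
                    payload=json.dumps({"device": device_id}))
```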

Motorized Doors/Windows with AWS IoT Core:

  1. Smart Door/Window Locks & Motors: We installed smart locks and motorized mechanisms for windows and doors that can be controlled remotely.
  2. AWS IoT Core: We then set up AWS IoT Core, a managed service that securely connects IoT devices to the cloud, and integrated our smart locks and window motors with it.
  3. Lockdown Procedure: When a lockdown is triggered (by anomaly detection or geofence vacancy), AWS IoT Core sends commands to lock all doors and shut all windows (see the sketch below).
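
As an illustration, the lockdown fan-out can be expressed as a set of desired-state updates on the device shadows in AWS IoT Core. The thing names and shadow fields below are placeholders; each device's firmware defines its own schema:

```python
"""Illustrative sketch of the lockdown fan-out via AWS IoT device shadows.
Thing names and shadow document fields are placeholders."""
import json
import boto3

iot = boto3.client("iot-data")

LOCKDOWN_TARGETS = {
    "front-door-lock":   {"locked": True},
    "garage-door-motor": {"position": "closed"},
    "kitchen-window":    {"position": "closed"},
    "gas-main-valve":    {"open": False},
    "water-main-valve":  {"open": False},
}

def trigger_lockdown():
    for thing_name, desired in LOCKDOWN_TARGETS.items():
        # Each device listens on its shadow delta topic and actuates accordingly.
        iot.update_thing_shadow(
            thingName=thing_name,
            payload=json.dumps({"state": {"desired": desired}}),
        )

if __name__ == "__main__":
    trigger_lockdown()
```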

Intruder Detection with Guard Dogs:

Disclaimer: This solution component might raise legal and ethical concerns in your area. In some jurisdictions owning and utilizing guard dogs for security purposes is highly regulated and requires proper training and handling. It's best to consult with local regulations and professional security providers before implementing this.

We used the level 3 alarm triggered by Amazon Rekognition, combined with explicit authorization from an owner through the mobile application, to initiate our most robust threat response procedure: unleashing the guard dogs and playing the audio with which the dogs are trained to engage intruders and ward them off the property.
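
A simplified sketch of that two-step gate is shown below: a level 3 alarm alone is not enough, and the response command is only published once the owner has approved it from the app. The table, attribute, and topic names are hypothetical:

```python
"""Simplified sketch of the two-step gate for the most severe response.
Table, attribute, and topic names are hypothetical placeholders."""
import json
import boto3

dynamodb = boto3.resource("dynamodb")
iot = boto3.client("iot-data")
approvals = dynamodb.Table("ThreatResponseApprovals")  # written to by the mobile app

def handle_level3_alarm(alarm_id: str):
    item = approvals.get_item(Key={"alarm_id": alarm_id}).get("Item")
    if item and item.get("owner_approved"):
        # Only with explicit owner approval do we release the dogs
        # and play the deterrent audio.
        iot.publish(topic="home/procedures/threat-response", qos=1,
                    payload=json.dumps({"alarm_id": alarm_id}))
```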


Going back to how we process the camera feeds and send them to the cloud, we generally had two options:

  1. Directly sending data from each camera to the cloud.
  2. Using a central hub as interim stage between camera devices and cloud services.

Many camera devices come with their own proprietary solutions for processing the camera feed, but I wanted a solution that could work with any camera. Furthermore, I wanted to handle security centrally and avoid using long-lived keys.

Therefore, we decided to connect all camera feeds to our central hub, which acts as a gateway and forwards the video streams to Kinesis Video Streams using the KVS producer library.

This central hub is itself connected to AWS IoT Core for access management and receives temporary credentials that allow access to the KVS producer's PutMedia action (the credential exchange is sketched below).
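
The credential exchange follows the AWS IoT Core credentials provider flow: the hub presents its X.509 device certificate over mutual TLS and receives short-lived credentials scoped by a role alias. Here is a rough sketch, with placeholder endpoint, role alias, and file paths:

```python
"""Sketch of the hub exchanging its X.509 certificate for temporary AWS
credentials via the AWS IoT Core credentials provider, before starting the
KVS producer. Endpoint, role alias, and file paths are placeholders."""
import requests

# Obtained once via: aws iot describe-endpoint --endpoint-type iot:CredentialProvider
CREDENTIALS_ENDPOINT = "xxxxxxxxxxxxxx.credentials.iot.eu-west-1.amazonaws.com"
ROLE_ALIAS = "kvs-camera-hub-role-alias"   # role alias limited to KVS producer actions
THING_NAME = "central-hub"

def fetch_temporary_credentials():
    url = f"https://{CREDENTIALS_ENDPOINT}/role-aliases/{ROLE_ALIAS}/credentials"
    response = requests.get(
        url,
        headers={"x-amzn-iot-thingname": THING_NAME},
        cert=("/etc/hub/device.pem.crt", "/etc/hub/private.pem.key"),  # mutual TLS
        verify="/etc/hub/AmazonRootCA1.pem",
    )
    response.raise_for_status()
    creds = response.json()["credentials"]
    # These short-lived credentials are then handed to the KVS producer
    # (e.g. as AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_SESSION_TOKEN).
    return creds["accessKeyId"], creds["secretAccessKey"], creds["sessionToken"]
```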

This solution worked perfectly! But we also recognized that the central hub constitutes a single point of failure, which is a significant security risk.

To mitigate that risk, we allowed the user to assign priority labels to the security cameras. If a failure is detected on the on-premises central hub, an EC2 instance automatically starts in the cloud, acting as a failover substitute for the main hub. The EC2 instance sits inside a private subnet, with a Security Group attached that blocks any outside access. Camera feeds are then forwarded to this EC2 instance through a Network Load Balancer that only accepts connections from our predefined IP address and port number and relays them to the back-end EC2 instance, which is configured with the same capabilities as the central hub. We block any other connections to the EC2 instance, including shell connections. When needed, shell access is still possible through AWS SSM Session Manager.

The difference here is that the raw feeds from the cameras might be larger than what is normally sent to the KVS endpoint, so we need to implement QoS (Quality of Service) at the network level to prioritize the camera feeds with higher priority labels. This way, the security system remains functional if the on-premises central hub fails (a simplified illustration follows).
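
As a simplified illustration of that prioritization, the failover path can forward higher-priority feeds first until the available upstream bandwidth is exhausted. The data structures and numbers below are made up for the example:

```python
"""Simplified illustration of priority-based feed selection on the failover
path. The feed list and bandwidth figures are hypothetical."""
from dataclasses import dataclass

@dataclass
class CameraFeed:
    name: str
    priority: int          # user-assigned label; higher means more critical
    bitrate_kbps: int      # estimated upstream cost of this feed

def select_feeds(feeds, available_kbps):
    """Forward higher-priority feeds first until the bandwidth budget is spent."""
    selected, budget = [], available_kbps
    for feed in sorted(feeds, key=lambda f: f.priority, reverse=True):
        if feed.bitrate_kbps <= budget:
            selected.append(feed)
            budget -= feed.bitrate_kbps
    return selected

feeds = [
    CameraFeed("front-door", priority=3, bitrate_kbps=4000),
    CameraFeed("back-yard",  priority=2, bitrate_kbps=4000),
    CameraFeed("hallway",    priority=1, bitrate_kbps=2000),
]
print([f.name for f in select_feeds(feeds, available_kbps=6000)])  # ['front-door', 'hallway']
```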

Cost vs. Security Trade-offs

We initially considered using an EC2 instance as the primary central hub, which would have simplified the setup compared to maintaining an on-premises client requiring OS hardening and regular updates. However, sending raw video feeds to the cloud from the EC2 instance would have been cost-prohibitive due to bandwidth consumption.

To achieve a balance between cost and functionality, we decided to use the EC2 instance as a failover for the on-premises client. This client can reduce video resolution and frame rates based on real-time needs, significantly reducing bandwidth usage when compared to sending raw feeds. Additionally, this approach allows for future upgrades with edge processing capabilities on the client itself, further minimizing latency and internet bandwidth consumption.
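
A rough sketch of that bandwidth-saving step on the on-premises client is shown below: frames are downscaled and dropped before being handed to the KVS producer. The camera URL handling and the producer hook are placeholders, not our actual pipeline:

```python
"""Rough sketch of resolution and frame-rate reduction on the on-premises
client, using OpenCV. The producer hook is a placeholder."""
import cv2  # opencv-python

TARGET_WIDTH, TARGET_HEIGHT = 1280, 720
KEEP_EVERY_NTH_FRAME = 2          # halve the frame rate

def send_to_kvs_producer(frame):
    """Placeholder for handing the reduced frame to the KVS producer pipeline."""
    pass

def stream_reduced(camera_url: str):
    capture = cv2.VideoCapture(camera_url)
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        frame_index += 1
        if frame_index % KEEP_EVERY_NTH_FRAME:
            continue                               # drop frame to reduce frame rate
        reduced = cv2.resize(frame, (TARGET_WIDTH, TARGET_HEIGHT))
        send_to_kvs_producer(reduced)
```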

Data Privacy

Video data from KVS is fed directly to Amazon Rekognition for processing. Data transfer from the cameras to the central hub, and from the central hub to KVS, is encrypted in transit.

In case of anomaly detection, a copy of the video feed is stored on S3 for future forensic review. How long these copies are kept for each anomaly level (1, 2, and 3) is configurable by the user. Nonetheless, we also enforce an S3 lifecycle rule that automatically deletes objects older than 3 months. The user can also define how long before and after an anomaly should be recorded to S3.

All the data stored on S3 is encrypted at rest using AWS KMS keys.
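
For reference, the retention and encryption settings described above roughly correspond to the following bucket configuration. The bucket name and KMS key alias are placeholders:

```python
"""Sketch of the bucket configuration behind the retention and encryption
rules described above. Bucket and KMS key identifiers are placeholders."""
import boto3

s3 = boto3.client("s3")
BUCKET = "smart-house-anomaly-recordings"   # placeholder bucket name

# Expire all stored clips 90 days (~3 months) after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-anomaly-clips",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Expiration": {"Days": 90},
        }]
    },
)

# Encrypt everything at rest with a customer-managed KMS key by default.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/smart-house-recordings",  # placeholder alias
            }
        }]
    },
)
```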

Scalability

While this solution is specifically designed for a single house, it can be easily expanded to cover much larger properties, such as industrial complexes or multi-unit buildings. The system allows us to define separate zones for each unit or section of a complex. Cameras and other devices (smart gas and water regulators, electricity switches, etc.) can be tagged to fall under their own designated zones. Each zone can then have its own lock-down and welcome procedures defined, catering to its specific requirements.

Importantly, even with zone-based configurations, core functionalities like video processing and anomaly detection remain centralized. This ensures consistent security measures and simplifies overall management. For very large deployments, data storage strategies might require adjustments, such as partitioning data based on zones.
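
One way to express the zone model is to tag each IoT thing with a zone attribute and fan procedures out per zone. The sketch below uses example thing, zone, and topic names:

```python
"""Illustrative sketch of zone-based grouping: each thing carries a "zone"
attribute, and zone-level procedures target every thing in that zone."""
import json
import boto3

iot = boto3.client("iot")
iot_data = boto3.client("iot-data")

def assign_zone(thing_name: str, zone: str):
    # Tag the device with its zone so procedures can be scoped per zone.
    iot.update_thing(
        thingName=thing_name,
        attributePayload={"attributes": {"zone": zone}, "merge": True},
    )

def lockdown_zone(zone: str):
    # Fan the lockdown command out only to devices tagged with this zone.
    things = iot.list_things(attributeName="zone", attributeValue=zone)["things"]
    for thing in things:
        iot_data.publish(topic=f"home/{zone}/{thing['thingName']}/lockdown",
                         qos=1, payload=json.dumps({"action": "lockdown"}))

assign_zone("unit-3-front-door-lock", "unit-3")
lockdown_zone("unit-3")
```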

Future-proofing

Our design choices were made with a focus on future-proofing the system. We leverage industry standards (e.g., MQTT) for IoT messaging, which avoids vendor lock-in and simplifies integration with future devices. Additionally, our central hub preprocesses video feeds before sending them to Kinesis Video Streams (KVS) using the KVS producer library. This approach allows for flexibility in future upgrades, as the central hub can be modified without impacting the core functionality of video processing and storage.
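
Because MQTT is a standard protocol, any device-side component can subscribe to our event topics with an off-the-shelf client. Below is a minimal sketch using the paho-mqtt client (1.x-style constructor) against the AWS IoT Core endpoint; the endpoint and certificate paths are placeholders:

```python
"""Minimal sketch of a device-side component subscribing to the event bus over
MQTT via paho-mqtt. Endpoint and certificate paths are placeholders."""
import json
import paho.mqtt.client as mqtt

IOT_ENDPOINT = "xxxxxxxxxxxxxx-ats.iot.eu-west-1.amazonaws.com"  # placeholder

def on_message(client, userdata, message):
    command = json.loads(message.payload)
    # Each component only cares about its own topics and acts independently.
    print(f"{message.topic}: {command}")

client = mqtt.Client(client_id="front-door-lock")   # paho-mqtt 1.x constructor style
client.tls_set(ca_certs="AmazonRootCA1.pem",
               certfile="device.pem.crt", keyfile="private.pem.key")
client.on_message = on_message
client.connect(IOT_ENDPOINT, port=8883)
client.subscribe("home/procedures/#")   # react to lockdown/welcome events
client.loop_forever()
```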

We've also implemented an asynchronous, decentralized, and event-based architecture. This means different parts of the system operate independently and communicate through events. This approach offers several advantages:

  • Modular Upgrades: Individual components can be upgraded or changed without affecting other parts of the system, minimizing downtime and maintenance complexity.
  • Scalability: The architecture can easily expand to accommodate new devices and functionalities as our smart home needs evolve.
  • Improved Responsiveness: The asynchronous nature allows different components to process information independently, improving overall system responsiveness.


Conclusion

That's how we implemented the security requirements for our smart house. We hope this explanation provided valuable insights into the design considerations and trade-offs involved. In the comments below, share what security features you prioritize most in a smart home.

Stay tuned for the next article, where we'll delve into how we implemented our safety requirements, including fire management, gas and water leak management for a truly safe and smart home environment.
