Azure IoT Edge Image Processing–3–Add the Image Processor Module

In the previous posts, we created a new IoT Edge project and a module to load the Custom Vision container. Now we will add a new module to the project. This module runs the main logic of our solution: it integrates with the UniFi camera, captures an image, passes that image to the image classifier and then, based on the outcome, publishes MQTT messages on a topic.

Create a new module as below.

clip_image002

Select the existing deployment file.

clip_image003

Select a Python module

clip_image005

Call the module AlfrescoImageProcessor

clip_image007

Provide the docker registry information

clip_image008

Press enter.

Once the module is created, you will notice that there are two folders under the modules/ folder.

clip_image009

Open main.py under the modules/alfrescoImageProcessor folder.

Replace the code with the following.

import time
import sys
import os
import requests
import json
import shutil
from azure.iot.device import IoTHubModuleClient, Message
import paho.mqtt.client as mqtt

# Global counter
SENT_IMAGES = 0

# Global IoT Hub module client
CLIENT = None


# Send a message to IoT Hub.
# Route output1 to $upstream in deployment.template.json.
def send_to_hub(strMessage):
    global SENT_IMAGES
    message = Message(bytearray(strMessage, 'utf-8'))
    CLIENT.send_message_to_output(message, "output1")
    SENT_IMAGES += 1
    print("Total images sent: {}".format(SENT_IMAGES))


# Find the probability for a given tag in the prediction JSON.
def findprobability(attributeName, jsonObject):
    for entry in jsonObject["predictions"]:
        if attributeName == entry['tagName']:
            return entry['probability']
    return 0


# Send an image to the image classifying server.
# Return the JSON response from the server with the prediction result.
def sendFrameForProcessing(imagePath, imageProcessingEndpoint):
    headers = {'Content-Type': 'application/octet-stream'}
    with open(imagePath, mode="rb") as test_image:
        try:
            response = requests.post(imageProcessingEndpoint, headers=headers, data=test_image)
            print("Response from classification service: (" + str(response.status_code) + ") " + json.dumps(response.json()) + "\n")
        except Exception as e:
            print(e)
            print("No response from classification service")
        return json.dumps(response.json())


def main(imagePath, imageProcessingEndpoint):
    try:
        print("Simulated camera module for Azure IoT Edge. Press Ctrl-C to exit.")
        try:
            global CLIENT
            CLIENT = IoTHubModuleClient.create_from_edge_environment()
        except Exception as iothub_error:
            print("Unexpected error {} from IoTHub".format(iothub_error))
            return

        print("The sample is now sending images for processing and will do so indefinitely.")

        while True:
            # Get an image from Home Assistant
            url = CAMERA_CAPTURE_URL
            response = requests.get(url, stream=True)
            with open(imagePath, 'wb') as out_file:
                shutil.copyfileobj(response.raw, out_file)
            del response

            # Process the image
            classification = sendFrameForProcessing(imagePath, imageProcessingEndpoint)

            # Find the probability of the "active" tag
            probability = findprobability("active", json.loads(classification))

            # Update the MQTT sensor
            client = mqtt.Client()
            client.username_pw_set(MQTTUSER, MQTTPASSWORD)
            client.connect(MQTTBROKER, 1883, 60)
            if float(probability) > float(PROBABILITY_THRESHOLD):
                client.publish("home/alfresco/image_processing_sensor/state", "on")
            else:
                client.publish("home/alfresco/image_processing_sensor/state", "off")
            client.disconnect()

            # Send the classification result to IoT Hub
            send_to_hub(classification)

            time.sleep(15)

    except KeyboardInterrupt:
        print("IoT Edge module sample stopped")


if __name__ == '__main__':
    try:
        # Retrieve the image location and image classifying server endpoint from the container environment
        IMAGE_PATH = os.getenv('IMAGE_PATH', "")
        IMAGE_PROCESSING_ENDPOINT = os.getenv('IMAGE_PROCESSING_ENDPOINT', "")
        PROBABILITY_THRESHOLD = os.getenv('PROBABILITY_THRESHOLD', "")
        CAMERA_CAPTURE_URL = os.getenv('CAMERA_CAPTURE_URL', "")
        MQTTBROKER = os.getenv('MQTTBROKER', "")
        MQTTUSER = os.getenv('MQTTUSER', "")
        MQTTPASSWORD = os.getenv('MQTTPASSWORD', "")
    except ValueError as error:
        print(error)
        sys.exit(1)

    if IMAGE_PATH != "" and IMAGE_PROCESSING_ENDPOINT != "":
        main(IMAGE_PATH, IMAGE_PROCESSING_ENDPOINT)
    else:
        print("Error: Image path or image-processing endpoint missing")

Now open the .env file and add some more credentials required for the second module.

image
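For reference, the extra .env entries might look something like the sketch below. The variable names here are assumed to match the placeholders used in the createOptions further down; the values are examples only and should be replaced with your own camera snapshot URL and MQTT broker details.

camera_capture_url=<camera snapshot URL from Home Assistant>
mqttbroker=<mqtt broker IP or hostname>
mqttuser=<mqtt username>
mqttpassword=<mqtt password>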

Save the document.

Open the deployment.template.json

Navigate to the modulesContent > $edgeAgent > properties.desired > modules branch.

Remove the module SimulatedTemperatureSensor

image

Then update the createOptions in the AlfrescoImageProcessor module as below.

"createOptions": "{\"Env\":[\"IMAGE_PATH=alfresco_image1.jpg\",\"IMAGE_PROCESSING_ENDPOINT=http://alfrescoClassifier/image\",\"PROBABILITY_THRESHOLD=0.6\",\"CAMERA_CAPTURE_URL=$camera_capture_url\",\"MQTTBROKER=$mqttbroker\",\"MQTTUSER=$mqttuser\",\"MQTTPASSWORD=$mqttpassword\"]}"

That’s it. Now it’s time to build and deploy the modules.

image
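If you're following along, the usual flow with the Azure IoT Tools extension is to right-click deployment.template.json and select "Build and Push IoT Edge Solution", which produces a platform-specific manifest under the config folder. You can then create the deployment from VS Code, or with something along these lines (hub name, device ID and the generated file name are placeholders for your own environment):

az iot edge set-modules --hub-name <your-iot-hub> --device-id <your-edge-device> --content ./config/deployment.arm32v7.json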

The full source of this solution is available on GitHub at the following location.

https://github.com/sameeraman/AlfrescoVisionwithAzureIoTEdge

Azure IoT Edge Image Processing–2–Create IoT Project and Load the Trained Module

In the previous post, we looked at how to custom train a vision module and download it. In this post, we will look at creating a Python module that will use the downloaded container to run on IoT Edge.

First, open Visual Studio Code and open the command palette by pressing Ctrl + Shift + P. Search for Azure IoT Edge and select "Azure IoT Edge: New IoT Edge Solution".

clip_image001

Note that I have the following Extensions installed in my VSCode.

  • Azure IoT Edge
  • Azure IoT Hub
  • Azure IoT Tools
  • Python

Then select the folder for the project to be created.

Give the solution a name.

clip_image002

Select the module template. I'll be writing Python modules, so I selected the Python Module.

clip_image003

Enter a Module name.

clip_image004

Enter the container repository name and the module name.

clip_image006

You will see the following files are generated.

clip_image007

Open the .env file and enter the container registry credentials.

clip_image009

On the bottom left side, select the correct target architecture for the project.

clip_image010

clip_image012

In this case, the correct architecture is ARM32v7, as I'm planning to run this on a Raspberry Pi 4.

Copy the downloaded module files (from the previous post) to the project module folder: ~AlfrescoVisionEdgeSolution\modules\alfrescoClassifier

clip_image013

Following are the new files copied to the folder.

clip_image014

Now open the module.json file in the alfrescoClassifier module and update the Docker image file for the ARM32v7 architecture as below. This tells the module definition to use the Dockerfile we copied, which is the one we custom trained.

clip_image016
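For reference, the relevant part of module.json might look roughly like the sketch below, with the arm32v7 platform pointing at the Dockerfile that came with the exported Custom Vision container. The registry name, file names and version are examples, not the exact values from my project.

{
  "$schema-version": "0.0.1",
  "image": {
    "repository": "<your-registry>.azurecr.io/alfrescoclassifier",
    "tag": {
      "version": "0.0.1",
      "platforms": {
        "arm32v7": "./Dockerfile"
      }
    },
    "buildOptions": []
  },
  "language": "python"
}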

Save and close the file.

In this post, we created a new IoT Edge project in VS Code and loaded the module we trained into the project. In the next post, we will create another module in the same project to complete the integration.

Azure IoT Edge Image Processing– 1 -Training the CustomVision Module

In this blog post, we will be looking at how I trained the Custom Vision module. This is the first part of a series of blog posts where I describe how I used a custom-trained vision module in my home automation project. For the full story, please read the project summary blog.

For the training of the module I used the Microsoft-provided Custom Vision site, https://www.customvision.ai. This site allows you to upload your own images and train modules. Once training is complete, it allows you to download the trained module as a Docker container.

Navigate to the URL and sign in with your account.

In the project section click on new project.

clip_image001

Create a new project with the following details.

clip_image002

Now it’s time to add the images.

clip_image003

Click on Add images at the top left.

In this case, I had the alfresco photos pre-organised. Basically, I extracted a lot of images from my recordings, both when the area is clear and when the area is active: around 50 photos in each category. I put them in two folders to make them easier to upload.

clip_image005

clip_image007

Open one folder at a time and upload all the images.

clip_image008

Do the same for the idle photos. Tag them as negative.

clip_image009

Now we have uploaded the images to the Custom Vision tool. Let's go and train the module.

clip_image010

At the top right, click the Train button.

If you are just testing, you can do a quick training. But if you plan to use the module in the long run, it's best to do an advanced training.

clip_image011

In this case I’m doing a Quick train.

Once the training is complete, you will see the results below.

clip_image013

Then click on the export button at the top.

Select Dockerfile.

clip_image014

Select the target architecture. In my case, it’s ARM.

clip_image015

Export and download the module.

You will use this module in the next posts.

Azure IoT Edge Image Processing for Home Automation

In this post, I'll describe how I did image processing at the edge to support my home automation project. In this exercise, I used Azure IoT Edge to run image processing at the edge to detect whether there is any presence in my alfresco. The reason I wanted to do this is that motion sensors go idle when someone is sitting still in the alfresco. I have rules configured in my home automation to play music in the alfresco when there is motion, and to stop the music when there is no motion. It's quite common for someone to sit down and chill in the alfresco, and this logic doesn't work well when people are sitting without any motion. I want the music to continue playing when there is presence, even without motion. Image processing helps me achieve this.

I had previously used image processing with object detection modules in Home Assistant. It worked "OK"; however, the accuracy of the module wasn't that great. Therefore, I recently experimented with Azure IoT Edge and a custom-trained vision module. The beauty of this is that you train the module by telling it the state of each image. Once you train the module enough, Azure Machine Learning does an amazing job of predicting the state of new images presented to it. I put this method to the test and realised that the accuracy went through the roof. So, I thought of sharing the story so that you can try it yourself as well.

Note that, you will need the following if you want to try this out on your own.

  • Azure Subscription
  • Home Assistant Setup
  • A camera for Image feed.
  • Raspberry Pi 4 or Similar

I'm planning to write a detailed blog post for each item's configuration. This post covers a high-level architecture overview and the results.

Following is the architecture I used.

clip_image002

I had a developer machine running on a server at my home; this is where I did my development. Because it needed a bunch of development tools, I decided not to use my laptop for the development.

First, I created the custom vision module using https://www.customvision.ai/. I collected a lot of images from my recordings and uploaded them. Then I tagged them accordingly.

I downloaded the classification module. For the model, I selected a general, single-tag classification. This way the decision making is much easier for the module and the accuracy is much higher.

Using this module, I developed my IoT Edge modules. The IoT Edge modules talk to the video camera to get an image, feed it into the classifier module, get the results, evaluate them and update the Home Assistant sensor accordingly. All module communication and the camera communication happen over HTTP. The sensor updates to Home Assistant occur over MQTT.

Following are some test results. At the top left, I have the Home Assistant sensors indicated on my floor plan. The icon that looks like an "eye" is the image processing sensor; if it detects presence, it turns yellow, and if nothing is detected it stays grey. At the top right is the live camera feed of the alfresco. As you can see, there is no presence right now and the sensor is grey. There's no motion detected in the alfresco and hence, there's no music.

clip_image004

At the bottom left, you can see the results output from the classifier. There's only a 0.1 probability on the "active" tag. The active tag is the same as presence detection in the alfresco; I simply named it "active" when training my module. At the bottom right is a timer to help you understand the timing.

In the following image, I walked into the alfresco, sat down and chilled. In the top right camera view, you can see me sitting. At the top left, you can see that the motion sensor is active as I have just walked in. You can also see that the IoT Edge image processing module has already detected presence in the alfresco: the probability has gone to 0.99 and hence the sensor is triggered active.

clip_image006

After another 1.5 minutes, you can see in the following picture that the motion sensor in the alfresco has gone inactive. This is because I'm sitting and chilling and there is no motion. Previously, this would have stopped the music. Now, because I have the image processing sensor, I have a condition on that automation not to execute when the image processing sensor is active. Therefore, the whole system is working as I expected.

clip_image008

Now that I had achieved what I wanted, I walked out of the alfresco. As you can see in the image below, the image processing IoT Edge module has already detected that I'm not there. The motion sensor is still active as I have just walked out. The motion will time out soon and the music will stop this time, as the image processing sensor is inactive. You can see at the bottom left that the probability has gone back down to 0.19.

clip_image010

I used a probability of 0.60 as the cut-off mark to detect presence.

As you can see above, it's an interesting little project. This blog post explains the full project at a high level; I'll explain how each component was done in detail in future posts. IoT Edge is a powerful service that can do powerful things in the real world.

Azure Bastion Service–End User Experience

In this blog post we will be discussing the end user experience for the Azure Bastion Service.

In the previous post, we provisioned an Azure Bastion Service in our VNET. The environment looks as below now.

vnet

The key thing in this architecture is that all inbound traffic to the network is HTTPS. Using RDP over the internet is not secure; by eliminating RDP on the external network, we secure the way end users connect to the server.

The Bastion resource group looks as below.

clip_image001

There are two resources in the resource group: one resource for Bastion and a public IP.

Now let’s have a look at how we can connect to a VM in the network.

Click on a VM in the same VNET where the Azure Bastion Service was provisioned.

clip_image001[7]

Then on the right hand side, select Bastion.

clip_image001[9]

Then enter the local admin username and the password.

You might need to allow pop-ups for the Azure Portal and Bastion here. Once allowed, you will see a new pop-up window open with Bastion.

image

The experience of logging in to the server is really easy. At the moment it's limited to the Azure portal, but Microsoft has mentioned that they will provide direct RDP via Bastion using the native RDP client.

Azure Bastion Service – Provisioning

In this blog post I’m going to be discussing the Azure Bastion Service.

What is the Azure Bastion Service?

Azure Bastion Service provides a secure way to remote in to your Azure VMs. It provisions an Azure Bastion host in the customer's VNET, which provides secure, seamless RDP and SSH access to your virtual machines in Azure without opening a single port to the public internet. It's a key new way to protect your VMs in the cloud.

Creating jump boxes or exposing servers to the internet is regarded as one of the worst things to do in the cloud. The threat landscape has grown tremendously, and we need to take every precaution to keep our workloads as secure as possible. Therefore, we need to limit the ports we expose and do it properly.

Azure Bastion is a fully managed service that does the groundwork to enable remote access to VMs in a much more secure way. Under the bonnet, it's a VM scale set that can scale up and down based on the number of sessions.

Let's go and have a look at the provisioning experience for the Azure Bastion Service.

Service Provisioning Experience

First, you will need to create a subnet in your VNET to place the Bastion host. It will be a fully managed subnet, like the App Gateway subnet, and it requires a /27 size at minimum. The Azure Bastion service will place a Bastion scale set in this subnet and Azure will manage it.

My current VNET looks as below.

 vnet before

Therefore, I will need to add a new subnet "AzureBastionSubnet" to this VNET.

clip_image001[6]
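If you prefer the CLI, creating this subnet would look something like the following. The VNET name, resource group and address prefix are placeholders for your own environment; the subnet name, however, must be exactly AzureBastionSubnet.

az network vnet subnet create --resource-group <rg-name> --vnet-name <vnet-name> --name AzureBastionSubnet --address-prefixes 10.0.10.0/27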

This subnet is a prerequisite for creating an Azure Bastion resource. Now let's go ahead and create one.

Log in to the Azure Portal and select Create a resource.

Then type “Azure Bastion”

clip_image001

After that, populate the required details: create a resource group, give the resource a name, select the subnet, and provide a name for the public IP.

clip_image001[8]

Then click next and finish to provision the resource.

clip_image001[10]

Once provisioned, you will see two resources created: the Bastion resource and the public IP.

clip_image001[12]

As you can see, the Azure Bastion Service creation process is very simple.

Next we will look at how to use the Bastion Service and the end user experience.

Visual Studio Online – Hands-on first look

In this blog post I'm going to walk you through the hands-on experience of creating a Visual Studio Online instance. This week at Ignite 2019, Microsoft announced this service, which is a game changer for developers. With it, developers can quickly spin up fully configured development environments and work in the web-based VS Code editor to be more productive than ever before. Developers can access the development environment from anywhere in the world with just a browser and start working on it. Development has become more collaborative and open source now, so developers need to switch between codebases and projects quickly without losing productivity.

OK, let's go and create an instance of Visual Studio Online and see the experience.

To create an instance of Visual Studio Online you will need an active Azure subscription. Then you will need to go to the following URL.

https://online.visualstudio.com/

image

Then click on get started.

On your first visit, it will ask for consent for the Visual Studio Services client to access your profile.

image

When you first land in the Visual Studio Online environment, there won't be any environments or plans created.

image

First, create a billing plan. When you click on Create environment, it will automatically create a billing plan if you haven't got one. The billing plan determines where the underlying hosting costs are charged; it will be an Azure subscription that you have access to. Select a subscription, location, resource group name, and a service name.

image

Once that is created, you will be able to create an environment. For the environment, you will enter an environment name, a Git repo name (if you have one), the instance type and the suspend idle timeout.

image

Once that is created, you will see your instance as below.

image

When you click on the environment, you will enter it and VS Code will launch in the browser.

image

As a developer, you can do a git clone and start working on it.

Now let's have a look at what the backend looks like. It creates a Visual Studio Online plan resource for billing purposes; that's where the cost is incurred in the subscription.

image

Azure Private Link vs Azure Service Endpoints

In this post, I'm going to discuss the differences between the new Azure Private Link service and Azure Service Endpoints. Both serve a similar use case, which is controlling access to Azure Platform as a Service (PaaS) services. However, they are totally different, so let's drill down into the details of the differences.

If you compare them side by side, the following is what you will see at a high level. The main difference between the two is that a service endpoint uses the public IP address of the PaaS service when accessing the service, whereas Private Link introduces a private IP for a given instance of the PaaS service and the service is accessed via that private IP. With this architecture, there are additional features that come with Private Link that can't be achieved via service endpoints. With Private Link, you can restrict access per instance, whereas with service endpoints you don't get that capability.

clip_image001

Further to those, the following is a comparison table that I have put together.

Azure Private Link | Azure Service Endpoints
Control access to PaaS services over a private network | Control access to PaaS services over the public internet
VNET to PaaS instance via the Microsoft backbone | VNET to PaaS service via the Microsoft backbone
PaaS resource mapped to a private IP address; NSGs are restricted to the VNet space | The destination is still a public IP address; the NSG needs to be opened, and service tags can help
In-built data exfiltration protection | Traffic needs to pass through an NVA/firewall for exfiltration protection
Easily extensible to on-prem network traffic via ExpressRoute or VPN | Restricting on-prem traffic is not straightforward

Azure Private Link in Action–Testing

In the previous posts (post 1, post 2) we discussed the setup of the environment to test Azure Private Link. Now it's time to test.

First, let's lock down public access. Remove all the public IP addresses from the SQL firewall.

image

image
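If you'd rather do this from the CLI than the portal, listing and deleting the firewall rules would look roughly like the following (the server, resource group and rule names are placeholders):

az sql server firewall-rule list --resource-group <rg-name> --server <sql-server-name>
az sql server firewall-rule delete --resource-group <rg-name> --server <sql-server-name> --name <rule-name>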

Log in to VM1 and check the DNS.
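The check itself is just a name lookup against the SQL server's FQDN, something like the following (the server name is a placeholder); if Private Link and the private DNS zone are working, you would expect it to resolve via the privatelink name to a private 10.x.x.x address rather than a public one.

nslookup <sql-server-name>.database.windows.net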

image

You can see that it’s resolving to the private IP address.

image

You can see that the client is connected to the DB successfully.

image

Jump into VM 2 in the Peered VNET and do the same tests. See the results below.

image

image

Jump into the on-prem server.

This is tricky because the on-prem server uses the local DNS, not Azure DNS. Therefore, we need to create the following host record in the hosts file.

clip_image001
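The entry itself is just the private endpoint IP followed by the privatelink name of the server, along these lines (both values are placeholders for your own environment):

10.1.0.5    <sql-server-name>.privatelink.database.windows.net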

Once that is created, you will need to connect using that DNS name.

As you can see below, the private link connection works and the public connection doesn't.

image

The public connection asks me to whitelist the public IP address.

image

To summarize, I have drawn the following diagram. It shows how new connections work.

image

As you can see, all connections to the SQL PaaS service are now private, and all public access is denied. This is valuable for securing applications and meeting company compliance requirements.

Azure Private Link in Action–Private DNS Zone Setup

In the previous post we looked at how to create the Azure Private Link. In order to work with Azure Private Link, you will need to set up an Azure Private DNS zone. This post walks you through the Azure Private DNS zone setup.

Now let’s go and create the Private DNS Zone in the Virtual Network. This will be required to direct the virtual machines in the VNet to the private link IP address rather than the public IP address.

Navigate to the Azure Portal and search for service “Private DNS Zones”

image

Click Add.

image

Select an appropriate resource group. In my case, I'm going to place it in the networking resource group.

For the instance name, type "privatelink.database.windows.net".

image

Review and create

image

Once it's successfully created, navigate to the resource.

Then select virtual network links. Click Add.

Add the Link to the Virtual Network.

image

image

Once it's created, add the second peered network in the same way.

image
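For reference, the same zone and virtual network links can also be created from the CLI along these lines (resource group, VNET and link names are placeholders):

az network private-dns zone create --resource-group <rg-name> --name privatelink.database.windows.net
az network private-dns link vnet create --resource-group <rg-name> --zone-name privatelink.database.windows.net --name <link-name> --virtual-network <vnet-name> --registration-enabled false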

Once that's done, add an A record pointing to the private link IP.

Go to overview and click + Record Set.

image

Enter the following details. The name should be the SQL server name.

The IP address will be the private link IP.

image
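The roughly equivalent CLI command for the A record would be the following (the resource group, record name and IP are placeholders; the record name is just the server name without the domain suffix):

az network private-dns record-set a add-record --resource-group <rg-name> --zone-name privatelink.database.windows.net --record-set-name <sql-server-name> --ipv4-address 10.0.1.5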

This concludes the Private DNS zone configuration. In the next blog posts we will carry out the testing of the private link and the private DNS.