How to stop Azure Application Gateway and Azure Firewall

Hi folks, summer is here and my holidays are very near, so I’m wrapping everything up to close my laptop and relax for a few weeks.

But before my well-deserved rest, I need to give you some FinOps advice:

If you’re like me and often make demo setups in your Azure subscription that involve resources like Azure Firewall and Application Gateway, you have probably realized that there is no easy way to gracefully shut down all those “hungry” resources to save some money.

To stop VMs, we can simply use the Azure Portal start/stop buttons, or use automation accounts, but the Azure Portal doesn’t allow you to stop an Application Gateway or an Azure Firewall. In such cases, Azure PowerShell helps:

# Get Azure Application Gateway
$appgw = Get-AzApplicationGateway -Name "appgw_name" -ResourceGroupName "rg_name"
 
# Stop the Azure Application Gateway
Stop-AzApplicationGateway -ApplicationGateway $appgw
 
# Start the Azure Application Gateway
Start-AzApplicationGateway -ApplicationGateway $appgw

After executing the stop, we can see that the Operational State changes after a minute or so:
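
A quick way to check it from PowerShell, reusing the example names from above:

# OperationalState should report "Stopped" once the operation completes
(Get-AzApplicationGateway -Name "appgw_name" -ResourceGroupName "rg_name").OperationalState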

And for the Azure Firewall we can use the following:

# Get the firewall and deallocate it; the configuration (including rules) is preserved
$firewall = Get-AzFirewall -ResourceGroupName rgName -Name azFw
$firewall.Deallocate()
$firewall | Set-AzFirewall

# To start it again, allocate the firewall back to its VNet and public IP
$vnet = Get-AzVirtualNetwork -ResourceGroupName rgName -Name anotherVNetName
$pip = Get-AzPublicIpAddress -ResourceGroupName rgName -Name publicIpName
$firewall.Allocate($vnet, $pip)
$firewall | Set-AzFirewall

Now you know how to save some money when using those resources, and I’m able to go on holiday and rest for a while.

Happy holidays!

Recommendations for deploying a Jump Host in Azure

You’re probably asking yourself: what is a jump host? In simple words, it is a virtual machine that is not the one you use daily to read e-mail, browse the web, or install software; it is used to perform administrative tasks on one or multiple IT infrastructures.

These are some of the recommendations that I follow when I need to deploy a jump host in Azure. The first two are the most important; make sure you do not do either of these:

  • Do NOT install any productivity tools such as Office; it’s important to keep the VM as clean as possible. It is only meant to be a jump host, not a working device.
  • Do NOT use this VM for general internet browsing purposes.

And some other recommendations…

  • Isolate the VM with an NSG; only allow access where it is really needed (see the sketch after this list)
  • Install the Antimalware extension from Azure and configure the Windows Defender settings
  • If possible, configure JIT (just-in-time) VM access on the VM
  • Onboard the device in Microsoft Defender for Endpoint (if possible)
  • Apply the Microsoft security baseline
  • Enable Windows Defender Network Protection and Exploit Guard
  • Enable virtualization-based security, if you deployed a Gen 2 VM
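
A minimal sketch of that first NSG recommendation, with hypothetical names and an example management range (adjust the source prefix to your own):

# Allow RDP to the jump host only from a known management range;
# everything else inbound stays blocked by the NSG default rules
$rule = New-AzNetworkSecurityRuleConfig -Name "Allow-RDP-Mgmt" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
    -SourceAddressPrefix "203.0.113.0/24" -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 3389

New-AzNetworkSecurityGroup -ResourceGroupName "rg_name" -Location "westeurope" `
    -Name "nsg-jumphost" -SecurityRules $rule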

That’s all. As always, these are my recommendations; you probably have different ones.

First Impressions about Azure SFTP

SSH File Transfer Protocol (SFTP) is a very common protocol used by many customers for secure file transfer over SSH. Microsoft did not have a fully managed SFTP service in Azure, but now it is possible with Azure Blob Storage.

So, you will be able to use an SFTP client to connect to that storage account, manage the objects inside, and even specify permissions for each user.

But before beginning, you will need to register the SFTP feature in your subscription. To do that, you have to type the following:

# Set the Azure context for the desired subscription
az account set --subscription "xxxx-xxxx-xxxx-xxxx"

# Check if the SFTP feature is already registered
az feature show --namespace Microsoft.Storage --name AllowSFTP

# Register the SFTP feature on your subscription
az feature register --namespace Microsoft.Storage --name AllowSFTP

You can also check that information in the Preview features option of the Azure Portal:

Once you have that, you will need to enable the hierarchical namespace on the storage account; note that you can’t enable it on an existing storage account…

[Before/after screenshots of the hierarchical namespace setting]

At the time of writing, I couldn’t create the SFTP service through the Azure Portal or even with a template when I selected West Europe as the destination; I’m sure it will be supported in the future.

Now we can deploy the ARM template to the RG previously created in Azure. But before doing that, you have to decide whether your user will connect with a password or an SSH key. In my case, I decided to implement it with an ARM template with an SSH key, but first you need to generate an SSH key pair:
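
A typical way to generate the pair, assuming an OpenSSH client is available (the file name is just an example):

# Creates sftpuser_key (private) and sftpuser_key.pub (the public key goes in the template)
ssh-keygen -t rsa -b 4096 -f sftpuser_key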

Next, I provide the two ARM templates, one for each type of implementation.

Template for Password Implementation:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountType": {
      "type": "string",
      "defaultValue": "Standard_LRS",
      "allowedValues": ["Standard_LRS", "Standard_ZRS"],
      "metadata": { "description": "Storage Account type" }
    },
    "location": {
      "type": "string",
      "defaultValue": "northeurope",
      "allowedValues": ["westeurope", "northcentralus", "eastus2", "eastus2euap", "centralus", "canadaeast", "canadacentral", "northeurope", "australiaeast", "switzerlandnorth", "germanywestcentral", "eastasia", "francecentral"],
      "metadata": { "description": "Region" }
    },
    "storageAccountName": {
      "type": "string",
      "metadata": { "description": "Storage Account Name" }
    },
    "userName": {
      "type": "string",
      "metadata": { "description": "Username of primary user" }
    },
    "homeDirectory": {
      "type": "string",
      "metadata": { "description": "Home directory of primary user. Should be a container." }
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-02-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[parameters('location')]",
      "sku": {
        "name": "[parameters('storageAccountType')]"
      },
      "kind": "StorageV2",
      "properties": {
          "isHnsEnabled": true,
          "isSftpEnabled": true
      },
      "resources": [
        {
          "type": "blobServices/containers",
          "apiVersion": "2021-02-01",
          "name": "[concat('default/', parameters('homeDirectory'))]",
          "dependsOn": ["[parameters('storageAccountName')]"],
          "properties": {
            "publicAccess": "None"
          }
        },
        {
          "type": "localUsers",
          "apiVersion": "2021-02-01",
          "name": "[parameters('userName')]",
          "properties": {
            "permissionScopes": [
                {
                  "permissions": "rcwdl",
                  "service": "blob",
                  "resourceName": "[parameters('homeDirectory')]"
                }
            ],
            "homeDirectory": "[parameters('homeDirectory')]",
            "hasSharedKey": false
          },
          "dependsOn": ["[parameters('storageAccountName')]"]
        }
      ]
    }
  ],
  "outputs": {
    "defaultContainer": {
      "type": "string",
      "value": "[parameters('homeDirectory')]"
    },
    "user": {
      "type": "object",
      "value": "[reference(
        resourceId('Microsoft.Storage/storageAccounts/localUsers', parameters('storageAccountName'), parameters('userName'))
      )]"
    }
  }
}

Template for SSH Implementation

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountType": {
      "type": "string",
      "defaultValue": "Standard_LRS",
      "allowedValues": ["Standard_LRS", "Standard_ZRS"],
      "metadata": { "description": "Storage Account type" }
    },
    "location": {
      "type": "string",
      "defaultValue": "northeurope",
      "allowedValues": ["westeurope", "northcentralus", "eastus2", "eastus2euap", "centralus", "canadaeast", "canadacentral", "northeurope", "australiaeast", "switzerlandnorth", "germanywestcentral", "eastasia", "francecentral"],
      "metadata": { "description": "Region" }
    },
    "storageAccountName": {
      "type": "string",
      "metadata": { "description": "Storage Account Name" }
    },
    "userName": {
      "type": "string",
      "metadata": { "description": "Username of primary user" }
    },
    "homeDirectory": {
      "type": "string",
      "metadata": { "description": "Home directory of primary user. Should be a container." }
    },
    "publicKey": {
      "type": "string",
      "metadata": { "description": "SSH Public Key for primary user." }
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2019-06-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[parameters('location')]",
      "sku": {
        "name": "[parameters('storageAccountType')]"
      },
      "kind": "StorageV2",
      "properties": {
          "isHnsEnabled": true,
          "isLocalUserEnabled": true,
          "isSftpEnabled": true
      },
      "resources": [
        {
          "type": "blobServices/containers",
          "apiVersion": "2019-06-01",
          "name": "[concat('default/', parameters('homeDirectory'))]",
          "dependsOn": ["[parameters('storageAccountName')]"],
          "properties": {
            "publicAccess": "None"
          }
        },
        {
          "type": "localUsers",
          "apiVersion": "2019-06-01",
          "name": "[parameters('userName')]",
          "properties": {
            "permissionScopes": [
                {
                  "permissions": "rcwdl",
                  "service": "blob",
                  "resourceName": "[parameters('homeDirectory')]"
                }
            ],
            "homeDirectory": "[parameters('homeDirectory')]",
            "sshAuthorizedKeys": [
              {
                "description": "localuser public key",
                "key": "[parameters('publicKey')]"
              }
            ],
            "hasSharedKey": false
          },
          "dependsOn": ["[parameters('storageAccountName')]"]
        }
      ]
    }
  ],
  "outputs": {
    "defaultContainer": {
      "type": "string",
      "value": "[parameters('homeDirectory')]"
    },
    "user": {
      "type": "object",
      "value": "[reference(
        resourceId('Microsoft.Storage/storageAccounts/localUsers', parameters('storageAccountName'), parameters('userName'))
      )]"
    },

    "keys": {
      "type": "object",
      "value": "[listKeys(resourceId('Microsoft.Storage/storageAccounts/localUsers', parameters('storageAccountName'), parameters('userName')), '2019-06-01')]"
    }
  }
}

Once you have deployed the template, you can go to the portal to configure the user permissions:

Remember to keep the password; without it, you won’t be able to connect to the SFTP endpoint.

And now you can connect to the SFTP endpoint via PowerShell or any other preferred tool:
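
For example, with the standard OpenSSH sftp client (account and user names here are hypothetical; the SFTP user name is scoped as <storage-account>.<local-user>):

# Connect with the SSH key generated earlier
sftp -i sftpuser_key mystorageaccount.sftpuser@mystorageaccount.blob.core.windows.net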

And play with some of the files:

We can check the blob itself to review the information about the recent uploads:

As you have seen, you can now deploy SFTP on Azure Blob Storage without worrying about container solutions or other weird experiments.

Till next time!

Messing around with AVD and AADJoin

In a previous post, Messing around with WVD, AADDS and FSLogix – Albandrod’s Memory (albandrodsmemory.com), I talked about how AVD breaks some scenarios and how we could fix them.

On this occasion, I will talk about my experience working with the new version of AADJoin for AVD, which is finally in public preview. With this approach we can eliminate the need to have a domain controller or AADDS in place for your AVD deployment to work, but as you can imagine, it has some drawbacks.

The first important thing to be aware of when implementing this type of scenario is that, when you’re adding the VMs to the host pool (HP), it is necessary to select the following option:

It is also important to choose whether we want to join the VMs to Intune or not; in my case I selected yes, and a few moments after the VM creation I was able to see it in the endpoint portal:

After you have created the HP, my recommendation would be to configure it with the following advanced RDP properties:

use multimon:i:0, which basically determines whether the session should use true multiple-monitor support when connecting to the remote computer.

To access Azure AD-joined VMs using the web, Android, macOS, iOS, and Microsoft Store clients, you must add targetisaadjoined:i:1 to the HP. These connections are restricted to entering user name and password credentials when signing in to the session host.
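
If you prefer to set these properties from PowerShell instead of the portal, here is a hedged sketch with hypothetical names (Az.DesktopVirtualization module):

# Push both advanced RDP properties to the host pool in one call
Update-AzWvdHostPool -ResourceGroupName "rg-avd" -Name "hp-aadjoin" `
    -CustomRdpProperty "targetisaadjoined:i:1;use multimon:i:0"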

But what is more important for me, and it was driving me crazy at first, was the authentication in AAD-joined AVD:

The following configurations are currently supported with Azure AD-joined VMs:

  • Personal desktops with local user profiles.
  • Pooled desktops used as a jump box. In this configuration, users first access the Azure Virtual Desktop VM before connecting to a different PC on the network. Users should not save data on the VM.
  • Pooled desktops or apps where users don’t need to save data on the VM. For example, for applications that save data online or connect to a remote database.

So, don’t break your head trying to authenticate with your current user as in domain-joined WVD; you will need to use a local profile for the Azure AD-joined VMs, otherwise you will receive an error like the following, which will drive you nuts:

But after using the local user, you will be able to log in to the VM.

Once you log in to the VM, you can run dsregcmd to see the status:
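
Run it from a command prompt on the session host:

# The device state section should show "AzureAdJoined : YES"
dsregcmd /status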

And since the machine is enrolled in Intune, you can also check the information regarding the enterprise registration 🙂

For me, AVD AADJoin is a pseudo Windows 365, but with custom images and without paying the full license to access the resource itself. The other aspects of AVD with AADJoin are pretty much the same as domain-joined, so have fun with them!

Till next time!

Swap OS disk to storage account

Quick post to remember what actions have to be taken to swap your OS disk to a VHD disk in a storage account (yes, swapping from a managed disk (MD) to an unmanaged disk (UMD); I know I’m probably crazy, but for golden images it is great).

But imagine that you have a VM running on a managed OS disk and you need to swap that OS disk for an unmanaged one… how can you do that?

# Get the VM 
$vm = Get-AzVM -ResourceGroupName myResourceGroup -Name myVM 

# Make sure the VM is stopped/deallocated
Stop-AzVM -ResourceGroupName myResourceGroup -Name $vm.Name -Force

# Set the VM configuration to point to the new unmanaged disk (VHD)
Set-AzVMOSDisk -VM $vm -Name "osDisk.vhd" -VhdUri "https://mystorageaccount.blob.core.windows.net/disks/osdisk.vhd"

# Update the VM with the new OS disk
Update-AzVM -ResourceGroupName myResourceGroup -VM $vm 

# Start the VM
Start-AzVM -Name $vm.Name -ResourceGroupName myResourceGroup

That’s all! Your VM is now running with a VHD disk 🙂

Log Analytics Best Practices

Hi! You probably know that I am a fan of Log Analytics, so in this post I want to share my thoughts on best practices for designing and setting up Log Analytics across several deployments. Let’s roll!

  • Use as few workspaces as possible: at the beginning I was using several workspaces (one per subscription), but in practice it is more useful to have only one (the only reasons to keep separate workspaces would be cost and retention). And if you want to control cost, use the table-level retention feature!
  • For long-term retention, move data to a Storage Account 🙂
  • Use one workspace per region: depending on where you are working and on the applicable laws, it may be advisable to have different workspaces across regions (EMEA, APAC, US…)
  • Use Azure Policy to install the monitoring agents 🙂 it is very useful
  • Define proper RBAC: depending on which information you are ingesting into Log Analytics, it will be important that only certain people have access to certain data.
  • Set up alerting for events: yes, you are collecting a huge amount of data, but… are you creating alerts and monitoring rules for those important services?
  • Control the cost: it is easy to set up Log Analytics, and it is just as easy to ingest verbose data for all those services, so your main goal should be to tweak the sources and the amount of information that you’re ingesting into Log Analytics (see the sketch after this list)
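
As a starting point for that last bullet, a hedged sketch (the workspace GUID is a placeholder) that shows which data types drive billable ingestion:

# Query the Usage table for the last 30 days of billable data per data type
$query = @"
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize TotalGB = sum(Quantity) / 1024 by DataType
| sort by TotalGB desc
"@
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query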

And finally, the last piece of advice… keep an eye on the Log Analytics roadmap; staying up to date is my daily nightmare, so… be patient with this.

Till next time!

Forced Tunneling in Azure

I am not an expert in networking, but sometimes, while working in Azure, I have to face different configurations in order to fulfill customer requirements.

In this case, my customer wanted to redirect all the Internet traffic of the VMs in Azure to on-premises, because if you don’t configure forced tunneling, Internet-bound traffic from your VMs in Azure always traverses the Azure network infrastructure directly out to the Internet, without any option to inspect or audit the traffic.

And you know… nowadays, unauthorized Internet access can potentially lead to security breaches…

So, to do that, I was thinking of the way I used to do this kind of thing: create a route table, redirect the 0.0.0.0/0 traffic to an NVA, and done. But this case was not the same, because I needed to redirect all the traffic.

So, in this case, what is needed is forced tunneling:

[Diagram showing forced tunneling]

With that configuration, any connection from the mid-tier and back-end subnets is redirected back to on-premises via the S2S VPN, where the traffic can be inspected or even restricted.

The magic to achieve that scenario is PowerShell; there is no option to do it in the UI. You can check the full procedure in the following link: Configure forced tunneling for Site-to-Site connections – Azure VPN Gateway | Microsoft Docs

But pay special attention to the following parameter:

# The local network gateway that represents the on-premises default site
$LocalGateway = Get-AzLocalNetworkGateway -Name "DefaultSiteHQ" -ResourceGroupName "ForcedTunneling"
# The VPN gateway of the VNet whose traffic must be tunneled
$VirtualGateway = Get-AzVirtualNetworkGateway -Name "Gateway1" -ResourceGroupName "ForcedTunneling"
# Setting the default site is what forces 0.0.0.0/0 back on-premises
Set-AzVirtualNetworkGatewayDefaultSite -GatewayDefaultSite $LocalGateway -VirtualNetworkGateway $VirtualGateway
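
For context, a sketch of the companion routing step from the linked article (names and prefixes follow the docs sample, so treat them as placeholders): the frontend subnet can keep direct Internet access through a UDR while the other subnets use the default site.

# Route table that sends 0.0.0.0/0 straight to the Internet for the frontend
$route = New-AzRouteConfig -Name "ToInternet" -AddressPrefix "0.0.0.0/0" -NextHopType Internet
$rt = New-AzRouteTable -Name "FrontendRouteTable" -ResourceGroupName "ForcedTunneling" `
    -Location "northeurope" -Route $route

# Associate it with the frontend subnet
$vnet = Get-AzVirtualNetwork -Name "MultiTier-VNet" -ResourceGroupName "ForcedTunneling"
Set-AzVirtualNetworkSubnetConfig -Name "Frontend" -VirtualNetwork $vnet `
    -AddressPrefix "10.1.0.0/24" -RouteTable $rt
$vnet | Set-AzVirtualNetwork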

With that, you create the magic in networking; the other steps in the article are more or less the same as the ones I usually do in my implementations.

So take that into account, and happy routing!

How to “backup” resources in Azure

Have you ever wondered how to back up a resource in Azure in order to rebuild it in case it is accidentally deleted? Or imagine that you want to reconfigure it in another subscription or for another customer.

With PowerShell we can do that; it is pretty straightforward and powerful. In my case, what I want to back up is an Azure Firewall because, for example, I have configured it with multiple rules and I want to reuse them in other subscriptions.

To do that, what we have to do is get the firewall resource ID:

$AzFirewallId = (Get-AzFirewall -Name "MyFirewallName" -ResourceGroupName "MyRgName").id

Then configure a file to export the configuration:

$FileName = ".\MyResourceBackup.json"

and finally… export the JSON to the previous file:

Export-AzResourceGroup -ResourceGroupName "MyRgName" -Resource $AzFirewallId -SkipAllParameterization -Path $FileName

Note that I have used the “SkipAllParameterization” parameter; it allows us to recreate exactly what we have “backed up” in the JSON. If we want to change the names of the resources, we can omit that parameter. It is also important that the JSON contains all the configuration that we have done in the service, so we are not losing anything.

And now… how do we restore it in Azure? Pretty simple as well:

New-AzResourceGroupDeployment -name "RestoreJob" -ResourceGroupName "MyRgName" -TemplateFile ".\MyResourceBackup.json"

That’s all for one resource. And what if I want to “back up” all the resources contained in a resource group? You can do that as well! But in this case, you must change some of the parameters to export the file:

Export-AzResourceGroup -ResourceGroupName "MyRgName" -SkipAllParameterization -Path $FileName

Here you simply omit the resource ID, but you will end up with a laaaarge JSON containing all the parameters and configurations. Then you can restore the resources with the same method explained earlier.

Also, a good idea would be to configure an automation account to export all the configuration files and store them in a blob, in order to have a copy of all the resources in the subscription (a rough sketch follows).
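
A hedged sketch of what such a runbook body could look like (the storage account, container, and resource group names are hypothetical, and it assumes the automation account identity can read all resource groups):

# Export every resource group's template and push it to a blob container
$ctx = (Get-AzStorageAccount -ResourceGroupName "backup-rg" -Name "backupsa").Context
foreach ($rg in Get-AzResourceGroup) {
    $file = Join-Path $env:TEMP "$($rg.ResourceGroupName).json"
    Export-AzResourceGroup -ResourceGroupName $rg.ResourceGroupName `
        -SkipAllParameterization -Path $file -Force
    Set-AzStorageBlobContent -File $file -Container "rg-backups" `
        -Blob "$($rg.ResourceGroupName).json" -Context $ctx -Force
}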

Conditional Access extension for Chrome

If you’re implementing conditional access in your company and you’re struggling with Windows 10 devices and Chrome support, you will probably need to visit this Docs link: https://docs.microsoft.com/es-es/azure/active-directory/conditional-access/concept-conditional-access-conditions#chrome-support

But in this post, I want to talk about something related to it. In one of my projects, I have a CA policy that requires one of the following selected controls: Require MFA or Require Hybrid Azure AD-joined device.

My device was hybrid-joined, so I was fulfilling one of the requirements: when accessing with IE or Edge, the device info was passed properly and MFA was bypassed for hybrid AAD machines.

But with Chrome, even with the Windows 10 Accounts extension pushed via GPO, I could see in the Azure AD sign-in logs that the device info was blank except for Browser and OS, so the AAD join status was not passed and MFA triggered. It was very weird and it was causing me some problems…
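
As a side note, force-installing the extension is usually done with Chrome’s ExtensionInstallForcelist policy; a hedged sketch of the equivalent registry entry (the extension ID is the one the Microsoft docs list for the Windows 10 Accounts extension):

# Force-install the Windows 10 Accounts extension for all Chrome profiles
$key = "HKLM:\SOFTWARE\Policies\Google\Chrome\ExtensionInstallForcelist"
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name "1" -PropertyType String -Force `
    -Value "ppnbnpeolgkicgegkbkbjmhlideopiji;https://clients2.google.com/service/update2/crx"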

So finally, after hours of troubleshooting, I figured out what was wrong: when you install the extension automatically, it doesn’t clear some cookies, and Chrome then tries to use the old way of logging in. So what you need to do is go to chrome://settings/content/all and delete the cookies for login.microsoftonline.com.

After doing that, everything was working perfectly. Be aware of that!!