How to stop Azure Application Gateway and Azure Firewall

Hi folks, summer is here and my holidays are very near, so I’m wrapping everything up to close my laptop and relax for a few weeks.

But before my well-deserved rest, I need to give you a piece of FinOps advice:

If you're like me and often build demo setups in your Azure subscription that involve resources like Azure Firewall and Application Gateway, you have probably realized that there is no easy way to gracefully shut down those "hungry" resources to save some money.

To stop VMs, we can simply use the Azure Portal start/stop buttons, automation accounts, and so on, but the Azure Portal doesn't allow you to stop an Application Gateway or an Azure Firewall. In such cases, Azure PowerShell helps:

# Get Azure Application Gateway
$appgw = Get-AzApplicationGateway -Name "appgw_name" -ResourceGroupName "rg_name"
 
# Stop the Azure Application Gateway
Stop-AzApplicationGateway -ApplicationGateway $appgw
 
# Start the Azure Application Gateway
Start-AzApplicationGateway -ApplicationGateway $appgw

After executing the stop, we can see that the Operational State changes after a minute or so:
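
If you want to check that state from PowerShell instead of the portal, the gateway object exposes it directly; a minimal check, using the same example names as above:

# Check the current operational state of the Application Gateway (Running / Stopped)
(Get-AzApplicationGateway -Name "appgw_name" -ResourceGroupName "rg_name").OperationalState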

And for Azure Firewall we can use the following:

# Get the Azure Firewall and deallocate it to stop paying for it
$firewall = Get-AzFirewall -ResourceGroupName rgName -Name azFw
$firewall.Deallocate()
$firewall | Set-AzFirewall

# To start it again, allocate the firewall back to its virtual network and public IP
$vnet = Get-AzVirtualNetwork -ResourceGroupName rgName -Name anotherVNetName
$pip = Get-AzPublicIpAddress -ResourceGroupName rgName -Name publicIpName
$firewall.Allocate($vnet, $pip)
$firewall | Set-AzFirewall

Now you know how to save some money with those resources, and I can go on holiday and rest for a while.

Happy holidays!

Recommendations for deploying a Jump Host in Azure

You're probably asking yourself: what's a jump host? In simple words, it's a virtual machine that is separate from the one you use daily to read e-mail, browse the web and install software, and is used only to perform administrative tasks for one or multiple IT infrastructures.

These are some of the recommendations that I follow when I need to deploy a jump host in Azure. The first two are the most important; make sure you never do either of them:

  • Do NOT install any productivity tools such as Office; it's important to keep the VM as clean as possible. It is a jump host, not a daily working device.
  • Do NOT use this VM for general internet browsing purposes

And some other recommendations…

  • Isolate the VM with an NSG; only allow access where it is really needed (see the sketch after this list)
  • Install the AntiMalware extension from Azure and configure Windows Defender Settings
  • If possible, configure JIT on the VM
  • Onboard the device in Microsoft Defender for Endpoint (if Possible)
  • Apply the Microsoft Security baseline
  • Enable Windows Defender Network Protection and Exploit Guard
  • Enable virtualization-based security if you deployed a Gen 2 VM
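
For the NSG isolation point, here is a minimal sketch. The resource names, the admin source IP and the rule priorities are just examples to adapt to your environment; the idea is to allow RDP only from a known admin workstation and deny the rest of inbound traffic, then associate the NSG to the jump host subnet or NIC:

# Allow RDP only from a known admin IP
$allowRdp = New-AzNetworkSecurityRuleConfig -Name "Allow-RDP-From-Admin" -Description "RDP only from the admin workstation" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
    -SourceAddressPrefix "203.0.113.10/32" -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 3389

# Deny any other inbound traffic
$denyAll = New-AzNetworkSecurityRuleConfig -Name "Deny-All-Inbound" -Description "Deny any other inbound traffic" `
    -Access Deny -Protocol * -Direction Inbound -Priority 4096 `
    -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange *

# Create the NSG; associate it to the jump host subnet or NIC afterwards
New-AzNetworkSecurityGroup -Name "nsg-jumphost" -ResourceGroupName "rg_name" -Location "westeurope" -SecurityRules $allowRdp, $denyAll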

That’s all, as always, these are my recommendations, probably you have different ones

My password recommendations from the trenches

The following are recommendations and thoughts extracted from working with several customers. Maybe you will find them obvious, but for other people they could be useful. So, let's begin:

In the identity plane, we could say that there are two categories:

  • Resist Common attacks
  • Contain successful attacks

I don't want to go into how to resist or contain attacks, because I have probably covered some of those topics in other blog entries, but for me there is another category: understanding human nature.

It is nothing more than understanding that almost every rule we impose on end users results in a degradation of security. Why? Because we force users to use long passwords with special characters, and in the end users tend to reuse passwords, which makes it easier for malicious actors to guess or crack them.

So, in this post I will summarize some of my experiences as antipatterns and recommendations:

  • Antipattern – Requiring long passwords: excessive length requirements (more than 10 characters) can result in predictable behaviour; users tend to choose repeating patterns (heyholetsgoheyholetsgo) that meet the length requirement but are clearly not hard to guess. Requirements like this don't really make passwords harder to guess; they just lead to poor user behaviour.
    • SuperPRO Tip: You can use a long password, but in that case what I recommend is something that engineers at Microsoft do: set a very loooooooong password, forget it, and use passwordless mechanisms such as Windows Hello to sign in instead.

My tip: use a minimum length requirement of 8 characters, but ban common passwords with Azure AD Password Protection.

  • Antipattern – Requiring multiple character sets: you may not be on the same line as me, but I've seen that this rule does more harm than good. People use predictable substitutions such as $ for s, @ for a, 1 for l. So keep it in mind.
  • Antipattern – Password expiration: expiration policies drive users to very predictable passwords (for example, the next password can often be predicted from the previous one); end users do not tend to choose a new password, they tend to update the old one.

My tip for the two previous points: Azure AD Password Protection + Conditional Access based on User Identity

  • Recommendation – Ban common passwords: for me, the most important restriction is to ban the use of common passwords to reduce the possibility of brute-force or password spray attacks.

Tip: Look at my first tip 😊

  • Recommendation – Educate end users not to use organization credentials anywhere else: yes, I know that educating users is difficult, but you have to do it, because they tend to reuse the same password across multiple sites, and it is common practice for cyber criminals to try compromised credentials across many sites.
  • Recommendation – Enforce MFA registration and enable MFA: ensure that users keep their security information up to date so they can respond to security challenges when needed. By doing this, I have seen that end users become more engaged with digital security.

Enabling MFA prevents up to 99.9% of identity attacks, and if we add other controls such as user location, even better.

PRO TIP: Use Conditional access with FIDO2 security key (PassWordless Authentication with Fido 2 Keys – Albandrod’s Memory (albandrodsmemory.com))

EndUser TIP: Consider turning on two-step verification everywhere you can

  • Recommendation – Enable risk-based authentication: when the system detects suspicious activity, it challenges the user to ensure that they are the legitimate account owner. Personally, I think this feature is great; the only drawback is that it is only included with Azure AD P2.

You will probably have different ones based on your experience, but these are my recommendations. Till next time and stay safe!

First Impressions about Azure sFTP

SSH File Transfer Protocol is a very common protocol used by many customers for secure file transfer over a secure shell. Microsoft did not have a fully managed SFTP service in Azure, but now it is possible with Azure Blob Storage.

So, you will be able to use an SFTP client to connect to that storage account and manage the objects inside and even specify permissions for each user.

But before beginning, you will need to register the SFTP feature in your subscription. To do that, run the following:

# Set the Azure context for the desired subscription
az account set --subscription "xxxx-xxxx-xxxx-xxxx"

# Check if the SFTP feature is registered first
az feature show --namespace Microsoft.Storage --name AllowSFTP

# Register the SFTP feature on your subscription
az feature register --namespace Microsoft.Storage --name AllowSFTP

Also, you can check that information in the Preview features option in the Azure Portal:

Once you have that, you will need to enable the hierarchical namespace on the storage account; note that you can't enable it on an existing storage account…

BEFORE

AFTER

At the time of writing, I couldn't create the SFTP service through the Azure Portal or even with a template when I selected West Europe as the destination; I'm sure that will be supported in the future.

Now we can deploy the ARM template to the resource group previously created in Azure, but before doing that you have to decide whether your user will connect with a password or an SSH key. In my case, I decided to implement it with an ARM template using an SSH key, so first you need to generate an SSH key pair:
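
As a reference, this is a minimal way to do it with the OpenSSH client from a PowerShell prompt (the key file name is just an example); the content of the .pub file is what you will pass as the publicKey parameter of the SSH template below:

# Generate an RSA key pair; the .pub file is the public key for the ARM template
ssh-keygen -t rsa -b 4096 -f sftpuser_key

# Show the public key so you can copy it into the publicKey parameter
cat sftpuser_key.pub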

Next, I provide the two ARM templates for both types of implementation.

Template FOR PASSWORD Implementation:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountType": {
      "type": "string",
      "defaultValue": "Standard_LRS",
      "allowedValues": ["Standard_LRS", "Standard_ZRS"],
      "metadata": { "description": "Storage Account type" }
    },
    "location": {
      "type": "string",
      "defaultValue": "northeurope",
      "allowedValues": ["westeurope", "northcentralus", "eastus2", "eastus2euap", "centralus", "canadaeast", "canadacentral", "northeurope", "australiaeast", "switzerlandnorth", "germanywestcentral", "eastasia", "francecentral"],
      "metadata": { "description": "Region" }
    },
    "storageAccountName": {
      "type": "string",
      "metadata": { "description": "Storage Account Name" }
    },
    "userName": {
      "type": "string",
      "metadata": { "description": "Username of primary user" }
    },
    "homeDirectory": {
      "type": "string",
      "metadata": { "description": "Home directory of primary user. Should be a container." }
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-02-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[parameters('location')]",
      "sku": {
        "name": "[parameters('storageAccountType')]"
      },
      "kind": "StorageV2",
      "properties": {
          "isHnsEnabled": true,
          "isSftpEnabled": true
      },
      "resources": [
        {
          "type": "blobServices/containers",
          "apiVersion": "2021-02-01",
          "name": "[concat('default/', parameters('homeDirectory'))]",
          "dependsOn": ["[parameters('storageAccountName')]"],
          "properties": {
            "publicAccess": "None"
          }
        },
        {
          "type": "localUsers",
          "apiVersion": "2021-02-01",
          "name": "[parameters('userName')]",
          "properties": {
            "permissionScopes": [
                {
                  "permissions": "rcwdl",
                  "service": "blob",
                  "resourceName": "[parameters('homeDirectory')]"
                }
            ],
            "homeDirectory": "[parameters('homeDirectory')]",
            "hasSharedKey": false
          },
          "dependsOn": ["[parameters('storageAccountName')]"]
        }
      ]
    }
  ],
  "outputs": {
    "defaultContainer": {
      "type": "string",
      "value": "[parameters('homeDirectory')]"
    },
    "user": {
      "type": "object",
      "value": "[reference(
        resourceId('Microsoft.Storage/storageAccounts/localUsers', parameters('storageAccountName'), parameters('userName'))
      )]"
    }
  }
}

Template for SSH Implementation

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountType": {
      "type": "string",
      "defaultValue": "Standard_LRS",
      "allowedValues": ["Standard_LRS", "Standard_ZRS"],
      "metadata": { "description": "Storage Account type" }
    },
    "location": {
      "type": "string",
      "defaultValue": "northeurope",
      "allowedValues": ["westeurope", "northcentralus", "eastus2", "eastus2euap", "centralus", "canadaeast", "canadacentral", "northeurope", "australiaeast", "switzerlandnorth", "germanywestcentral", "eastasia", "francecentral"],
      "metadata": { "description": "Region" }
    },
    "storageAccountName": {
      "type": "string",
      "metadata": { "description": "Storage Account Name" }
    },
    "userName": {
      "type": "string",
      "metadata": { "description": "Username of primary user" }
    },
    "homeDirectory": {
      "type": "string",
      "metadata": { "description": "Home directory of primary user. Should be a container." }
    },
    "publicKey": {
      "type": "string",
      "metadata": { "description": "SSH Public Key for primary user." }
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2019-06-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[parameters('location')]",
      "sku": {
        "name": "[parameters('storageAccountType')]"
      },
      "kind": "StorageV2",
      "properties": {
          "isHnsEnabled": true,
          "isLocalUserEnabled": true,
          "isSftpEnabled": true
      },
      "resources": [
        {
          "type": "blobServices/containers",
          "apiVersion": "2019-06-01",
          "name": "[concat('default/', parameters('homeDirectory'))]",
          "dependsOn": ["[parameters('storageAccountName')]"],
          "properties": {
            "publicAccess": "None"
          }
        },
        {
          "type": "localUsers",
          "apiVersion": "2019-06-01",
          "name": "[parameters('userName')]",
          "properties": {
            "permissionScopes": [
                {
                  "permissions": "rcwdl",
                  "service": "blob",
                  "resourceName": "[parameters('homeDirectory')]"
                }
            ],
            "homeDirectory": "[parameters('homeDirectory')]",
            "sshAuthorizedKeys": [
              {
                "description": "localuser public key",
                "key": "[parameters('publicKey')]"
              }
            ],
            "hasSharedKey": false
          },
          "dependsOn": ["[parameters('storageAccountName')]"]
        }
      ]
    }
  ],
  "outputs": {
    "defaultContainer": {
      "type": "string",
      "value": "[parameters('homeDirectory')]"
    },
    "user": {
      "type": "object",
      "value": "[reference(
        resourceId('Microsoft.Storage/storageAccounts/localUsers', parameters('storageAccountName'), parameters('userName'))
      )]"
    },

    "keys": {
      "type": "object",
      "value": "[listKeys(resourceId('Microsoft.Storage/storageAccounts/localUsers', parameters('storageAccountName'), parameters('userName')), '2019-06-01')]"
    }
  }
}
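
To deploy either template, something like the following should do. This is a sketch: it assumes the SSH template is saved as sftp-ssh.json, the resource group already exists, and the parameter values are just examples.

# Deploy the SSH-key template to an existing resource group
New-AzResourceGroupDeployment -ResourceGroupName "rg_name" `
    -TemplateFile ".\sftp-ssh.json" `
    -storageAccountName "mysftpstorage" `
    -userName "sftpuser" `
    -homeDirectory "uploads" `
    -publicKey (Get-Content ".\sftpuser_key.pub" -Raw)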

Once you have deployed the template, you can go to the portal to configure the user permission:

Remember to keep the password; without it you won't be able to connect to the SFTP endpoint.

And now you can connect to the SFTP endpoint via PowerShell or your other preferred tool.
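
For example, with the OpenSSH sftp client: the connection username follows the storageaccount.localuser pattern, and the names below are the sample values used in the template above, so adjust them to yours.

# Connect with the local user and the private key generated earlier
sftp -i .\sftpuser_key mysftpstorage.sftpuser@mysftpstorage.blob.core.windows.net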

And play with some of the files:

We can check the blob itself to review the information about the recent uploads:

As you have seen, now you’re able to deploy your SFTP for Azure Blob Storage without worrying about Container Solutions or other weird experiments.

Till next time!

PassWordless Authentication with Fido 2 Keys

This is something I have wanted to test for some time, and now, thanks to Feitian, I was able to do it. So let's dig into what passwordless with FIDO2 keys is, how we can configure it in Azure AD, and what advantages it provides to the end user. Let's begin!

But before digging in deeper, let me explain the basics: a security key is a piece of hardware that you can connect to your computer or phone to verify your credentials when logging in. Unlike a password, it is much safer, because the credential is different for each system.

So, what do FIDO2 keys do? As you probably know, logging into a resource requires a username and password, and with MFA it usually requires the username/password combination plus one other authentication factor, like a time-based one-time password. FIDO2 is a standards-based method of user authentication that is passwordless, supporting PIN and biometrics in security tokens.

For starters, with FIDO you can:

  • Improve security with crypto-secured passwordless authentication
  • Remove the helpdesk costs associated with forgotten passwords by replacing them with a simple PIN or fingerprint
  • Remove the user-experience annoyances of long passwords to create, remember and reset so that your workforce can get on with their role simply and seamlessly.

What about the preparation of AzureAD?

For IT, at a high level there are only two tasks to accomplish:

  • Enable the new authentication method registration on AzureAD
  • Enable FIDO2 as an authentication method

Easy, isn’t it?

What about the registration for end users?

In my case, as the security key is a biometric one, what I needed to do first was register my fingerprint. Once I did this (the manufacturer provides the details), I was ready to go with the next steps.

In order to register the security token with Azure AD, the user will need to access https://aka.ms/setupsecurityinfo, where they will be able to see all the authentication methods available to them:

And once the user has selected the security key option, the registration process begins. In my case, I selected USB device and then… I needed to provide a PIN for the security key:

One thing to keep in mind is that users have to set up their own PIN for their key; there is no centralized way to enforce or manage PINs, so it is likely that some of your users will end up using PINs like 123456.

Once you have registered the key, it will appear in the Security info panel:

OK, everything you're explaining is great, but how is it used?

With the following video, I want to show how the process of passwordless authentication in AzureAD is done:

As you can see, the login was done without entering any username or password. If you're convinced and you want to start deploying FIDO2 keys in your organization, think first about the following points:

Registration

  • Control to ensure that the employee has been through sufficient identity checks to create a trusted identity.

Issuance

The organisation needs policy control over:

  • The type of FIDO device used (external USB / Bluetooth)
  • The organisation needs to consider the type of user verification required (Fingerprint / NFC)
  • The end user needs a simple experience during registration of a FIDO credential
  • The organization needs to trust the genuineness of the FIDO device being used for the FIDO credential

Lifecycle Management

  • Visibility of who has been assigned which FIDO credentials
  • Ability to simply revoke access to all systems accessed by the FIDO Credential
  • Ability to manage lost devices / replacement devices / back up devices

Authentication

  • The end user needs a simple experience to authenticate to systems, usernameless aids this process.

As you can see, FIDO2 keys are great, and what is better, they not only work with Azure AD, they can also be used to authenticate with other services like Twitter, Instagram, etc…

Link References:

Register your key at https://aka.ms/mysecurityinfo

If you are a Microsoft 365 admin, use an interactive guide at https://aka.ms/passwordlesswizard

Messing around with AVD and AADJoin

In a previous post: Messing around with WVD, AADDS and FSLogix – Albandrod’s Memory (albandrodsmemory.com) I was talking about how AVD breaks some scenarios and how we could fix them.

On this occasion I will talk about my experience working with the new AAD Join option for AVD, which is finally in public preview. With this approach we can eliminate the need for a domain controller or AADDS for the AVD deployment to work, but as you can imagine it has some drawbacks.

The first important thing to be aware of when implementing this type of scenario is that when you're adding the VMs to the host pool, it is necessary to select the following option:

It is also important to decide whether we want to enroll the VMs in Intune or not; in my case I selected yes, and a few moments after the VM creation I was able to see it in the endpoint portal:

After you have created the host pool, my recommendation would be to configure it with the following advanced RDP properties:

use multimon:i:0, which basically determines whether the session should use true multiple-monitor support when connecting to the remote computer.

To access Azure AD-joined VMs using the web, Android, macOS, iOS, and Microsoft Store clients, you must add targetisaadjoined:i:1 to the host pool. These connections are restricted to entering username and password credentials when signing in to the session host. A sketch for setting both properties via PowerShell follows below.
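
If you prefer to set these custom RDP properties from PowerShell instead of the portal, a minimal sketch with the Az.DesktopVirtualization module could look like this (the resource group and host pool names are examples):

# Set the custom RDP properties on the host pool (properties are separated by semicolons)
Update-AzWvdHostPool -ResourceGroupName "rg-avd" -Name "hp-aadjoin" `
    -CustomRdpProperty "targetisaadjoined:i:1;use multimon:i:0"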

But what is more important for me, and what was driving me crazy at first, was the authentication in AAD-joined AVD:

The following configurations are currently supported with Azure AD-joined VMs:

  • Personal desktops with local user profiles.
  • Pooled desktops used as a jump box. In this configuration, users first access the Azure Virtual Desktop VM before connecting to a different PC on the network. Users should not save data on the VM.
  • Pooled desktops or apps where users don’t need to save data on the VM. For example, for applications that save data online or connect to a remote database.

So don't rack your brains trying to authenticate with your usual user as in domain-joined WVD; you will need to use a local profile for Azure AD-joined VMs, otherwise you will receive an error like the following, which will drive you nuts:

But after using the local user you will be able to log in to the VM.

Once you log in to the VM, you can run dsregcmd to see the status:
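
For reference, from a command prompt inside the session host:

# Show the Azure AD join state of the device (look for AzureAdJoined : YES)
dsregcmd /status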

And since the machine is enrolled in Intune, you can also check the information regarding the enterprise registration 🙂

For me, AVD with AAD Join is a pseudo Windows 365, but with custom images and without paying the full license to access the resource itself. The rest of AVD with AAD Join is pretty much the same as domain-joined, so have fun with it.

Till next time!

Messing around with WVD, AADDS and FSLogix

In a project where WVD was involved, we needed to add AADDS and FSLogix to the scenario. If you take a look at that scenario, it is pretty simple, but it hides some stones that we hit along the road, so I want to explain them in this post 😊

First of all, once you have deployed AADDS, remember to check the DNS settings in the VNet; it is necessary to set the DNS servers to the AADDS ones, otherwise it won't be possible to join VMs to the AADDS domain:
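
A minimal sketch of setting those DNS servers with PowerShell; the VNet name and the IP addresses are examples (use the ones shown on your AADDS overview blade), and it assumes the VNet object already exposes a DhcpOptions block, which is the usual case:

# Point the VNet DNS servers at the AADDS domain controllers
$vnet = Get-AzVirtualNetwork -ResourceGroupName "rg-wvd" -Name "vnet-wvd"
$vnet.DhcpOptions.DnsServers = @("10.0.0.4", "10.0.0.5")
$vnet | Set-AzVirtualNetwork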

Once the AADDS instance was deployed, it was the golden image's turn. As you probably know, there is no problem installing all the programs and updates, but our stone here was that once we deployed the language pack and the image was prepared, sysprep kept crashing, so we needed to dive deep into the logs to solve the problem…

So the deployment began to be fun, but after some digging we were able to solve it by executing…

Remove-AppxPackage -Package Microsoft.LanguageExperiencePackes-ES_19041.17.51.0_neutral__8wekyb3d8bbwe -AllUsers

And then… boom!

You will probably need to change the package name in your case, but it is important to include the -AllUsers parameter.

With the golden image problem solved, it was the host pool deployment's turn, which was straightforward. Our next stone was the storage account… ☹

Joining the storage account to AADDS was easy, but the problem was giving NTFS permissions to the users. We were used to doing that in ADDS scenarios, so we knew what to do, but with AADDS the procedure changes a bit…

So my piece of advice would be to follow the instructions given in the docs: Use Azure AD Domain Services to authorize access to file data over SMB | Microsoft Docs

We were using AAD credentials and were stuck for a while until we read this in the documentation. Lesson learned: reading the documentation helps.

Once you have mounted the storage account with your storage account key, you are able to give NTFS permissions to the users (please follow the instructions from the docs xD).

Once we solved this, we were in a position to configure FSLogix for profile mobility. For those who do not know FSLogix, it allows storing both user profiles and applications on a centralized file share. This is extremely useful in virtual desktop environments, as the user's profile does not have to be copied before logon; FSLogix mounts the profiles hosted on the file share and makes them appear local.

But again, once we had configured the entries in the VM registry:
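
For reference, the basic FSLogix profile settings are registry values under HKLM:\SOFTWARE\FSLogix\Profiles (the key is created by the FSLogix agent installer). A minimal sketch, where the file share path is just an example:

# Enable FSLogix profile containers and point them at the Azure Files share
New-ItemProperty -Path "HKLM:\SOFTWARE\FSLogix\Profiles" -Name "Enabled" -Value 1 -PropertyType DWord -Force
New-ItemProperty -Path "HKLM:\SOFTWARE\FSLogix\Profiles" -Name "VHDLocations" -Value @("\\mystorageaccount.file.core.windows.net\profiles") -PropertyType MultiString -Force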

We hit another stone… we were logging into the WVD remote desktop but no profile was being created on the storage account. After digging and asking ourselves why, we went to the FSLogix logs located at %ProgramData%\FSLogix\Logs. We checked the profile log and found the following:

Configuration setting not found: SOFTWARE\FSLogix\Profiles\AttachVHDSDDL. Using default:
[17:33:52.257][tid:00000c4c.00000e74][INFO] Session configuration wrote (REG_SZ): SOFTWARE\FSLogix\Profiles\Sessions\S-1-5-21-1901185187-4119977032-3365905087-1004\AttachVHDSDDL = ‘D:AI(A;;GA;;;SY)(A;;GA;;;BA)(A;;GA;;;BU)(A;;GA;;;WD)(A;;GA;;;RC)(A;;GA;;;AC)S:(ML;;NW;;;LW)’
[17:33:52.273][tid:00000c4c.00000e74][INFO] Status set to 0: Success
[17:33:52.273][tid:00000c4c.00000e74][INFO] Reason set to 3: A local profile for this user exists on this system
[17:33:52.273][tid:00000c4c.00000e74][WARN: 00000003] Local profile already exists. Do nothing. (El sistema no puede encontrar la ruta especificada.)

You are probably asking yourself what kind of error that is. It is simple: an existing local profile prevents the network profile from being created, so what we had to do was remove the local profile. You can do that by going into the advanced system settings and deleting the profile.

We did that, and we tried again and booooooom! The profile was created in the storage account:

After doing that, we were in a position to do all the tests in WVD and then follow all the steps to create an enterprise environment (optimization, monitoring, a "true" golden image, hiding the power button, etc.).

Till next time!

Swap OS disk to storage account

Quick post to remember what actions have to be taken to swap your OS disk for a VHD disk in a storage account (yes, swapping from a managed disk to an unmanaged disk; I know, I'm probably crazy, but for golden images it is great).

So imagine that you have a VM running on a managed disk and you need to swap that OS disk for an unmanaged one… how can you do that?

# Get the VM 
$vm = Get-AzVM -ResourceGroupName myResourceGroup -Name myVM 

# Make sure the VM is stopped\deallocated
Stop-AzVM -ResourceGroupName myResourceGroup -Name $vm.Name -Force

# Set the VM configuration to point to the new unmanaged OS disk (VHD)
Set-AzVMOSDisk -VM $vm -Name "osDisk.vhd" -VhdUri "https://mystorageaccount.blob.core.windows.net/disks/osdisk.vhd"

# Update the VM with the new OS disk
Update-AzVM -ResourceGroupName myResourceGroup -VM $vm 

# Start the VM
Start-AzVM -Name $vm.Name -ResourceGroupName myResourceGroup

That's all! Your VM is now running on a VHD disk 🙂

Log Analytics Best Practices

Hi! You probably know that I am a fan of Log Analytics, so with this post I want to share my thoughts about best practices for designing and setting up Log Analytics across several deployments. Let's roll!

  • Use as few workspaces as possible: at the beginning I was using several workspaces (one per subscription), but in practice it is more useful to have only one (the only reasons for separate workspaces would be cost and retention), and if you want to control cost, use the table-level retention feature!
  • For long-term retention, move data to a storage account 🙂
  • Use one workspace per region: depending on where you are working and the applicable laws, it may be advisable to have different workspaces across regions (EMEA, APAC, US…)
  • Use Azure Policy to install the monitoring agents 🙂 it is very useful
  • Define proper RBAC: depending on which information you are ingesting into Log Analytics, it will be important that only certain people have access to certain data.
  • Set up alerting for events: yes, you are collecting a huge amount of data, but… are you creating alerts and monitoring rules for the important services?
  • Control the cost: it is easy to set up Log Analytics, but it is also easy to send verbose data for every service, so your main goal should be to tweak the data sources and the amount of information you're ingesting into Log Analytics (see the query sketch after this list)
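
To see where the ingestion cost is going, one option is to query the Usage table for volume per data type. A minimal sketch with PowerShell, where the workspace ID is a placeholder for your own:

# Summarize ingested volume (GB) per data type over the last 30 days
$query = "Usage | where TimeGenerated > ago(30d) | summarize IngestedGB = sum(Quantity) / 1024 by DataType | order by IngestedGB desc"
Invoke-AzOperationalInsightsQuery -WorkspaceId "xxxx-xxxx-xxxx-xxxx" -Query $query | Select-Object -ExpandProperty Results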

And finally, the last piece of advice… keep an eye on the Log Analytics roadmap; staying up to date is my daily nightmare, so… be patient with this.

Till next time!