Deciphering the Azure Savings Plan

Current scenario

Until now in Azure, when we talked about saving costs on compute services, we had two alternatives (leaving software discounts aside):

  • Azure Reservations: They help us save money by reserving a particular VM size for 1 or 3 years. The cost savings can be up to 72% (the manufacturer's official figure; in my experience it has not gone past 40%) compared to Azure pay-as-you-go (PAYG) prices. Reserving an instance does not affect the state of our resources: the reservation is made against a specific size, and the discount is applied automatically to the resources that match it.
  • Spot Virtual Machines: This type allows us to run compute at a lower cost than normal, but with one condition: there is no SLA. When Azure needs compute capacity, Spot machines are the first to be evicted, leaving us without compute capacity for those sizes.

But during Ignite ’22, a third avenue for compute resources was announced: Azure Savings Plans.

And what does it provide me?

It allows us to save compute costs based on a fixed price per hour. A savings plan can save up to 65% compared to the Azure PAYG price (manufacturer figures; in my experience it has not exceeded 30%), always depending on the term we choose (1 or 3 years).

And what is the main difference?

Basically, in the way the resource is reserved. With an instance reservation we reserve, for example, a D4v4 size in West Europe for one year. With an Azure Savings Plan, we instead commit to a fixed hourly spend for a certain term (1 or 3 years, with no possibility of cancellation), so that any compute resource within the scope we have chosen can draw on that commitment, saving us compute costs on those resources.

How does it work?

Basically, we have to specify the fixed amount of money we want to spend per hour of compute, and automatically, all the resources contained within the scope chosen at creation benefit from the discounted rate.

Therefore, it is extremely important to keep in mind that this type of solution does not fit everyone, since not all of us have a volume of compute resources large enough to represent a fixed cost for our organization.

Likewise, we must specify how long we want to keep this commitment (1 or 3 years) and the form of payment (monthly or annual).

When creating an ASP, the portal will offer us different alternatives depending on the compute consumption we have, from the most conservative to the most aggressive strategy (although we can also manually configure how much we are willing to commit per hour).

Once we have created the ASP, the party begins: how do I know the ASP is being applied to my resources? The answer is simple: you must trust 😛

A sample of how it works is the following image:

If you look, the green line represents the amount of money I am paying at a fixed rate every hour (remember that this is 24×7, so €5/h ends up being approximately €3,600 per month), whether or not I use compute resources.

This last sentence is very important: whether or not you use resources. What does this mean? If I use 100% of my compute resources and the hourly price is below those €5, I still pay €5/h no matter what. On the other hand, if my hourly compute consumption is above those €5 (say €6), I pay €5 at the fixed rate (which already contains a certain discount), and the remaining €1 at the PAYG price (which, remember, depends on the contract I have).

So here, we enter different price scales:

  1. Scenario 1: In a certain time slot, I go below my committed price → I pay my fixed price per hour
  2. Scenario 2: In a certain time slot, my compute consumption is exactly what I have set in the ASP → I pay my fixed price per hour
  3. Scenario 3: In a certain time slot, my compute consumption is greater than the ASP created → I pay my fixed price per hour + the PAYG price for the part not covered by the ASP

This is important to understand, because savings are automatically applied every hour, regardless of region, instance series, or OS.
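
To make the three scenarios concrete, here is a minimal sketch of the billing logic. This is a purely illustrative model: the function name, the €5 commitment, and the 30% discount are invented, and real rates come from your price sheet.

# Purely illustrative model of how an hour splits between ASP commitment and PAYG
function Get-HourlyCharge {
    param (
        [double]$CommitmentPerHour,  # the fixed ASP commitment, e.g. 5 (EUR/h)
        [double]$UsagePaygPerHour,   # what this hour's usage would cost at PAYG rates
        [double]$AspDiscount = 0.30  # invented discount the ASP applies to covered usage
    )
    # PAYG value that the discounted commitment is able to absorb
    $coveredPayg = $CommitmentPerHour / (1 - $AspDiscount)
    if ($UsagePaygPerHour -le $coveredPayg) {
        # Scenarios 1 and 2: everything fits under the commitment, which we pay in full anyway
        return $CommitmentPerHour
    }
    # Scenario 3: the part not covered by the ASP is billed at plain PAYG price
    return $CommitmentPerHour + ($UsagePaygPerHour - $coveredPayg)
}

Get-HourlyCharge -CommitmentPerHour 5 -UsagePaygPerHour 3    # scenario 1 -> pays 5
Get-HourlyCharge -CommitmentPerHour 5 -UsagePaygPerHour 10   # scenario 3 -> 5 + overflow at PAYG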

What resources are contained in this type of solution?

As I write this article, the following Azure resources are in scope:

  1. Azure VMs (excludes A, G, and GS series)
  2. Container instances
  3. Azure Functions with Premium plan
  4. Azure App Service with Premium v3 or Isolated v2 Plan
  5. Azure Dedicated Hosts

This does not mean that other resources won’t be included in the future, but for now I do not have more information.

And can I combine it with instance reservations?

Yes, without problem; in fact it is the most suitable formula to save costs. In this case, instance reservations are always applied first, and everything not covered by an instance reservation is eligible to be covered by an ASP:

As we can see, everything not covered by an instance reservation or an ASP is paid at the normal compute price we have established in Azure (here it will depend on the type of contract we have with Microsoft: EA / CSP / PAYG).

And this ASP thing appears in Azure Advisor?

Yes, it should already appear in Advisor as a saving measure for the compute services in Azure, alongside instance reservations.

We can even see that this option already appears in the Azure calculator:

Once I have made a commitment with ASP, do I have the possibility to cancel and/or change it?

No, it is not possible to cancel an ASP commitment or change it for another. We will have to live with what we configured for the 1-3 years, and if we fall short, we will have to create a new ASP to cover the new demand (with the additional term that this new ASP entails).

What you can do is trade in an Azure instance reservation for an Azure Savings Plan (Self-service trade-in for Azure savings plans – Microsoft Cost Management | Microsoft Learn), but not the other way around (ASP to RI).

Any recommendations for creating an ASP?

My personal recommendation is to always go for the more conservative configuration, mainly because of the impossibility of cancelling this type of commitment; starting small gives us the opportunity to “play” with other configurations later.

Summarizing

Reservations only apply to specifically identified compute resources in a specific region.

An Azure Savings Plan applies to all compute resources contained within its scope, so it provides greater flexibility and automatic optimization compared to reservations.

When to choose one or the other?

  1. For compute resources with dynamic loads: Azure Savings Plans
  2. For resources that are stable over time, run continuously, and that you don’t plan to resize: Azure Reservations

There is no one-size-fits-all formula, but the FinOps perspective is like this 😊.

Additional information about Azure Savings Plans at: What is Azure savings plans for compute? – Microsoft Cost Management | Microsoft Learn


Best Practices about how to cut costs in Azure

Introduction

There is no one-size-fits-all when it comes to Azure and cost optimization, but the focus of this post is to share some tips & tricks from my daily life as a Cloud Solutions Architect.

Some general tasks can be done monthly or quarterly to make sure your Azure environment is up to date, keeping in mind that optimization and keeping your business running are the most important things here.

Be advised that not everything that can be done in Azure is covered in this post, simply because at the time of writing I haven’t had to deal with it.

Why this post?

Every design in Azure has cost implications. Before architecting something, we must consider the budget that we will need for the project itself, taking into consideration things like:

  • Identify the different boundaries for scaling up
  • Redundancy
  • BCP taking into consideration the cost of the solution
  • Design and set up scalable architectures, focusing on metrics & performance
  • Start small and scale out as soon as the required performance needs it (I really love that one)
  • Choose PaaS and SaaS over IaaS, pay only for what you use as a consumer​
  • Always monitor, audit & optimize the related costs

Ok I get it, but what are we going to cover?

For the next minutes I will explain some guidelines about cost optimization, in particular for the following topics:

  • Use of ARI
  • Use of Dev subscriptions
  • Optimal use of Azure App Services
  • Optimal use of Auto-Scale in App Services
  • Azure Data Factory failed pipelines
  • PaaS SQL optimization
  • Cosmos DB
  • VM right sizing
  • Azure Hybrid Benefit
  • Blob Storage lifecycle
  • Networking
  • Clean orphan resources
  • RIs
  • Use of Log Analytics
  • Use of Azure Advisor
  • Cost Management Preview (ACO Insights)
  • Azure Governance Dashboard
  • Closing

Before starting…

Before starting this post, I would recommend creating an Azure inventory of your environment; with this tool, it is pretty simple: https://github.com/microsoft/ARI
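
As a hedged sketch of running it (the entry point has changed over time, so check the repo’s README for the current syntax; the tenant ID is a placeholder):

# Install the Azure Resource Inventory module and run it against your tenant
Install-Module -Name AzureResourceInventory
Invoke-ARI -TenantID "xxxx-xxxx-xxxx-xxxx"   # generates an Excel report, one sheet per resource type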

And as you can observe, it gives you a great overview of what type of resources you have, their usage, locations, etc. Some of the sheets can also be used to optimize your Azure cost environment.

Another tip: you can start your cost optimization journey with a self-service assessment, which can give you some guidelines about where you stand: https://docs.microsoft.com/en-us/assessments/

Use of Dev Subscriptions

Using the top-to-bottom approach, the first thing to pay attention to is Azure Dev/Test subscriptions, which are applicable to both Enterprise and pay-as-you-go offers. By placing your dev resources in those subscriptions, you get lower prices for the most common Azure services, at the cost of excluding them from the regular vendor SLA commitments.

Optimal use of Azure App Services
First, check that your Standard and Premium plans have an associated application.

I have seen a lot of empty App Service Plans, which lead to unnecessary cost for the customer; remember that having proper governance in your subscriptions is also a cost measure.

Another thing I tend to do is check the metrics for the plan to see whether it is being used properly (scale down if needed, but keep in mind the features required in each case).

Optimal use of Auto-Scale in App Services​

In my case, I am able to scale down my resources, but first check the features between Standard and Premium plans (or even between the different Premium tiers!). You can also scale up/down based on a schedule, as the sketch below shows: https://techcommunity.microsoft.com/t5/apps-on-azure-blog/azure-app-service-automatic-scaling/ba-p/2983300

Very useful for those workloads that we know are only needed during certain periods of time.
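
A minimal sketch of that schedule idea (plan name, resource group, tiers, and worker counts are hypothetical; two scheduled runbooks would call something like this):

# Evening runbook: drop the App Service Plan to a cheaper tier and a single instance
Set-AzAppServicePlan -ResourceGroupName "my-rg" -Name "my-plan" -Tier "Standard" -NumberofWorkers 1

# Morning runbook: restore the production tier and instance count
Set-AzAppServicePlan -ResourceGroupName "my-rg" -Name "my-plan" -Tier "PremiumV3" -NumberofWorkers 3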

Azure Data Factory​

Review the failing pipelines: if a pipeline is constantly failing, it will probably impact the cost of your resource, so take action.

Again, I have reviewed a lot of Data Factory pipelines that fail continuously… take care of that as well.

PaaS SQL Optimization

With Azure Monitor, check whether the database needs all the DTUs provisioned. One thing I love to do is play with the different available tiers for the SQL database: if you’re running a tier with a lot of DTUs, implement a runbook to reduce the plan when you don’t need it (see the sketch below). For example, you can use: GitHub – francesco-sodano/azure-sql-db-autoscaling: This ARM Template deploys an Azure SQL Database with DTU Consumption plan (with a new Azure SQL Server) including all the resources required to perform Auto Scaling (scale up and scale down) based on Metric Alerts using a function app. Again, very useful for those workloads that are only needed during certain periods of time.
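
The core of such a runbook is a single cmdlet; a hedged sketch (server, database, and service objective names are hypothetical):

# Scale a DTU-based database down to S1 outside business hours
Set-AzSqlDatabase -ResourceGroupName "my-rg" -ServerName "my-sqlserver" `
    -DatabaseName "my-db" -RequestedServiceObjectiveName "S1"

# ...and the reverse operation in the morning runbook:
# Set-AzSqlDatabase -ResourceGroupName "my-rg" -ServerName "my-sqlserver" `
#     -DatabaseName "my-db" -RequestedServiceObjectiveName "S3"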

Focus on those DBs running at 40%-80% of their DTU capacity; those are the most important candidates to rescale.

Check whether you really need geo-replication; you probably don’t need to replicate your DB across regions (important point!!!). Remember the first bullets of this post: we need to start small and then plan big. If you enable geo-replication on DBs that are not being used, or on test DBs, you’re wasting your money.

Cosmos DB

With the help of metrics, review the usage to pick the correct size & throughput.

Consider autoscale for this type of resource (it avoids paying for unnecessary provisioned resources).

Consider the serverless option for Dev & Test environments, or those environments where traffic is intermittent: Consumption-based serverless offer in Azure Cosmos DB | Microsoft Learn

VM Right Sizing​
One thing I love to do is to shut down VMs based on a schedule.

The schedule is set with a tag on the resource, and the operation is performed by an Automation account (it could be a Logic App as well). For example, I love to use the following script:

Scheduled Virtual Machine Shutdown/Startup – Microsoft Azure | Automys

You can set up the following tag on the VMs

And your VMs will automatically shut down and start on the configured schedule. For those test and PRE environments where Azure Reservations don’t fit, this is simply great: you will save a bunch of compute hours with this simple script.
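
In case you prefer to roll your own, a minimal runbook sketch of the same idea (the tag name AutoShutdown is hypothetical; the Automys script linked above is far more complete):

# Minimal Automation runbook: stop every VM carrying the (hypothetical) tag AutoShutdown = 'true'
Connect-AzAccount -Identity   # sign in with the Automation account's managed identity

$vms = Get-AzVM -Status | Where-Object { $_.Tags['AutoShutdown'] -eq 'true' }
foreach ($vm in $vms) {
    if ($vm.PowerState -eq 'VM running') {
        Stop-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Force -NoWait
    }
}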

To cut costs further, we can use Spot VMs for non-priority tasks (they help save some money versus regular Azure VM prices); you can get more info at: Use Azure Spot Virtual Machines – Azure Virtual Machines | Microsoft Learn

Get rid of those old VM sizes

One thing I do for all of my clients to optimize cost is to check which size version the VMs are running; this can be extracted from the ARI (remember the first tool):

Why? Because, as you probably know, Microsoft is always optimizing the hardware in its datacenters and releasing new versions of VM sizes. So what’s the point? The older the VM size, the higher the cost; so check whether there is a newer VM size available, and you will be able to save some money on each VM.

Imagine that you have 100 VMs running on a v2 series, and changing from v2 to v5 represents a cost difference of €20/VM/month; in total the saving is €2,000/month just by moving the VMs to a newer version. Not bad, huh?

Azure Hybrid benefit

First question: do you have Software Assurance with Microsoft? If the answer is yes, don’t waste more time and money, and apply it to your Azure resources; it will help you save up to 40% in cost (for VMs and SQL).
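
Applying it to an existing Windows VM is a one-liner pattern; a hedged sketch (resource names are hypothetical):

# Enable Azure Hybrid Benefit on an existing Windows VM by setting its license type
$vm = Get-AzVM -ResourceGroupName "my-rg" -Name "my-vm"
$vm.LicenseType = "Windows_Server"
Update-AzVM -ResourceGroupName "my-rg" -VM $vm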

If you want to know how much you can save with this, you can use the Azure Calculator for this purpose: https://azure.microsoft.com/en-us/pricing/hybrid-benefit/#calculator

Storage Lifecycle

With this procedure I was able to save a lot of money in a recent IoT project: all the information was stored in blobs, and once a certain period of time passed, we moved the information from one tier to another to cut storage costs.
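
A hedged sketch of such a lifecycle rule (account name, container prefix, and the 30-day threshold are hypothetical), using the Az.Storage management policy cmdlets:

# Move blobs under 'telemetry/' to the Cool tier 30 days after their last modification
$action = Add-AzStorageAccountManagementPolicyAction -BaseBlobAction TierToCool `
    -DaysAfterModificationGreaterThan 30
$filter = New-AzStorageAccountManagementPolicyFilter -PrefixMatch "telemetry/"
$rule   = New-AzStorageAccountManagementPolicyRule -Name "iot-to-cool" -Action $action -Filter $filter

Set-AzStorageAccountManagementPolicy -ResourceGroupName "my-rg" `
    -StorageAccountName "mystorageaccount" -Rule $rule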

Networking

Check out the costs related to networking; they may scare you.

You will need to identify which applications are using most of the egress bandwidth, and review & redesign your infrastructure accordingly.

Check which gateways are not being used, probably those with a throughput lower than 900 MB/day.

Check your Azure ExpressRoute circuits; the initial provisioning of the circuit was probably larger than needed.

So, check Azure Monitor: Monitor – Microsoft Azure

Clean Orphan Resources

Are you sure that everything you have in your subscription is being used? Use this workbook and take action in your subscription: Azure Orphan Resources (microsoft.com)

Save Azure costs by deleting unused disks and public IPs, which keep generating storage and resource costs (remember that Azure Advisor surfaces these recommendations as well):

I’m sure that you will save a bunch of €€€ with this procedure

Use of Log Analytics

If you’re using Log Analytics to monitor your Azure resources, you should add a daily cap to your Log Analytics workspace: https://learn.microsoft.com/en-us/azure/azure-monitor/logs/daily-cap#view-the-effect-of-the-daily-cap

Also a few tips:

  • Use Azure Monitor Agent and Data Collection rules over Log Analytics agent
  • Set retention per table and leave the workspace retention at its default (see the sketch after this list)
  • Set archival tier per table – To meet certain compliance rules, you may need some of the data available for a longer period of time
  • Configure diagnostic settings with only the logs that are needed and used
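
A hedged sketch of per-table retention (workspace and resource names are hypothetical; Update-AzOperationalInsightsTable is the Az cmdlet for table-level settings at the time of writing):

# Keep only 30 days of interactive retention for a chatty table,
# while keeping up to 365 days in total for compliance
Update-AzOperationalInsightsTable -ResourceGroupName "my-rg" -WorkspaceName "my-law" `
    -TableName "ContainerLog" -RetentionInDays 30 -TotalRetentionInDays 365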

Use of Azure Advisor
I must admit that I’m a fan of Azure Advisor; for any project I have, I always tend to review Advisor in order to cut Azure costs.

It helps detect whether a virtual machine runs on a VM size GREATER than what it needs (based on CPU utilization under 5% in the last 14 days). If Azure Advisor reports an overprovisioned machine, investigate its use and resize it to a more suitable size.

For this VM right-sizing purpose, I also use a script from Jos Lieben, which helps put your underused VMs in the right size in terms of load: Automatic modular rightsizing of Azure VM’s with special focus on Azure Virtual Desktop | Liebensraum

Reserved Instances

Reserved instances allow us to reduce cost. There are a lot of resource types that can be reserved; take them into account when you’re designing your infrastructure.

As you can see, there are a lot of Azure resources available to be reserved, make use of them 🙂

Azure Advisor always recommends reserving instances of our resources; don’t forget it.

Azure Budgets

Budgets send a notification when a certain amount of money is spent; they can be set at the resource group or subscription level and, for example, email the application/subscription owner.
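
A hedged sketch of creating such a budget with the Az.Consumption module (the amount, names, and e-mail address are hypothetical):

# Monthly cost budget of 1,000 at resource-group scope, mailing the owner at 80% spend
New-AzConsumptionBudget -Name "rg-monthly-budget" -ResourceGroupName "my-rg" `
    -Amount 1000 -Category Cost -TimeGrain Monthly `
    -StartDate (Get-Date -Day 1).Date `
    -NotificationKey "Alert80" -NotificationEnabled -NotificationThreshold 80 `
    -ContactEmail "owner@contoso.com"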

Azure Cost Management

Remember to keep a close eye on the latest updates from Cost Management: https://azure.microsoft.com/en-us/blog/microsoft-cost-management-updates-november-2022/ I’m sure that you’ll take advantage of those new features 😉

Insights, the new ACM feature, allows us to get insights into our daily spending on Azure resources; we can detect what is a trend and what is a cost anomaly in our subscriptions.

Azure Governance Dashboard

If you want to deploy a high-level PowerBI visualization of your Azure resources, you can implement the CCO Dashboard from GitHub: https://github.com/Azure/ccodashboard

I know that this is more related to governance, but it helps to have a bird’s-eye view of the different resources and Azure subscriptions.

PRO TIP

If you really like these cost recommendations, there is a tool on GitHub: https://github.com/helderpinto/AzureOptimizationEngine which can enhance the Azure Advisor recommendations and help you optimize your environment.

Closing

That’s all! You probably already follow some of these recommendations, but I hope this post was interesting to you 😊

Till next time, merry Christmas and happy holidays!

How to stop Azure Application Gateway and Azure Firewall

Hi folks, summer is here and my holidays are very near, so I’m wrapping everything up to close my laptop and relax for a few weeks.

But before my well-deserved rest, I need to give you a piece of FinOps advice:

If you’re like me and often make demo setups in your Azure subscription involving resources like Azure Firewall and Application Gateway, you have probably realized that there is no easy way to gracefully shut down those “hungry” resources to save some money.

To stop VMs, we can simply use the Azure portal start/stop buttons, or use Automation accounts or whatever, but the Azure portal doesn’t allow you to stop an Application Gateway or Azure Firewall. In such cases, Azure PowerShell helps:

# Get Azure Application Gateway
$appgw = Get-AzApplicationGateway -Name "appgw_name" -ResourceGroupName "rg_name"
 
# Stop the Azure Application Gateway
Stop-AzApplicationGateway -ApplicationGateway $appgw
 
# Start the Azure Application Gateway
Start-AzApplicationGateway -ApplicationGateway $appgw

After executing the stop, we can see that the Operational State changes after a minute or so:

And for Azure Firewall we can use the following:

# Get the firewall and deallocate it (this stops billing for the firewall compute)
$firewall = Get-AzFirewall -ResourceGroupName rgName -Name azFw
$firewall.Deallocate()
$firewall | Set-AzFirewall

# To start it again, re-allocate the firewall to its VNet and public IP
$vnet = Get-AzVirtualNetwork -ResourceGroupName rgName -Name anotherVNetName
$pip = Get-AzPublicIpAddress -ResourceGroupName rgName -Name publicIpName
$firewall.Allocate($vnet, $pip)
$firewall | Set-AzFirewall

Now you know how to save some money with those resources, and I’m able to go on holiday and rest for a while.

Happy holidays!

Recommendations for deploying a Jump Host in Azure

Probably you’re asking yourself: what’s a jump host? In simple words, it is a virtual host that is not the one you use daily to read e-mail, browse the web, or install software; it is used to perform administrative tasks on one or multiple IT infrastructures.

These are some of the recommendations that I follow when I need to deploy a jump host in Azure. The first two are the most important; you have to be sure not to do either of these:

  • Do NOT install any productivity tools such as Office; it’s important to keep the VM as clean as possible. It is only considered to be a jump host, not a working device.
  • Do NOT use this VM for general internet browsing purposes

and some other recommendations…

  • Isolate the VM with an NSG; allow access only where it is really needed (see the sketch after this list)
  • Install the AntiMalware extension from Azure and configure Windows Defender Settings
  • If possible, configure JIT on the VM
  • Onboard the device in Microsoft Defender for Endpoint (if Possible)
  • Apply the Microsoft Security baseline
  • Enable Windows Defender Network Protection and Exploit Guard
  • Enable Virtualization based security, if you deployed a Gen 2 VM
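
As a hedged sketch of that first isolation point (the management prefix, names, and location are hypothetical), an NSG that only admits RDP from a management subnet could look like this:

# NSG allowing RDP to the jump host only from a (hypothetical) management prefix;
# everything else inbound keeps the default deny rules
$rdpRule = New-AzNetworkSecurityRuleConfig -Name "Allow-RDP-From-Mgmt" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
    -SourceAddressPrefix "10.0.100.0/24" -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 3389

New-AzNetworkSecurityGroup -ResourceGroupName "rg-jump" -Location "westeurope" `
    -Name "nsg-jumphost" -SecurityRules $rdpRule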

That’s all! As always, these are my recommendations; you probably have different ones.

First Impressions about Azure SFTP

SSH File Transfer Protocol (SFTP) is a very common protocol used by many customers for secure file transfer over a secure shell. Microsoft did not have a fully managed SFTP service in Azure, but now it is possible with Azure Blob Storage.

So, you will be able to use an SFTP client to connect to that storage account and manage the objects inside and even specify permissions for each user.

But before beginning, you will need to register the SFTP feature in your subscription; to do that, type the following:

# Set the Azure context for the desired subscription
az account set --subscription "xxxx-xxxx-xxxx-xxxx"

# Check whether the SFTP feature is already registered
az feature show --namespace Microsoft.Storage --name AllowSFTP

# Register the SFTP feature on your subscription
az feature register --namespace Microsoft.Storage --name AllowSFTP

Also, you can check that information in the Preview features option in the Azure portal:

Once you have that, you will need to enable the hierarchical namespace on the storage account; note that you can’t enable it on an existing storage account…

BEFORE

AFTER

At the time of writing, I couldn’t create the SFTP service through the Azure portal, or even with a template, when I selected West Europe as the destination; I’m sure it will be supported in the future.

Now we can deploy the ARM template to the RG previously created in Azure. But before doing that, you have to decide whether your user will connect with a password or an SSH key. In my case, I decided to implement it with an ARM template using an SSH key, but first you need to generate an SSH key pair:
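
For example (the file name is just a suggestion; ssh-keygen ships with Windows 10+ as well as Linux/macOS):

# Generate an RSA key pair; the .pub file is what goes into the template's publicKey parameter
ssh-keygen -t rsa -b 4096 -f sftpuser_key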

Next, I provide the two ARM templates, one for each type of implementation.

Template for Password Implementation:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountType": {
      "type": "string",
      "defaultValue": "Standard_LRS",
      "allowedValues": ["Standard_LRS", "Standard_ZRS"],
      "metadata": { "description": "Storage Account type" }
    },
    "location": {
      "type": "string",
      "defaultValue": "northeurope",
      "allowedValues": ["westeurope", "northcentralus", "eastus2", "eastus2euap", "centralus", "canadaeast", "canadacentral", "northeurope", "australiaeast", "switzerlandnorth", "germanywestcentral", "eastasia", "francecentral"],
      "metadata": { "description": "Region" }
    },
    "storageAccountName": {
      "type": "string",
      "metadata": { "description": "Storage Account Name" }
    },
    "userName": {
      "type": "string",
      "metadata": { "description": "Username of primary user" }
    },
    "homeDirectory": {
      "type": "string",
      "metadata": { "description": "Home directory of primary user. Should be a container." }
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-02-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[parameters('location')]",
      "sku": {
        "name": "[parameters('storageAccountType')]"
      },
      "kind": "StorageV2",
      "properties": {
          "isHnsEnabled": true,
          "isSftpEnabled": true
      },
      "resources": [
        {
          "type": "blobServices/containers",
          "apiVersion": "2021-02-01",
          "name": "[concat('default/', parameters('homeDirectory'))]",
          "dependsOn": ["[parameters('storageAccountName')]"],
          "properties": {
            "publicAccess": "None"
          }
        },
        {
          "type": "localUsers",
          "apiVersion": "2021-02-01",
          "name": "[parameters('userName')]",
          "properties": {
            "permissionScopes": [
                {
                  "permissions": "rcwdl",
                  "service": "blob",
                  "resourceName": "[parameters('homeDirectory')]"
                }
            ],
            "homeDirectory": "[parameters('homeDirectory')]",
            "hasSharedKey": false
          },
          "dependsOn": ["[parameters('storageAccountName')]"]
        }
      ]
    }
  ],
  "outputs": {
    "defaultContainer": {
      "type": "string",
      "value": "[parameters('homeDirectory')]"
    },
    "user": {
      "type": "object",
      "value": "[reference(
        resourceId('Microsoft.Storage/storageAccounts/localUsers', parameters('storageAccountName'), parameters('userName'))
      )]"
    }
  }
}

Template for SSH Implementation

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountType": {
      "type": "string",
      "defaultValue": "Standard_LRS",
      "allowedValues": ["Standard_LRS", "Standard_ZRS"],
      "metadata": { "description": "Storage Account type" }
    },
    "location": {
      "type": "string",
      "defaultValue": "northeurope",
      "allowedValues": ["westeurope", "northcentralus", "eastus2", "eastus2euap", "centralus", "canadaeast", "canadacentral", "northeurope", "australiaeast", "switzerlandnorth", "germanywestcentral", "eastasia", "francecentral"],
      "metadata": { "description": "Region" }
    },
    "storageAccountName": {
      "type": "string",
      "metadata": { "description": "Storage Account Name" }
    },
    "userName": {
      "type": "string",
      "metadata": { "description": "Username of primary user" }
    },
    "homeDirectory": {
      "type": "string",
      "metadata": { "description": "Home directory of primary user. Should be a container." }
    },
    "publicKey": {
      "type": "string",
      "metadata": { "description": "SSH Public Key for primary user." }
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2019-06-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[parameters('location')]",
      "sku": {
        "name": "[parameters('storageAccountType')]"
      },
      "kind": "StorageV2",
      "properties": {
          "isHnsEnabled": true,
          "isLocalUserEnabled": true,
          "isSftpEnabled": true
      },
      "resources": [
        {
          "type": "blobServices/containers",
          "apiVersion": "2019-06-01",
          "name": "[concat('default/', parameters('homeDirectory'))]",
          "dependsOn": ["[parameters('storageAccountName')]"],
          "properties": {
            "publicAccess": "None"
          }
        },
        {
          "type": "localUsers",
          "apiVersion": "2019-06-01",
          "name": "[parameters('userName')]",
          "properties": {
            "permissionScopes": [
                {
                  "permissions": "rcwdl",
                  "service": "blob",
                  "resourceName": "[parameters('homeDirectory')]"
                }
            ],
            "homeDirectory": "[parameters('homeDirectory')]",
            "sshAuthorizedKeys": [
              {
                "description": "localuser public key",
                "key": "[parameters('publicKey')]"
              }
            ],
            "hasSharedKey": false
          },
          "dependsOn": ["[parameters('storageAccountName')]"]
        }
      ]
    }
  ],
  "outputs": {
    "defaultContainer": {
      "type": "string",
      "value": "[parameters('homeDirectory')]"
    },
    "user": {
      "type": "object",
      "value": "[reference(
        resourceId('Microsoft.Storage/storageAccounts/localUsers', parameters('storageAccountName'), parameters('userName'))
      )]"
    },

    "keys": {
      "type": "object",
      "value": "[listKeys(resourceId('Microsoft.Storage/storageAccounts/localUsers', parameters('storageAccountName'), parameters('userName')), '2019-06-01')]"
    }
  }
}

Once you have deployed the template, you can go to the portal to configure the user permissions:

Remember to keep the password; without it you won’t be able to connect to the SFTP endpoint.

And now you can connect to the SFTP endpoint via PowerShell or your other preferred tool:
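
For reference, a hedged example of the connection format (the local user and account names are hypothetical; the SFTP username takes the form storage-account.local-user):

# Connect with the built-in sftp client; you'll be prompted for the password or key passphrase
sftp mystorageaccount.sftpuser@mystorageaccount.blob.core.windows.net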

And play with some of the files:

We can check the blob itself to review the information about the recent uploads:

As you have seen, you’re now able to deploy SFTP for Azure Blob Storage without worrying about container solutions or other weird experiments.

Till next time!

Messing around with AVD and AADJoin

In a previous post (Messing around with WVD, AADDS and FSLogix – Albandrod’s Memory (albandrodsmemory.com)) I talked about how AVD breaks some scenarios and how we could fix them.

On this occasion I will talk about my experience working with the new version of AAD join for AVD, which is finally in public preview. With this approach we can eliminate the need to have a domain controller or AADDS in place for our AVD deployment to work, but as you can imagine, it has some drawbacks.

The first important thing to be aware of when implementing this type of scenario is that when you’re adding the VMs to the host pool (HP), it is necessary to select the following option:

It is also important to decide whether we want to enroll the VMs in Intune or not. In my case I selected yes, and a few moments after the VM creation, I was able to see it in the endpoint portal:

After you have created the HP, my recommendation would be to configure it; you can use the following advanced RDP properties:

use multimon:i:0, which basically determines whether the session should use true multiple-monitor support when connecting to the remote computer.

To access Azure AD-joined VMs using the web, Android, macOS, iOS, and Microsoft Store clients, you must add targetisaadjoined:i:1 to the HP. These connections are restricted to entering user name and password credentials when signing in to the session host.

But what is more important for me, and what was driving me crazy at first, is the authentication in an AAD-joined AVD:

The following configurations are currently supported with Azure AD-joined VMs:

  • Personal desktops with local user profiles.
  • Pooled desktops used as a jump box. In this configuration, users first access the Azure Virtual Desktop VM before connecting to a different PC on the network. Users should not save data on the VM.
  • Pooled desktops or apps where users don’t need to save data on the VM. For example, for applications that save data online or connect to a remote database.

So don’t rack your brains trying to authenticate with your current user as in a domain-joined WVD; you will need to use a local profile for Azure AD-joined VMs, otherwise you will receive an error like the following, which will drive you nuts:

But after using the local user in the VM, you will be able to log in.

Once you log in to the VM, you can run dsregcmd /status to check the join status:

You can also check how the machine is enrolled in Intune, in the information regarding the enterprise registration 🙂

For me, AVD with AAD join is a pseudo Windows 365, but with custom images and without paying the full license to access the resource itself. The other aspects of AAD-joined AVD are pretty much the same as domain-joined, so have fun with them.

Till next time!

Swap OS disk to storage account

A quick post to remember what has to be done to swap your OS disk to a VHD in a storage account (yes, swapping from a managed disk to an unmanaged one; I know, I’m probably crazy, but for golden images it is great).

But imagine that you have a VM running on a managed OS disk and you need to swap that disk for an unmanaged one… how can you do that?

# Get the VM
$vm = Get-AzVM -ResourceGroupName myResourceGroup -Name myVM

# Make sure the VM is stopped/deallocated
Stop-AzVM -ResourceGroupName myResourceGroup -Name $vm.Name -Force

# Set the VM configuration to point to the new (unmanaged) disk
Set-AzVMOSDisk -VM $vm -Name "osDisk.vhd" -VhdUri "https://mystorageaccount.blob.core.windows.net/disks/osdisk.vhd"

# Update the VM with the new OS disk
Update-AzVM -ResourceGroupName myResourceGroup -VM $vm

# Start the VM
Start-AzVM -Name $vm.Name -ResourceGroupName myResourceGroup

That’s all! Your VM is now running on a VHD disk 🙂

Log Analytics Best Practices

Hi! You probably know that I’m a fan of Log Analytics, so in this post I want to share my thoughts on best practices for designing and setting up Log Analytics across several deployments. Let’s roll!

  • Use as few workspaces as possible: at the beginning I was using several workspaces (one per subscription), but in practice it is more useful to have only one (the only reasons to keep separate workspaces would be cost and retention). And if you want to control cost, use the table-level retention feature!
  • For long-term retention, move data to a storage account 🙂
  • Use one workspace per region: depending on where you operate and the applicable laws, it may be advisable to have different workspaces across regions (EMEA, APAC, US…)
  • Use Azure Policy to install the monitoring agents 🙂 it is very useful
  • Define proper RBAC: depending on the information you are ingesting into Log Analytics, it will be important that only certain people have access to certain data.
  • Set up alerting for events: yes, you are collecting a huge amount of data, but… are you creating alerts and monitoring rules for those important services?
  • Control the cost: it is easy to set up Log Analytics, and it is just as easy to ingest verbose data for every service, so your main goal should be to tweak the sources and the amount of information you’re ingesting into Log Analytics.

And finally, the last piece of advice… keep an eye on the Log Analytics roadmap. Staying up to date is my daily nightmare, so… be patient with this.

Till next time!

Forced Tunneling in Azure

I am not an expert on networking, but while working in Azure I sometimes have to face different configurations to fulfill customer requirements.

In this case, my customer wanted to redirect all the Internet traffic of the Azure VMs to on-premises. If you don’t configure forced tunneling, Internet-bound traffic from your VMs in Azure always traverses the Azure network infrastructure directly out to the Internet, without the option to inspect or audit the traffic.

And you know… nowadays, unauthorized Internet access can potentially lead to security breaches…

So I first thought about the way I used to do this kind of thing: create a route table, redirect 0.0.0.0/0 traffic to an NVA, and done. But this case was not the same, because I needed to redirect all the traffic.

So what is needed in this case is forced tunneling:

(Diagram showing forced tunneling.)

With that configuration, any connection from the mid-tier and back-end subnets is redirected back to on-premises via the S2S VPN, where the traffic can be inspected or even restricted.

The magic to achieve that scenario is done with PowerShell; there is no option to do it in the UI. You can check the full procedure in the following link: Configure forced tunneling for Site-to-Site connections – Azure VPN Gateway | Microsoft Docs

But pay special attention to the following commands:

# Get the on-premises (local) gateway and the Azure VPN gateway
$LocalGateway = Get-AzLocalNetworkGateway -Name "DefaultSiteHQ" -ResourceGroupName "ForcedTunneling"
$VirtualGateway = Get-AzVirtualNetworkGateway -Name "Gateway1" -ResourceGroupName "ForcedTunneling"

# Set the default site: Internet-bound traffic is now forced through the S2S tunnel
Set-AzVirtualNetworkGatewayDefaultSite -GatewayDefaultSite $LocalGateway -VirtualNetworkGateway $VirtualGateway

That is where the networking magic happens; the other steps in the article are more or less the same as what I usually do in my implementations.

So take that into account, and happy routing!