Deciphering the Azure Savings Plan

Current scenario

Until now in Azure, when we talked about saving costs on compute services, we had two alternatives (leaving software discounts aside):

  • Azure Reservations: they help us save money by committing to a particular VM size for 1 or 3 years. The cost savings can be up to 72% (the vendor's official figure; in my experience it has not gone beyond 40%) compared to Azure pay-as-you-go (PAYG) prices. Making a reservation does not affect the state of our resources: we reserve against a specific size, and the discount is applied automatically to any resources that match it.
  • Spot Virtual Machines: this type allows us to have compute at a lower cost than normal, but with one condition: there is no SLA. When Azure needs compute capacity, Spot machines are the first to be evicted, leaving us without compute for those sizes.

But during Ignite ’22, a third avenue for saving on compute resources was announced: Azure Savings Plans.

And what does it provide me?

It allows us to save compute costs by committing to a fixed price per hour. A Savings Plan can save up to 65% compared to Azure PAYG prices (again the vendor's figure; in my experience it has not exceeded 30%), always depending on the term we choose (1 or 3 years).

And what is the main difference?

Basically, in the way the resource is reserved. With an instance reservation we reserve, for example, a D4v4 size in West Europe for one year; with an Azure Savings Plan we set a fixed hourly spending rate for a certain term (1 or 3 years, with no possibility of cancellation), so that any compute resource that falls within the scope we have chosen can make use of that commitment, saving us compute costs on those resources.

How does it work?

Basically, we have to specify the amount of fixed money we want to spend per hour of compute, and automatically all the resources contained within the scope chosen at creation benefit from the discounted rate.

Therefore, it is extremely important to keep in mind that this type of solution does not fit everyone, since not all of us have a volume of compute resources that represents a fixed cost for our organization.

Likewise, we must specify how long we want to keep this commitment (1 or 3 years) and the form of payment (monthly or annual).

When creating an ASP, the portal will offer us different alternatives depending on the compute consumption we have, from the most conservative to the most aggressive strategy (although we can also configure manually how much we are willing to pay per hour).

Once we have created the ASP, the party begins: how do I know that the ASP is being applied to my resources? The answer is simple: you must trust 😛

The following image shows a sample of how it works:

If you look, the green line represents the amount of money I am paying, fixed, every hour (remember that this is 24×7, so €5/h ends up being approximately €3,600 per month), whether or not I use compute resources.

That last clause is very important: whether or not you use resources. What does this mean? If I use 100% of my compute resources and the price per hour is less than those €5, I still pay €5/h no matter what. On the other hand, if my compute consumption per hour is greater than those €5 (say, €6), I pay €5 at the fixed rate (which already contains a certain discount), and the remaining €1 at the PAYG price (which depends on the contract I have).

So here, we enter different price scales:

  1. Scenario 1: In a certain time slot, I go below the rate I set → I pay my fixed price per hour
  2. Scenario 2: In a certain time slot, my compute consumption is exactly what I set in the ASP → I pay my fixed price per hour
  3. Scenario 3: In a certain time slot, my compute consumption is greater than the ASP commitment → I pay my fixed price per hour + the PAYG price for whatever the ASP does not cover
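
To make the three scenarios concrete, here is a minimal sketch in PowerShell of how the hourly charge splits between the fixed commitment and the PAYG overage (the €5/h commitment and the usage figures are hypothetical):

# Hypothetical ASP billing split: usage is absorbed by the commitment first,
# anything above it is billed on top at PAYG prices. Illustrative numbers only.
$commitmentPerHour = 5.0    # fixed hourly commitment (EUR)

function Get-HourlyCharge([double]$usageAtAspRates) {
    # The commitment is paid whether or not it is consumed (scenarios 1 and 2);
    # consumption above it is charged additionally at PAYG prices (scenario 3).
    $overage = [Math]::Max(0.0, $usageAtAspRates - $commitmentPerHour)
    [pscustomobject]@{ Fixed = $commitmentPerHour; Overage = $overage; Total = $commitmentPerHour + $overage }
}

Get-HourlyCharge 3.0   # scenario 1: under-consumption, still pay EUR 5
Get-HourlyCharge 5.0   # scenario 2: exact fit, pay EUR 5
Get-HourlyCharge 6.0   # scenario 3: EUR 5 fixed + EUR 1 at PAYG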

This is important to understand, because the savings are applied automatically every hour, regardless of region, instance series, or OS.

What resources are contained in this type of solution?

As I write this article, the following Azure resources are in scope:

  1. Azure VMs (excludes A, G, and GS series)
  2. Container instances
  3. Azure Functions with a Premium plan
  4. Azure App Service with Premium v3 or Isolated v2 Plan
  5. Azure Dedicated Hosts

This does not mean that other resources will not be included in the future; I simply do not have more information.

And can I combine it with instance reservations?

Yes, without problem; in fact, it is the most suitable formula for saving costs. In this case, instance reservations are always applied first, and everything the reservation does not cover is then eligible to be covered by an ASP:

As we can see, everything that is not covered by an instance reservation or an ASP is paid at the normal compute price we have established in Azure (this will depend on the type of contract we have with Microsoft: EA / CSP / PAYG).

And does this ASP thing appear in Azure Advisor?

Yes, it should already appear in Advisor as a saving measure for the compute services in Azure, along with instance reservations.

We can even see that this option already appears in the Azure pricing calculator:

Once I have made an ASP commitment, can I cancel and/or change it?

No, it is not possible to cancel an ASP commitment or exchange it for another; we will have to live for 1-3 years with what we configured, and if we fall short, we will have to create a new ASP to cover the new demand (with the additional committed term that this new ASP implies).

What you can do is trade in an Azure instance reservation for an Azure Savings Plan (see Self-service trade-in for Azure savings plans – Microsoft Cost Management | Microsoft Learn), but not go from ASP to RI.

Any recommendations for creating an ASP?

My personal recommendation is to always go for a more conservative configuration, mainly because of the impossibility of cancelling this type of commitment; that will give us the opportunity to “play” with other configurations later.

Summarizing

Reservations only apply to compute resources that have been identified in advance, and to a specific region.

An Azure Savings Plan applies to all compute resources contained within its scope, so it gives us greater flexibility and automatic optimization compared to reservations.

When to choose one or the other?

  1. For compute resources with dynamic loads: Azure Savings Plans
  2. For resources that are stable over time, run continuously, and are not expected to be resized: Azure Reservations

There is no one-size-fits-all formula, but from a FinOps perspective this is how it looks 😊.

Additional information about Azure Savings Plans at: What is Azure savings plans for compute? – Microsoft Cost Management | Microsoft Learn


Best Practices about how to cut costs in Azure

Introduction

There is no one-size-fits-all when it comes to Azure and cost optimization, but the focus of this post is to share some tips & tricks from my daily life as a Cloud Solutions Architect.

Some general tasks can be performed monthly/quarterly to make sure that your Azure environment is up to date, keeping in mind that optimization and keeping your business running are the most important things here.

Be advised that not everything that can be done in Azure is covered in this post, probably because at the time of writing I have not had to deal with it.

Why this post?

Every design in Azure has cost implications; before architecting something, we must consider the budget that the project will need, taking into consideration things like:

  • Identify different boundaries for scaling up
  • Redundancy
  • BCP, taking into consideration the cost of the solution
  • Design and set up scalable architectures, focusing on metrics & performance
  • Start small and scale out as soon as the required performance demands it (I really love that one)
  • Choose PaaS and SaaS over IaaS; pay only for what you use as a consumer
  • Always monitor, audit & optimize the related cost

Ok I get it, but what are we going to cover?

Over the next few minutes I will explain some guidelines on cost optimization, in particular for the following topics:

  • Use of ARI
  • Use of Dev subscriptions
  • Optimal use of Azure App Services
  • Optimal use of Auto-Scale in App Services
  • Azure Data Factory Failed Pipelines
  • PaaS SQL Optimization
  • Cosmos DB
  • VM Right Sizing
  • Azure Hybrid Benefit
  • Blob Storage Lifecycle
  • Networking
  • Clean Orphan Resources
  • RIs
  • Use of Log Analytics
  • Use of Azure Advisor
  • Cost Management Preview (ACO Insights)
  • Azure Governance Dashboard
  • Closing

Before starting…

Before starting this post, I would recommend creating an Azure inventory of your environment; with this tool it is pretty simple: https://github.com/microsoft/ARI

And as you can observe, it will give you a great overview of what types of resources you have, their usage, locations, etc. Some of its sheets can also be used to optimize your Azure cost environment.
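
For reference, a minimal sketch of running it (this assumes the current ARI release ships as the AzureResourceInventory PowerShell module with an Invoke-ARI entry point; check the repo's README for the up-to-date syntax):

# Install the Azure Resource Inventory module (assumption: published under this name)
Install-Module -Name AzureResourceInventory -Scope CurrentUser

# Generate the Excel inventory, scoped to a tenant and subscription (placeholder IDs)
Invoke-ARI -TenantID "xxxx-xxxx-xxxx-xxxx" -SubscriptionID "yyyy-yyyy-yyyy-yyyy"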

Another tip I want to give: you can start your cost-optimization journey with a self-done assessment, which can give you some guidelines about where you are: https://docs.microsoft.com/en-us/assessments/

Use of Dev Subscriptions

Using the top-to-bottom approach, the first thing to pay attention to is Azure Dev/Test subscriptions, which are applicable to both Enterprise and pay-as-you-go offers. By placing your dev resources in those subscriptions, you will get lower prices for the most common Azure services, at the cost of excluding them from the regular vendor SLA commitments.

Optimal use of Azure App Services
First, check that your Standard and Premium plans have an associated application.

I have seen a lot of empty App Service plans, which leads to unnecessary cost for the customer; remember that having the right governance in your subscriptions is also a cost measure.
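
A quick way to spot those empty plans from PowerShell (the Az.Websites module exposes the number of hosted apps on each plan):

# List App Service plans that host no applications at all:
# pure cost with zero benefit, usually safe candidates for deletion
Get-AzAppServicePlan |
    Where-Object { $_.NumberOfSites -eq 0 } |
    Select-Object Name, ResourceGroup, @{ n = 'Tier'; e = { $_.Sku.Tier } }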

Another thing I tend to do is check the metrics for the plan and verify it is being used properly (scale down if needed, but remember the features required in each case).

Optimal use of Auto-Scale in App Services​

In my case, I am able to scale down my resources, but first check the features between Standard and Premium plans (or even between tiers within Standard or Premium!). You can also scale up/down based on a schedule: https://techcommunity.microsoft.com/t5/apps-on-azure-blog/azure-app-service-automatic-scaling/ba-p/2983300

Very useful for those workloads that we know are only needed during certain periods of time.
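
As a sketch, an Automation runbook (or any scheduled script) could move a plan between tiers at the edges of the working day; Set-AzAppServicePlan supports changing the tier directly (the plan and resource group names here are hypothetical):

# Evening: scale the plan down to Standard S1 while the workload is idle
Set-AzAppServicePlan -ResourceGroupName "rg-web" -Name "plan-web" `
    -Tier "Standard" -WorkerSize "Small" -NumberofWorkers 1

# Morning: scale back up to PremiumV3 before business hours
Set-AzAppServicePlan -ResourceGroupName "rg-web" -Name "plan-web" `
    -Tier "PremiumV3" -WorkerSize "Medium" -NumberofWorkers 2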

Azure Data Factory​

Review the failing pipelines: if a pipeline is constantly failing to run, it will probably impact the cost of your resource, so take action.

Again, I have reviewed a lot of pipelines in Data Factory that are continuously failing… take care of that as well.

PaaS SQL Optimization

With Azure Monitor, check whether the database needs all the DTUs provisioned. One thing I love to do is play with the different available plans for the SQL database: if you're running a tier with a lot of DTUs, implement a runbook to reduce the plan when you don't need it. For example, you can use GitHub – francesco-sodano/azure-sql-db-autoscaling: an ARM template that deploys an Azure SQL Database with a DTU consumption plan (and a new Azure SQL server), including all the resources required to perform auto-scaling (scale up and scale down) based on metric alerts using a function app. Again, very useful for workloads that are only needed during certain periods of time.
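
If you prefer something simpler than the full auto-scaling template, the tier change itself is a single cmdlet, so any scheduled runbook can do it (server and database names are hypothetical):

# Scale the database down to S0 outside business hours...
Set-AzSqlDatabase -ResourceGroupName "rg-data" -ServerName "sql-srv" `
    -DatabaseName "appdb" -RequestedServiceObjectiveName "S0"

# ...and back up to S3 before the morning load arrives
Set-AzSqlDatabase -ResourceGroupName "rg-data" -ServerName "sql-srv" `
    -DatabaseName "appdb" -RequestedServiceObjectiveName "S3"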

Focus on those DBs running at 40%-80% of their DTU capacity; those are the most important ones to rescale.

Check if you really need geo-replication; you probably don't need to replicate your DB across regions (important point!!!). Remember the first bullets of this post: we need to start small and then plan big. If we enable geo-replication on DBs that are not being used, or on test databases, we are wasting money.

Cosmos DB

With the help of metrics, review usage to pick the correct size & throughput.

Consider autoscale for these types of resources (it avoids provisioning unnecessary throughput).

Consider the serverless option for dev & test environments, or for environments with intermittent traffic: Consumption-based serverless offer in Azure Cosmos DB | Microsoft Learn

VM Right Sizing​
One thing I love to do is shut down VMs based on a schedule.

The schedule is set with a tag on the resource, and the operation is performed by an Automation account (it could be a Logic App as well). For example, I love to use the following script:

Scheduled Virtual Machine Shutdown/Startup – Microsoft Azure | Automys

You can set the schedule tag on your VMs, and they will automatically shut down and start on the configured schedule. For those test and pre-production environments where Azure Reservations do not fit, this is simply great: you will save a bunch of compute hours with this simple script. A rough sketch of the underlying idea follows.
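
The Automys runbook linked above does the schedule parsing for you; purely as an illustration of the pattern, here is a hedged sketch that stops every running VM carrying a shutdown tag (the tag name is hypothetical; the real script uses its own tag and schedule syntax):

# Stop all running VMs that carry the (hypothetical) "ShutdownAtNight" tag.
# A real runbook would parse the tag value as a schedule before acting.
$vms = Get-AzVM -Status | Where-Object {
    $_.Tags.ContainsKey('ShutdownAtNight') -and $_.PowerState -eq 'VM running'
}

foreach ($vm in $vms) {
    # -Force skips the confirmation prompt; deallocating stops compute billing
    Stop-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Force
}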

In order to cut costs further, we can use Spot VMs for non-priority tasks (they help save some money versus regular Azure VM prices); you can get more info at: Use Azure Spot Virtual Machines – Azure Virtual Machines | Microsoft Learn
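
For reference, a minimal sketch of requesting a Spot VM with Azure PowerShell (names are hypothetical; -MaxPrice -1 means "charge up to the regular price, but evict me when Azure needs the capacity"):

# Build a VM configuration with Spot priority; eviction deallocates instead of deleting
$vmConfig = New-AzVMConfig -VMName "vm-batch01" -VMSize "Standard_D2s_v5" `
    -Priority "Spot" -MaxPrice -1 -EvictionPolicy "Deallocate"

# ...add the OS profile, network, etc. as usual, then create it
New-AzVM -ResourceGroupName "rg-batch" -Location "westeurope" -VM $vmConfig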

Get rid of those old VM sizes

One thing that I do for all of my clients in order to optimize cost is to check which size generation the VMs are running; this can be extracted from the ARI (remember the first tool):

Why? Because, as you probably know, Microsoft is always optimizing the hardware in its datacenters and rolling out new versions of the VM sizes. So what's the point? The older the VM size, the higher the VM cost; check whether there is a newer VM size available, and you will be able to save some money on each VM.

Imagine that you have 100 VMs running on a v2 series, and changing from v2 to v5 represents a cost difference of €20/VM/month; in total the saving is €2,000/month just by moving the VMs to a newer version. Not bad, huh?

Azure Hybrid benefit

The first question is: do you have Software Assurance with Microsoft? If the answer is yes, don't waste more time and money: apply it to your Azure resources. It will help you save up to 40% in cost (for VMs and SQL).

If you want to know how much you can save with this, you can use the Azure Calculator for this purpose: https://azure.microsoft.com/en-us/pricing/hybrid-benefit/#calculator
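
Applying it to an existing Windows VM is just a change of the license type (resource names hypothetical):

# Flag an existing Windows VM as covered by Software Assurance (Azure Hybrid Benefit)
$vm = Get-AzVM -ResourceGroupName "rg-prod" -Name "vm-app01"
$vm.LicenseType = "Windows_Server"
Update-AzVM -ResourceGroupName "rg-prod" -VM $vm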

Storage Lifecycle

With this procedure I was able to save a lot of money on a recent IoT project: all the information was stored in blobs, and once a certain period of time had passed, we moved the data from one tier to another in order to cut storage costs.
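
A hedged sketch of such a rule with the Az.Storage management-policy cmdlets (account name and prefix are hypothetical; the idea: blobs untouched for 30 days move to Cool, and to Archive after 180):

# Tier blobs down as they age: Cool after 30 days, Archive after 180
$action = Add-AzStorageAccountManagementPolicyAction -BaseBlobAction TierToCool `
    -DaysAfterModificationGreaterThan 30
$action = Add-AzStorageAccountManagementPolicyAction -InputObject $action `
    -BaseBlobAction TierToArchive -DaysAfterModificationGreaterThan 180

$filter = New-AzStorageAccountManagementPolicyFilter -PrefixMatch "iot-telemetry/" `
    -BlobType blockBlob
$rule = New-AzStorageAccountManagementPolicyRule -Name "tier-down-iot" `
    -Action $action -Filter $filter

Set-AzStorageAccountManagementPolicy -ResourceGroupName "rg-iot" `
    -StorageAccountName "stiotdata" -Rule $rule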

Networking

Check out the costs related to networking; they may scare you.

You will need to identify which applications are using most of the egress bandwidth, and review & redesign your infrastructure accordingly.

Check which gateways are not being used, probably those with a throughput lower than 900 MB/day.

Check your Azure ExpressRoute circuits; probably the first provisioning of the circuit was larger than needed.

So, check Azure Monitor: Monitor – Microsoft Azure

Clean Orphan Resources

Are you sure that everything you have in your subscription is being used? Use this workbook and take action in your subscription: Azure Orphan Resources (microsoft.com)

Save Azure costs by deleting those unused disks and public IPs, which keep consuming storage and account cost (remember that Azure Advisor surfaces these recommendations as well):

I’m sure that you will save a bunch of €€€ with this procedure
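
Two of the most common orphans can also be listed straight from PowerShell, as a quick complement to the workbook:

# Managed disks not attached to any VM: still billed at the full disk rate
Get-AzDisk | Where-Object { $_.DiskState -eq 'Unattached' } |
    Select-Object Name, ResourceGroupName, DiskSizeGB

# Public IPs not associated with any NIC or load balancer
Get-AzPublicIpAddress | Where-Object { -not $_.IpConfiguration } |
    Select-Object Name, ResourceGroupName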

Use of Log Analytics

If you're using Log Analytics to monitor your Azure resources, you should add a daily cap to your Log Analytics workspace: https://learn.microsoft.com/en-us/azure/azure-monitor/logs/daily-cap#view-the-effect-of-the-daily-cap

Also a few tips:

  • Use Azure Monitor Agent and Data Collection rules over Log Analytics agent
  • Set retention per table and leave the workspace retention to its default
  • Set archival tier per table – To meet certain compliance rules, you may need some of the data available for a longer period of time
  • Configure diagnostic settings with only the logs that are needed and used

Use of Azure Advisor
I must admit that I'm a fan of Azure Advisor; for any project I have, I always tend to review Advisor in order to cut Azure costs.

It helps detect whether a virtual machine runs on a VM size GREATER than what it needs (based on CPU utilization under 5% in the last 14 days). If Azure Advisor reports an overprovisioned machine, you need to investigate its use and resize it to a more suitable size.

For this VM right-sizing purpose, I also use a script from Jos Lieben, which helps put your underused VMs at the right size in terms of load: Automatic modular rightsizing of Azure VM's with special focus on Azure Virtual Desktop | Liebensraum
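
You can also pull the same recommendations out of Advisor programmatically; a minimal sketch with the Az.Advisor module:

# List Advisor cost recommendations (right-sizing, idle resources, reservations...)
Get-AzAdvisorRecommendation -Category Cost | Format-List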

Reserved Instances

Reserved instances allow us to reduce cost; there are a lot of resources that can be reserved, so take them into account when you're designing your infrastructure.

As you can see, there are a lot of Azure resources available to be reserved, make use of them 🙂

Azure Advisor always recommends reserving instances of our resources; don't forget it.

Azure Budgets

Send a notification when a certain amount of money has been spent; this can be set at the resource group or subscription level and can, for example, e-mail the application/subscription owner.
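
A hedged sketch of creating such a budget with New-AzConsumptionBudget (amount, dates, and e-mail address are hypothetical):

# Monthly cost budget of EUR 1,000 on the current subscription,
# notifying the owner once 80% of it has been spent
New-AzConsumptionBudget -Name "monthly-cap" -Amount 1000 `
    -Category Cost -TimeGrain Monthly `
    -StartDate "2024-01-01" -EndDate "2024-12-31" `
    -NotificationKey "owner-alert" -NotificationEnabled `
    -NotificationThreshold 80 -ContactEmail "owner@contoso.com"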

Azure Cost Management

Remember to keep a close eye on the latest updates from Cost Management: https://azure.microsoft.com/en-us/blog/microsoft-cost-management-updates-november-2022/ I'm sure that you'll take advantage of those new features 😉

Insights, the new feature of ACM, allows us to get some insight into our daily spending on Azure resources; we can detect what is a trend, and what is a cost anomaly, in our subscriptions.

Azure Governance Dashboard

If you want to deploy a high-level visualization of your Azure resources in Power BI, you can implement the CCO Dashboard from GitHub: https://github.com/Azure/ccodashboard

I know that this is more related to governance, but it helps to have a bird's-eye view of the different resources and Azure subscriptions.

PRO TIP

If you really like these cost recommendations, there is a tool on GitHub: https://github.com/helderpinto/AzureOptimizationEngine which can enhance the Azure Advisor recommendations and help you optimize your environment.

Closing

That's all; probably you already follow some of these recommendations, but I hope this post was interesting to you 😊

Till next time, merry Christmas and happy holidays!

Conditional Access Tips From the Trenches

I want to drop some lines about my experience deploying several Azure AD Conditional Access projects.

  • Always exclude your emergency accounts from the Conditional Access policies (remember: if you don't have an emergency account, you're late); this is something that I always tell my customers, and I will never give it up
  • Don't enable new policies without communicating properly to the organization, and also foresee the impact on the users (you will save a lot of tickets for the customer-service team)
  • Don't enable policies that require compliant or hybrid-Azure-AD-joined devices without verifying the state of the devices in the Azure portal (same as before: you will save tickets and interruptions for the end users)
  • Be careful with including the “All apps” application in your policies; you can get a nasty surprise (in my case it happened to me in Azure: I put some exclusions in place, but it seems that sometimes the portal randomly calls other APIs that cannot be controlled (and do not exist in AAD), so the user got blocked in the portal even though the policy made sense)
  • If you go ahead with an “All cloud apps” policy based on devices, be sure to exclude the “Intune Enrollment” app in the policy, or you won't be able to enrol new devices
  • It is very easy to include multiple cases in one policy, but if you want to troubleshoot what is happening, it is easier to segment it into multiple policies. Eat the elephant bite by bite: we have to weigh having and managing several policies against being able to troubleshoot correctly.
  • It is recommended to use a naming convention for your policies; at a bird's-eye view you will then know what each policy is for (user, device, administrator, guest)

So, this is all; probably you're already following most of these recommendations, but if not, don't be a fool 😉

Till next time!

OATH Hardware Tokens for AzureAD

As you have probably read in my previous posts, I've been talking about FIDO2 keys and how they can be used for authentication when signing in to Azure AD.

Today, I want to talk about OATH hardware tokens, also known as time-based one-time password (TOTP) tokens.

As you are aware, some authentication methods can be used as the primary factor when you sign in to an application or device, such as using a FIDO2 security key or a password. Other authentication methods are only available as a secondary factor when you use Azure AD Multi-Factor Authentication or SSPR.

The following table outlines when an authentication method can be used during a sign-in event:

OATH TOTP is an open standard that specifies how one-time password (OTP) codes are generated. OATH TOTP can be implemented using either software or hardware to generate the codes, and OATH TOTP hardware tokens typically come with a secret key, or seed, pre-programmed into the token.

In this post, I will show you how the OTP C200 token from Feitian can be configured in Azure AD and how it works.

First of all, what you have to do is register the key in Azure AD. To do this, you will need the serial number of the key and the secret key provided by the manufacturer, and then you need to create a CSV file with all the information:
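
The upload format is a small CSV; a sample of its shape (the header row follows the format Microsoft documents for OATH token uploads; the UPN, serial number and secret below are placeholders):

upn,serial number,secret key,time interval,manufacturer,model
alice@contoso.com,1234567,2234567abcdef2234567abcdef,60,Feitian,OTPC200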

Once you have done this, the keys must be uploaded into Azure AD: Multifactor authentication – Microsoft Azure

Upload the file and activate the key in the portal; once this has been done, it will show you a screen like the following:

If you have any error during the upload, it will be shown in the portal itself:

You must consider that you can activate a maximum of 200 OATH tokens every 5 minutes.

Also, as you have probably figured out, users may have a combination of OATH hardware tokens, the Authenticator app, FIDO keys, etc.

Be aware that users can configure their default sign-in method on the security info page: My Sign-Ins | Security Info | Microsoft.com

So, once the key has been configured for the user, what is the flow to access the account?

I have compared the authentication flow with the FIDO2 key flow; the difference you can appreciate is that with FIDO2 keys it is not necessary to enter my password.

Finally, check out the following table from Microsoft, where you can see different persona cases and which passwordless technology can be used for each of them.

IMHO, FIDO keys are great, but thinking as an end user they have one problem: the first setup. We must rely on end users to configure the key and associate it with Azure AD correctly (remember the previous table). FIDO keys do have the advantage that they can be used to sign in to the computer instead of a password.

On the other hand, OATH keys are great because you, as an administrator, can configure the keys in the AAD portal and, once they have been activated, hand them to end users without any further action needed from the end user's perspective. And, most importantly, they are very easy to use.

Thanks to Feitian for providing such amazing tokens

How to stop Azure Application Gateway and Azure Firewall

Hi folks, summer is here and my holidays are very near, so I’m wrapping everything up to close my laptop and relax for a few weeks.

But before my deserved rest, I need to give you some FinOps advice:

If you're like me and often build demo setups in your Azure subscription that involve resources like Azure Firewall and Application Gateway, you have probably realized that there is no easy way to gracefully shut down all those “hungry” resources to save some money.

To stop VMs, we can simply use the Azure portal start/stop buttons, or use Automation accounts or whatever, but the Azure portal doesn't allow you to stop an Application Gateway or an Azure Firewall. In such cases, Azure PowerShell helps:

# Get Azure Application Gateway
$appgw = Get-AzApplicationGateway -Name "appgw_name" -ResourceGroupName "rg_name"
 
# Stop the Azure Application Gateway
Stop-AzApplicationGateway -ApplicationGateway $appgw
 
# Start the Azure Application Gateway
Start-AzApplicationGateway -ApplicationGateway $appgw

After executing the stop, we will be able to see that the operational state changes after a minute or so:

And for the Azure Firewall we can use the following:

# Deallocate the Azure Firewall to stop paying for it while it is not in use
$firewall = Get-AzFirewall -ResourceGroupName rgName -Name azFw
$firewall.Deallocate()
$firewall | Set-AzFirewall

# Reallocate the firewall to its virtual network and public IP to bring it back
$vnet = Get-AzVirtualNetwork -ResourceGroupName rgName -Name anotherVNetName
$pip = Get-AzPublicIpAddress -ResourceGroupName rgName -Name publicIpName
$firewall.Allocate($vnet, $pip)
$firewall | Set-AzFirewall

Now you know how to save some money when using those resources, and I'm able to go on holiday and rest for a while.

Happy holidays!

Recommendations for deploying a Jump Host in Azure

Probably you're asking yourself: what's a jump host? In simple words, it is a virtual host which is not the same one you use daily to read e-mail, browse the web, or install software, but one used to perform administrative tasks for one or multiple IT infrastructures.

These are some of the recommendations that I follow when I need to deploy a jump host in Azure. The first two are the most important; you have to be sure not to do either of these:

  • Do NOT install any productivity tools such as Office; it's important to keep the VM as clean as possible. It is considered to be a jump host only, not a working device.
  • Do NOT use this VM for general internet browsing purposes

and some other recommendations…

  • Isolate the VM with an NSG; only allow access where it is really needed
  • Install the AntiMalware extension from Azure and configure Windows Defender Settings
  • If possible, configure JIT on the VM
  • Onboard the device in Microsoft Defender for Endpoint (if Possible)
  • Apply the Microsoft Security baseline
  • Enable Windows Defender Network Protection and Exploit Guard
  • Enable Virtualization based security, if you deployed a Gen 2 VM

That's all; as always, these are my recommendations, and probably you have different ones.

My password recommendations from the trenches

The following are recommendations and thoughts that I have extracted from working with several customers; maybe you will find them obvious, but for other people they could be useful. So, let's begin:

In the identity plane, we could say that two categories exist:

  • Resist Common attacks
  • Contain successful attacks

I don't want to go into how to resist or contain attacks, because I have probably covered some of those topics in other blog entries, but for me there is another category, which is: understanding human nature.

Nothing more than understanding that almost every rule we impose on end users results in a degradation of security. Why? Because we force users to use long passwords with special characters, and in the end users tend to reuse passwords, which makes it easier for malicious actors to guess or crack them.

So, in this post I will summarize some of my experiences as anti-patterns and recommendations:

  • Antipattern – Requiring long passwords: excessively long password requirements (more than 10 characters) can result in predictable behaviour; users tend to choose repeating patterns (heyholetsgoheyholetsgo) that meet the character length but are clearly not hard to guess. These passwords look strong on paper, but the behaviours they encourage make them easy to guess.
    • SuperPRO tip: you can use a long password, but in this case what I recommend is something that engineers from Microsoft do: they use a very loooooooong password, they forget it, and instead they use passwordless mechanisms such as Windows Hello to sign in.

My tip: use a minimum length requirement of 8 characters, but ban common passwords with Azure AD Password Protection.

  • Antipattern – Requiring multiple character sets: probably you're not on the same line as me, but I've seen that this rule does more harm than good. People use predictable substitutions such as $ for s, @ for a, 1 for l. So keep it in mind.
  • Antipattern – Password expiration: expiration policies drive users to very predictable passwords (for example, the next password can be predicted from the previous one); end users do not tend to pick a new password, they tend to update the old one.

My tip for the two previous points: Azure AD Password Protection + Conditional Access based on User Identity

  • Recommendation – Ban common passwords: for me, the most important restriction is to ban the use of common passwords to reduce the possibility of brute-force or password-spray attacks

Tip: Look at my first tip 😊

  • Recommendation – Educate end users not to use organization credentials anywhere else: yes, I know that educating users is difficult, but you have to do it, because they tend to reuse the same password across multiple sites, and it is common practice for cyber criminals to try compromised credentials across many sites.
  • Recommendation – Enforce MFA registration and enable MFA: ensure that users keep their security information up to date, so they can respond to security challenges if needed. Doing this, I have seen that end users become more engaged with digital security

Enabling MFA prevents up to 99.9% of identity attacks, and if we use other controls such as user location, even better.

PRO TIP: Use Conditional access with FIDO2 security key (PassWordless Authentication with Fido 2 Keys – Albandrod’s Memory (albandrodsmemory.com))

EndUser TIP: Consider turning on two-step verification everywhere you can

  • Recommendation – Enable risk-based authentication: when the system detects suspicious activity, it challenges the user to ensure that they are the legitimate account owner. Personally, I think this feature is great; the only drawback is that it is only included with AAD P2

Probably you will have different ones based on your experience, but these are my recommendations. Till next time, and stay safe!

First Impressions about Azure sFTP

SSH File Transfer Protocol (SFTP) is a very common protocol used by many customers for secure file transfer over a secure shell. Microsoft did not have a fully managed SFTP service in Azure, but now it is possible with Azure Blob Storage.

So, you will be able to use an SFTP client to connect to that storage account and manage the objects inside and even specify permissions for each user.

But before beginning, you will need to register the SFTP feature in your subscription; to do that, you have to type the following:

# Set the Azure context for the desired subscription
az account set --subscription "xxxx-xxxx-xxxx-xxxx"

# Check if the SFTP feature is registered first
az feature show --namespace Microsoft.Storage --name AllowSFTP

# Register the SFTP feature on your subscription
az feature register --namespace Microsoft.Storage --name AllowSFTP

Also, you can check that information under Preview features in the Azure portal:

Once you have that, you will need to enable the hierarchical namespace on the storage account; note that you can't enable it on an existing storage account…

BEFORE

AFTER

At the time of writing, I couldn't create the SFTP service through the Azure portal, or even with a template, when I selected West Europe as the destination; I'm sure that it will be supported in the future.

Now we can deploy the ARM template to the RG previously created in Azure. But before doing that, you have to decide whether your user will connect with a password or an SSH key. In my case, I decided to implement it with an ARM template using an SSH key, but first you need to generate an SSH key pair:
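
Generating the pair is standard OpenSSH (the file name is hypothetical); the content of the .pub file is what goes into the template's publicKey parameter:

# Generate an RSA key pair for the SFTP local user; keep the private key safe
ssh-keygen -t rsa -b 4096 -f sftpuser_key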

Next, I provide the two ARM templates for both types of implementation.

Template for Password Implementation:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountType": {
      "type": "string",
      "defaultValue": "Standard_LRS",
      "allowedValues": ["Standard_LRS", "Standard_ZRS"],
      "metadata": { "description": "Storage Account type" }
    },
    "location": {
      "type": "string",
      "defaultValue": "northeurope",
      "allowedValues": ["westeurope", "northcentralus", "eastus2", "eastus2euap", "centralus", "canadaeast", "canadacentral", "northeurope", "australiaeast", "switzerlandnorth", "germanywestcentral", "eastasia", "francecentral"],
      "metadata": { "description": "Region" }
    },
    "storageAccountName": {
      "type": "string",
      "metadata": { "description": "Storage Account Name" }
    },
    "userName": {
      "type": "string",
      "metadata": { "description": "Username of primary user" }
    },
    "homeDirectory": {
      "type": "string",
      "metadata": { "description": "Home directory of primary user. Should be a container." }
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-02-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[parameters('location')]",
      "sku": {
        "name": "[parameters('storageAccountType')]"
      },
      "kind": "StorageV2",
      "properties": {
          "isHnsEnabled": true,
          "isSftpEnabled": true
      },
      "resources": [
        {
          "type": "blobServices/containers",
          "apiVersion": "2021-02-01",
          "name": "[concat('default/', parameters('homeDirectory'))]",
          "dependsOn": ["[parameters('storageAccountName')]"],
          "properties": {
            "publicAccess": "None"
          }
        },
        {
          "type": "localUsers",
          "apiVersion": "2021-02-01",
          "name": "[parameters('userName')]",
          "properties": {
            "permissionScopes": [
                {
                  "permissions": "rcwdl",
                  "service": "blob",
                  "resourceName": "[parameters('homeDirectory')]"
                }
            ],
            "homeDirectory": "[parameters('homeDirectory')]",
            "hasSharedKey": false
          },
          "dependsOn": ["[parameters('storageAccountName')]"]
        }
      ]
    }
  ],
  "outputs": {
    "defaultContainer": {
      "type": "string",
      "value": "[parameters('homeDirectory')]"
    },
    "user": {
      "type": "object",
      "value": "[reference(
        resourceId('Microsoft.Storage/storageAccounts/localUsers', parameters('storageAccountName'), parameters('userName'))
      )]"
    }
  }
}

Template for SSH Implementation

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountType": {
      "type": "string",
      "defaultValue": "Standard_LRS",
      "allowedValues": ["Standard_LRS", "Standard_ZRS"],
      "metadata": { "description": "Storage Account type" }
    },
    "location": {
      "type": "string",
      "defaultValue": "northeurope",
      "allowedValues": ["westeurope", "northcentralus", "eastus2", "eastus2euap", "centralus", "canadaeast", "canadacentral", "northeurope", "australiaeast", "switzerlandnorth", "germanywestcentral", "eastasia", "francecentral"],
      "metadata": { "description": "Region" }
    },
    "storageAccountName": {
      "type": "string",
      "metadata": { "description": "Storage Account Name" }
    },
    "userName": {
      "type": "string",
      "metadata": { "description": "Username of primary user" }
    },
    "homeDirectory": {
      "type": "string",
      "metadata": { "description": "Home directory of primary user. Should be a container." }
    },
    "publicKey": {
      "type": "string",
      "metadata": { "description": "SSH Public Key for primary user." }
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2019-06-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[parameters('location')]",
      "sku": {
        "name": "[parameters('storageAccountType')]"
      },
      "kind": "StorageV2",
      "properties": {
          "isHnsEnabled": true,
          "isLocalUserEnabled": true,
          "isSftpEnabled": true
      },
      "resources": [
        {
          "type": "blobServices/containers",
          "apiVersion": "2019-06-01",
          "name": "[concat('default/', parameters('homeDirectory'))]",
          "dependsOn": ["[parameters('storageAccountName')]"],
          "properties": {
            "publicAccess": "None"
          }
        },
        {
          "type": "localUsers",
          "apiVersion": "2019-06-01",
          "name": "[parameters('userName')]",
          "properties": {
            "permissionScopes": [
                {
                  "permissions": "rcwdl",
                  "service": "blob",
                  "resourceName": "[parameters('homeDirectory')]"
                }
            ],
            "homeDirectory": "[parameters('homeDirectory')]",
            "sshAuthorizedKeys": [
              {
                "description": "localuser public key",
                "key": "[parameters('publicKey')]"
              }
            ],
            "hasSharedKey": false
          },
          "dependsOn": ["[parameters('storageAccountName')]"]
        }
      ]
    }
  ],
  "outputs": {
    "defaultContainer": {
      "type": "string",
      "value": "[parameters('homeDirectory')]"
    },
    "user": {
      "type": "object",
      "value": "[reference(
        resourceId('Microsoft.Storage/storageAccounts/localUsers', parameters('storageAccountName'), parameters('userName'))
      )]"
    },

    "keys": {
      "type": "object",
      "value": "[listKeys(resourceId('Microsoft.Storage/storageAccounts/localUsers', parameters('storageAccountName'), parameters('userName')), '2019-06-01')]"
    }
  }
}

Once you have deployed the template, you can go to the portal to configure the user permissions:

Remember to keep the password; without it, you won't be able to connect to the SFTP endpoint.

And now you can connect to the SFTP endpoint via PowerShell or your other preferred tool
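
For example, with the OpenSSH client (account and user names are hypothetical, and I'm hedging on the username format: per Microsoft's docs the local user is addressed as storage-account.username, and the portal shows the exact connection string for each local user):

# Connect with the generated private key; Azure blob SFTP listens on the blob endpoint
sftp -i sftpuser_key stiotdata.sftpuser@stiotdata.blob.core.windows.net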

And play with some of the files:

We can check the blob itself to review the information about the recent uploads:

As you have seen, you're now able to deploy your SFTP service on Azure Blob Storage without worrying about container solutions or other weird experiments.

Till next time!

PassWordless Authentication with Fido 2 Keys

This is something I had wanted to test for some time, and now, thanks to Feitian, I was able to do it. So let's dig into detail: what is passwordless with FIDO2 keys, how can we configure it in Azure AD, and what advantages does it provide to an end user? Let's begin!

But before digging in deeper, let me explain the basics: a security key is a piece of hardware that you can connect to your computer or phone to verify your credentials when logging in. Unlike a password, it is far safer, because the credential is different for each system.

So, what do FIDO2 keys do? As you probably know, logging in to a resource requires a username and password, and with MFA it usually requires a username/password combination plus one other authentication factor, like a time-based one-time password. FIDO2, in contrast, is a standards-based method of user authentication that is passwordless, supporting PINs and biometrics in security tokens.

For starters, with FIDO you can:

  • Improve security with crypto-secured passwordless authentication
  • Remove the helpdesk costs associated with forgotten passwords by replacing them with a simple PIN or fingerprint
  • Remove the user-experience annoyances of long passwords to create, remember and reset so that your workforce can get on with their role simply and seamlessly.

What about the preparation of AzureAD?

For IT, at a high level there are only two tasks to accomplish:

  • Enable the new authentication method registration on AzureAD
  • Enable FIDO2 as an authentication method

Easy, isn’t it?

What about the registration for end users?

In my case, since the security key is a biometric one, what I needed to do first was register my fingerprint. Once I did this (the manufacturer provides the details), you're ready to go with the next steps.

In order to register the security token with Azure AD, the user will need to go to https://aka.ms/setupsecurityinfo, where they will be able to see all the authentication methods available to them:

And once the user has selected the security key option, the registration process will begin. In my case, I selected USB device and then… I needed to provide a PIN for the security key:

One thing to keep in mind is that users have to set up their own PIN to use their key; there is no way to enforce or centrally manage PINs, so it is quite likely that your users end up using PINs like 123456.

Once you have registered the key, it will appear in the Security info panel:

OK, it's great what you're explaining, but how is it used?

With the following video, I want to show how the passwordless authentication process in Azure AD works:

As you can see, the login was done without entering any username or password. If you're convinced and want to start deploying FIDO2 keys in your organization, think first about the following points:

Registration

  • Control to ensure that the employee has been through sufficient identity checks to create a trusted identity.

Issuance

The organisation needs policy control over:

  • The type of FIDO device used (external USB / Bluetooth)
  • The organisation needs to consider the type of user verification required (Fingerprint / NFC)
  • The end user needs a simple experience during registration of a FIDO credential
  • The organization needs to trust the genuineness of the FIDO device being used for the FIDO credential

Lifecycle Management

  • Vision of who has been assigned which FIDO Credentials
  • Ability to simply revoke access to all systems accessed by the FIDO Credential
  • Ability to manage lost devices / replacement devices / back up devices

Authentication

  • The end user needs a simple experience to authenticate to systems; usernameless authentication aids this process.

As you can see, FIDO2 keys are great and, what is better, they not only work with Azure AD; they can be used to authenticate with other services like Twitter, Instagram, etc…

Link References:

Register your key at https://aka.ms/mysecurityinfo

If you are a Microsoft 365 admin, use an interactive guide at https://aka.ms/passwordlesswizard