This is a multi-part post - you can find all related posts here:
The Function App is written in PowerShell and is therefore kind of slow for this purpose. However, the number of requests is so low that it does not matter in this case. Unless you deploy 100 AWS accounts a minute, you will be fine 🙂
The code can be found here.
The API provides four endpoints:
Below, you will find examples for each endpoint.
List root-users by either:
```powershell
$uri = 'https://<function_name>.azurewebsites.net/api/getAwsRootAccount?code=<auth code>&aws_account_id=12345'
```
```powershell
$uri = 'https://<function_name>.azurewebsites.net/api/newAwsRootAccount?code=<auth code>'
```
```powershell
$uri = 'https://<function_name>.azurewebsites.net/api/updateAwsRootAccount?code=<auth code>'
```
```powershell
$uri = 'https://<function_name>.azurewebsites.net/api/deleteAwsRootAccount?code=<auth code>'
```
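For illustration, calling one of these endpoints from PowerShell could look like this. This is only a sketch - the body field names are assumptions, so check the function code in the repo for the actual expected parameters:

```powershell
# Hypothetical call to the "new" endpoint; the body field name is an assumption
$uri  = 'https://<function_name>.azurewebsites.net/api/newAwsRootAccount?code=<auth code>'
$body = @{ aws_account_id = '12345' } | ConvertTo-Json

$response = Invoke-RestMethod -Uri $uri -Method Post -Body $body -ContentType 'application/json'
$response
```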
Some of you might have already spotted it: there are some pipelines included in the repo, in the .azuredevops folder. They are written for Azure Pipelines, and I would suggest you give them a go.
If you want to learn more about those pipelines, I would suggest a previous post; they are all explained in further detail over there 🙂
With this, your AWS team can manage all e-mail related tasks on their own, and your IT department has nothing to worry about. And if you run out of aliases, just create another shared mailbox using the script and you are good to go.
This is a quick overview of some tools you should check out when you want to work properly with Terraform.
Terraform has sparked an entire ecosystem of other tools to make your life better. Terraform itself has some testing integrated - even if it's not the greatest, better than nothing, right? terraform fmt formats your code, terraform validate checks for invalid configurations. If you want to do more, you have come to the right place.
You can find the pipeline code in the Repo terraform-pipelines on my GitHub account. All pipelines are written for Azure Pipelines and with a heavy use of templating.
The pipelines are written for Azure Pipelines, and they have some prerequisites.
All files related to pipelines are stored in the repo root in the folder .azuredevops. In there are two folders:
The bread and butter - a pipeline to deploy your terraform configuration.
It consists of two stages: the first creates a terraform plan and checks whether any changes were made. If so, the second stage - terraform apply - will run; otherwise, it is skipped and the run is complete.
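A simplified sketch of how such a two-stage layout can look in Azure Pipelines YAML (this is illustrative, not the actual template from the repo; stage, job and variable names are made up, and publishing the plan file as an artifact between the stages is omitted here):

```yaml
stages:
  - stage: plan
    jobs:
      - job: terraform_plan
        steps:
          - script: |
              terraform init
              # -detailed-exitcode: 0 = no changes, 2 = changes present
              terraform plan -detailed-exitcode -out=tfplan; ec=$?
              if [ "$ec" -eq 2 ]; then
                echo "##vso[task.setvariable variable=changes;isOutput=true]true"
              elif [ "$ec" -ne 0 ]; then
                exit "$ec"
              fi
            name: plan

  - stage: apply
    dependsOn: plan
    # run only if the plan stage detected changes
    condition: eq(dependencies.plan.outputs['terraform_plan.plan.changes'], 'true')
    jobs:
      - job: terraform_apply
        steps:
          - script: terraform apply -auto-approve tfplan
```

In a real pipeline, the plan stage would also need to publish tfplan as an artifact so the apply stage can consume it.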
To automatically create a docs.md for the root module as well as any other module, this pipeline uses terraform-docs to commit the documentation directly during the pipeline run. This pipeline is meant to be used as a build-validation pipeline for pull requests.
Formats the terraform code according to best practices. Commits changes directly to the current branch. Meant to be used as a build validation pipeline in your pull requests. Official Docs here.
This is where the magic happens: this pipeline runs several tests against your terraform code and publishes the results in JUnit format to the pipeline. If there are any errors, the run fails.
This is not all - there are many tools for Terraform - but these are some of my favorites nevertheless. It's a great way to perform some level of automated testing as well as documentation generation.
This is a multi-part post - you can find all related posts here:
Each AWS account's root-user e-mail address must be unique; therefore, we create several shared mailboxes with a lot of aliases, as shown below:
In this example, we will have 11 shared mailboxes: one main mailbox called "aws@company.com" and ten mailboxes with 300 mail aliases each. The ten mailboxes forward all mails to "aws@company.com". Thus, all mails come together in a central mailbox. The AWS administrators get access to the shared mailbox "aws@company.com" and therefore have access to all root accounts. Each AWS account's root-user e-mail will be configured with a mail alias. All mail aliases and their root-mailbox info will be stored in a table of an Azure Table storage. We will go into more detail in the next and final post.
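Sketched in Exchange Online PowerShell, creating one of the forwarding mailboxes with its aliases could look like this. The names, the alias pattern and the forwarding target are illustrative - this is not the exact script from the repo:

```powershell
# Create one shared mailbox that forwards everything to the central aws mailbox
New-Mailbox -Shared -Name 'aws-01' -PrimarySmtpAddress 'aws-01@company.com'
Set-Mailbox -Identity 'aws-01' -ForwardingSmtpAddress 'aws@company.com'

# Add the e-mail aliases later used for the individual root-users
1..300 | ForEach-Object {
    Set-Mailbox -Identity 'aws-01' -EmailAddresses @{ Add = "smtp:aws-root-$_@company.com" }
}
```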
You can find the initial setup in the repository.
You need to set up the Azure resources first, otherwise the data will not be written to the table storage. You will find a Terraform deployment within the repo.
```powershell
$subscription_id = ""
```
The third and final post will go over the API for the AWS root-user management.
This is a multi-part post - you can find all related posts here:
I used Azure Active Directory as the central IAM, but this topic is valid for all IAM solutions and even a standalone AWS deployment. However, this post will not cover the actual setup of SSO and SCIM for AWS IAM Identity Center using Azure AD. You can find the official SSO and SCIM setup documentation here.
All the terms regarding AWS components like accounts, root-users, IAM users and so on will be quite confusing at first, but it gets better with a little time.
Depending on how your organization manages e-mail addresses, setting up and managing hundreds or even thousands of e-mail accounts can cause some major issues in your IT department. Most companies have a central e-mail solution, and technical users are supposed to have one purpose and one purpose only, even if they are only used for sending mails.
Some possible problems:
Some obvious challenges, or rather inconveniences, emerge regardless:
The main goal of this series is to setup the management of centrally managed e-mail addresses for the root-users.
Looking at the AWS account structure (below), you will see that each AWS account, even the root account, requires a root-user.
On the other hand, IAM users do not have to be unique and can also be created using SCIM from your central identity provider (Azure AD, for instance). They can even be created by SCIM as IAM users in the AWS root account and then be added as IAM users to child AWS accounts.
The diagram below shows this.
The solution has three main components:
The figure below shows the architecture; I will cover all of the components in the next two posts.
Unfortunately, it's not straightforward to get a list of all Azure AD role assignments, especially if you are using Privileged Identity Management (PIM).
First, we need the Microsoft Graph PowerShell SDK. Follow these steps.
Currently, to retrieve eligible role assignments, it's required to set the Microsoft Graph profile to beta. Also, this information can only be queried using Windows PowerShell.
The gist can either be found here or explained in detail below.
```powershell
Connect-MgGraph -Scopes RoleEligibilitySchedule.Read.Directory, RoleAssignmentSchedule.Read.Directory, CrossTenantInformation.ReadBasic.All, AuditLog.Read.All, User.Read.All
```
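With those scopes granted, the eligible and active PIM assignments can be queried roughly like this (a sketch using the beta profile of the Microsoft Graph PowerShell SDK, as mentioned above; see the gist for the full script):

```powershell
# Switch to the beta endpoint - required for the PIM data, as noted above
Select-MgProfile -Name beta

# Eligible (PIM) role assignments
$eligible = Get-MgRoleManagementDirectoryRoleEligibilityScheduleInstance -All

# Active role assignments
$active = Get-MgRoleManagementDirectoryRoleAssignmentScheduleInstance -All

$eligible | Select-Object RoleDefinitionId, PrincipalId
```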
I hope this makes your life a little simpler 🙂
Using the dynamic-block can be a little much at first, because it's an advanced topic. However, once you have gotten used to it, it's a blast to work with.
As shown in the Azure VNet resource below, each subnet requires its own subnet block.
```hcl
resource "azurerm_virtual_network" "vnet" {
```
Using the dynamic-block, you can write a module that creates, in this case, Azure Virtual Networks (VNets) including their subnet configuration, with the subnets provided as a list.
```hcl
# variables
```
So, how does this work exactly?
As shown above, instead of providing the block name for the subnet, we added a dynamic-block named subnet. The naming of the dynamic-block is important, as it must be named like the block the resource expects - in this case: subnet. Each dynamic-block has a for_each statement. This is the list of blocks you want to create, typically provided in the form of a variable. Lastly, the actual properties are provided in the content block. You address each property using the following notation: <name of block>.value.<name of property> - in this case, subnet.value.name and so on.
It helps to provide this information in the form of a variable, as shown above, and to put this resource into a module.
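Put together, a minimal version of such a module could look like this. This is a sketch with illustrative names and values; the inline subnet block of azurerm_virtual_network takes name and address_prefix:

```hcl
# variables - the shape of the subnet list the module expects
variable "subnets" {
  type = list(object({
    name           = string
    address_prefix = string
  }))
  # example value:
  # [{ name = "subnet1", address_prefix = "10.0.1.0/24" }]
}

# resource - one subnet block is generated per list entry
resource "azurerm_virtual_network" "vnet" {
  name                = "example-vnet"
  location            = "westeurope"
  resource_group_name = "example-rg"
  address_space       = ["10.0.0.0/16"]

  dynamic "subnet" {
    for_each = var.subnets
    content {
      name           = subnet.value.name
      address_prefix = subnet.value.address_prefix
    }
  }
}
```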
You can even nest several dynamic-blocks within each other. One example of this is the Azure Firewall Rule Collection Group (hate this name…).
```hcl
# variables
```
As you can see in the application_rule_collection section, there are three nested levels of the dynamic-block, and there is no limit on how many you can nest together.
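Abbreviated, the nesting inside azurerm_firewall_policy_rule_collection_group looks roughly like this (an illustrative excerpt with made-up variable names, not the full resource):

```hcl
# first level: one application_rule_collection block per list entry
dynamic "application_rule_collection" {
  for_each = var.application_rule_collections
  content {
    name     = application_rule_collection.value.name
    priority = application_rule_collection.value.priority
    action   = application_rule_collection.value.action

    # second level: the rules of each collection
    dynamic "rule" {
      for_each = application_rule_collection.value.rules
      content {
        name              = rule.value.name
        source_addresses  = rule.value.source_addresses
        destination_fqdns = rule.value.destination_fqdns

        # third level: the protocols of each rule
        dynamic "protocols" {
          for_each = rule.value.protocols
          content {
            type = protocols.value.type
            port = protocols.value.port
          }
        }
      }
    }
  }
}
```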
I did put together a repo with an example deployment, you can find it here.
I hope this was helpful!
Best practice is to create and use dedicated administrative accounts to manage Azure and Microsoft 365. These accounts should be authorized by an RBAC concept and PIM (Privileged Identity Management) and should not have a mailbox (Exchange Online) license, to minimize the attack surface.
However, there is often a requirement that notifications, e.g. from PIM or other alerts, must be sent to the user.
To implement this, you can use the Exchange "+" format (plus addressing).
The following example shows the functionality and configuration of the feature.
Our IT employee Alex Wilber "AlexW@M365x57.OnMicrosoft.com" has a user account in the company with a corresponding Microsoft 365 license and a mailbox.
Furthermore, our IT employee Alex Wilber has another Azure AD admin account "adm.AlexW@m365x57487439.onmicrosoft.com".
This admin user "adm.AlexW@m365x57487439" has no licenses assigned, as described, so no mailbox is provided. Also, in this example, the "Global Administrator" role was assigned to the user via PIM.
To forward the notifications from our admin account "adm.AlexW@m365x57487439.onmicrosoft.com" to our user's primary mailbox "AlexW@M365x57487439.OnMicrosoft.com", we configure the admin account in Azure AD as below.
Open the user administration in Azure AD and edit the corresponding admin user. If you try to add the email address of your default user ("AlexW@M365x57.OnMicrosoft.com"), you will get an error message ("Update would cause the user to have a proxy address already present on another directory object.").
At this point the plus addressing email format is used. Extend the email address to which the mails will be forwarded with, for example, "+ADM".
Email of the admin account: "AlexW+ADM@M365x57487439.OnMicrosoft.com"
Exchange Online resolves the email address "AlexW+ADM@M365x57487439.OnMicrosoft.com" without the "+" and the associated tag ("+ADM"), so that the notification is sent to AlexW@M365x57487439.OnMicrosoft.com.
If we then enable the PIM role Global Administrator of the admin account "adm.AlexW@m365x57487439.onmicrosoft.com", we will receive the notification in our user mailbox.
In the past, it was possible for email addresses to contain "+" characters. But Microsoft has enabled plus addressing by default in all Exchange Online organizations at the beginning of 2022.
This configuration can be checked and customized using Exchange Online PowerShell, for example:

```powershell
# Check the current plus-addressing setting of the organization
Get-OrganizationConfig | Format-List *PlusAddress*
```
For this purpose, I have collected the relevant ports and URLs for Defender for Endpoint, Microsoft Defender Antivirus, Azure Arc Agent, Microsoft Defender SmartScreen, Azure Monitor Agent in the table below.
Usage | Region | Subcategory | Port | Url |
---|---|---|---|---|
Microsoft Defender for Endpoint | WW | CRL | 80 | crl.microsoft.com |
Microsoft Defender for Endpoint | WW | CRL | 80 | ctldl.windowsupdate.com |
Microsoft Defender for Endpoint | WW | CRL | 80 | www.microsoft.com/pkiops/* |
Microsoft Defender for Endpoint | WW | CRL | 80 | www.microsoft.com/pki/* |
Microsoft Defender for Endpoint | WW | Common | 443 | events.data.microsoft.com |
Microsoft Defender for Endpoint | WW | Common | 443 | *.wns.windows.com |
Microsoft Defender for Endpoint | WW | Common | 443 | login.microsoftonline.com |
Microsoft Defender for Endpoint | WW | Common | 443 | login.live.com |
Microsoft Defender for Endpoint | WW | Common | 443 | settings-win.data.microsoft.com |
Microsoft Defender for Endpoint | WW | Common (Mac/Linux) | 443 | x.cp.wd.microsoft.com |
Microsoft Defender for Endpoint | WW | Common (Mac/Linux) | 443 | cdn.x.cp.wd.microsoft.com |
Microsoft Defender for Endpoint | WW | Common (Mac/Linux) | 443 | officecdn-microsoft-com.akamaized.net |
Microsoft Defender for Endpoint | WW | Common (Linux) | 443 | packages.microsoft.com |
Microsoft Defender for Endpoint | WW | Microsoft Defender for Endpoint | 443 | login.windows.net |
Microsoft Defender for Endpoint | WW | Microsoft Defender for Endpoint | 443 | *.security.microsoft.com |
Microsoft Defender for Endpoint | WW | Microsoft Defender for Endpoint | 443 | .blob.core.windows.net/networkscannerstable/ |
Microsoft Defender for Endpoint | WW | Security Management | 443 | enterpriseregistration.windows.net |
Microsoft Defender for Endpoint | WW | Security Management | 443 | *.dm.microsoft.com |
Microsoft Defender for Endpoint | WW | Microsoft Monitoring Agent (MMA) | 443 | *.ods.opinsights.azure.com |
Microsoft Defender for Endpoint | WW | Microsoft Monitoring Agent (MMA) | 443 | *.oms.opinsights.azure.com |
Microsoft Defender for Endpoint | WW | Microsoft Monitoring Agent (MMA) | 443 | *.blob.core.windows.net |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | unitedstates.x.cp.wd.microsoft.com |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | us.vortex-win.data.microsoft.com |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | us-v20.events.data.microsoft.com |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | winatp-gw-cus.microsoft.com |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | winatp-gw-eus.microsoft.com |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | winatp-gw-cus3.microsoft.com |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | winatp-gw-eus3.microsoft.com |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | automatedirstrprdcus.blob.core.windows.net |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | automatedirstrprdeus.blob.core.windows.net |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | automatedirstrprdcus3.blob.core.windows.net |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | automatedirstrprdeus3.blob.core.windows.net |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | ussus1eastprod.blob.core.windows.net |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | ussus2eastprod.blob.core.windows.net |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | ussus3eastprod.blob.core.windows.net |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | ussus4eastprod.blob.core.windows.net |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | wsus1eastprod.blob.core.windows.net |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | wsus2eastprod.blob.core.windows.net |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | ussus1westprod.blob.core.windows.net |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | ussus2westprod.blob.core.windows.net |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | ussus3westprod.blob.core.windows.net |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | ussus4westprod.blob.core.windows.net |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | wsus1westprod.blob.core.windows.net |
Microsoft Defender for Endpoint | US | Microsoft Defender for Endpoint US | 443 | wsus2westprod.blob.core.windows.net |
Microsoft Defender for Endpoint | EU | Microsoft Defender for Endpoint EU | 443 | europe.x.cp.wd.microsoft.com |
Microsoft Defender for Endpoint | EU | Microsoft Defender for Endpoint EU | 443 | eu.vortex-win.data.microsoft.com |
Microsoft Defender for Endpoint | EU | Microsoft Defender for Endpoint EU | 443 | eu-v20.events.data.microsoft.com |
Microsoft Defender for Endpoint | EU | Microsoft Defender for Endpoint EU | 443 | winatp-gw-neu.microsoft.com |
Microsoft Defender for Endpoint | EU | Microsoft Defender for Endpoint EU | 443 | winatp-gw-weu.microsoft.com |
Microsoft Defender for Endpoint | EU | Microsoft Defender for Endpoint EU | 443 | winatp-gw-neu3.microsoft.com |
Microsoft Defender for Endpoint | EU | Microsoft Defender for Endpoint EU | 443 | winatp-gw-weu3.microsoft.com |
Microsoft Defender for Endpoint | EU | Microsoft Defender for Endpoint EU | 443 | automatedirstrprdneu.blob.core.windows.net |
Microsoft Defender for Endpoint | EU | Microsoft Defender for Endpoint EU | 443 | automatedirstrprdweu.blob.core.windows.net |
Microsoft Defender for Endpoint | EU | Microsoft Defender for Endpoint EU | 443 | automatedirstrprdneu3.blob.core.windows.net |
Microsoft Defender for Endpoint | EU | Microsoft Defender for Endpoint EU | 443 | automatedirstrprdweu3.blob.core.windows.net |
Microsoft Defender for Endpoint | EU | Microsoft Defender for Endpoint EU | 443 | usseu1northprod.blob.core.windows.net |
Microsoft Defender for Endpoint | EU | Microsoft Defender for Endpoint EU | 443 | wseu1northprod.blob.core.windows.net |
Microsoft Defender for Endpoint | EU | Microsoft Defender for Endpoint EU | 443 | usseu1westprod.blob.core.windows.net |
Microsoft Defender for Endpoint | EU | Microsoft Defender for Endpoint EU | 443 | wseu1westprod.blob.core.windows.net |
Microsoft Defender for Endpoint | UK | Microsoft Defender for Endpoint UK | 443 | unitedkingdom.x.cp.wd.microsoft.com |
Microsoft Defender for Endpoint | UK | Microsoft Defender for Endpoint UK | 443 | uk.vortex-win.data.microsoft.com |
Microsoft Defender for Endpoint | UK | Microsoft Defender for Endpoint UK | 443 | uk-v20.events.data.microsoft.com |
Microsoft Defender for Endpoint | UK | Microsoft Defender for Endpoint UK | 443 | winatp-gw-uks.microsoft.com |
Microsoft Defender for Endpoint | UK | Microsoft Defender for Endpoint UK | 443 | winatp-gw-ukw.microsoft.com |
Microsoft Defender for Endpoint | UK | Microsoft Defender for Endpoint UK | 443 | automatedirstrprduks.blob.core.windows.net |
Microsoft Defender for Endpoint | UK | Microsoft Defender for Endpoint UK | 443 | automatedirstrprdukw.blob.core.windows.net |
Microsoft Defender for Endpoint | UK | Microsoft Defender for Endpoint UK | 443 | ussuk1southprod.blob.core.windows.net |
Microsoft Defender for Endpoint | UK | Microsoft Defender for Endpoint UK | 443 | wsuk1southprod.blob.core.windows.net |
Microsoft Defender for Endpoint | UK | Microsoft Defender for Endpoint UK | 443 | ussuk1westprod.blob.core.windows.net |
Microsoft Defender for Endpoint | UK | Microsoft Defender for Endpoint UK | 443 | wsuk1westprod.blob.core.windows.net |
Microsoft Defender Antivirus | WW | UTC | 443 | vortex-win.data.microsoft.com |
Microsoft Defender Antivirus | WW | MU / WU | 443 | *.update.microsoft.com |
Microsoft Defender Antivirus | WW | MU / WU | 443 | *.delivery.mp.microsoft.com |
Microsoft Defender Antivirus | WW | MU / WU | 443 | *.windowsupdate.com |
Microsoft Defender Antivirus | WW | MU / WU | 443 | go.microsoft.com |
Microsoft Defender Antivirus | WW | MU / WU | 443 | definitionupdates.microsoft.com |
Microsoft Defender Antivirus | WW | MU / WU | 443 | https://www.microsoft.com/security/encyclopedia/adlpackages.aspx |
Microsoft Defender Antivirus | WW | MU (ADL) | 443 | *.download.windowsupdate.com |
Microsoft Defender Antivirus | WW | MU (ADL) | 443 | *.download.microsoft.com |
Microsoft Defender Antivirus | WW | MU (ADL) | 443 | fe3cr.delivery.mp.microsoft.com/ClientWebService/client.asmx |
Microsoft Defender Antivirus | WW | Symbols | 443 | https://msdl.microsoft.com/download/symbols |
Microsoft Defender Antivirus | WW | MAPS | 443 | *.wdcp.microsoft.com |
Microsoft Defender Antivirus | WW | MAPS | 443 | *.wd.microsoft.com |
Microsoft Defender SmartScreen | WW | Reporting and Notifications | 443 | *.smartscreen-prod.microsoft.com |
Microsoft Defender SmartScreen | WW | Reporting and Notifications | 443 | *.smartscreen.microsoft.com |
Microsoft Defender SmartScreen | WW | Reporting and Notifications | 443 | *.checkappexec.microsoft.com |
Microsoft Defender SmartScreen | WW | Reporting and Notifications | 443 | *.urs.microsoft.com |
Azure Arc Agent | WW | Used to resolve the download script during installation | 443 | aka.ms |
Azure Arc Agent | WW | Used to download the Windows installation package | 443 | download.microsoft.com |
Azure Arc Agent | WW | Used to download the Linux installation package | 443 | packages.microsoft.com |
Azure Arc Agent | WW | Azure Active Directory | 443 | login.windows.net |
Azure Arc Agent | WW | Azure Active Directory | 443 | login.microsoftonline.com |
Azure Arc Agent | WW | Azure Active Directory | 443 | pas.windows.net |
Azure Arc Agent | WW | Azure Resource Manager - to create or delete the Arc server resource | 443 | management.azure.com |
Azure Arc Agent | WW | Metadata and hybrid identity services | 443 | *.his.arc.azure.com |
Azure Arc Agent | WW | Extension management and guest configuration services | 443 | *.guestconfiguration.azure.com |
Azure Arc Agent | WW | Notification service for extension and connectivity scenarios | 443 | guestnotificationservice.azure.com, *.guestnotificationservice.azure.com |
Azure Arc Agent | WW | Notification service for extension and connectivity scenarios | 443 | azgn*.servicebus.windows.net |
Azure Arc Agent | WW | For Windows Admin Center and SSH scenarios | 443 | *.servicebus.windows.net |
Azure Arc Agent | WW | Download source for Azure Arc-enabled servers extensions | 443 | *.blob.core.windows.net |
Azure Arc Agent | WW | Agent telemetry | 443 | dc.services.visualstudio.com |
Log Analytics Agent/Microsoft Monitoring Agent | WW | | 443 | *.ods.opinsights.azure.com |
Log Analytics Agent/Microsoft Monitoring Agent | WW | | 443 | *.oms.opinsights.azure.com |
Log Analytics Agent/Microsoft Monitoring Agent | WW | | 443 | *.blob.core.windows.net |
Log Analytics Agent/Microsoft Monitoring Agent | WW | | 443 | *.azure-automation.net |
Azure Monitor Agent | WW | Access control service | 443 | global.handler.control.monitor.azure.com |
Azure Monitor Agent | WW | Fetch data collection rules for specific machine | 443 | *.handler.control.monitor.azure.com |
This change of the mailbox language can also occur during migration. For example, this problem can happen during a tenant-to-tenant migration of Exchange Online, so it is recommended to check and modify the mailbox language after a successful mailbox migration.
The customization of the mailbox language can only be done using PowerShell. After the modification with the Set-MailboxRegionalConfiguration cmdlet, the configured language is automatically displayed in Outlook.
If this does not happen immediately, the update can be forced on the client by starting Outlook via Windows Run with the following parameter: "outlook /resetfoldernames".
To change the mailbox language, the following PowerShell command is used (identity and language tag below are example values):

```powershell
Set-MailboxRegionalConfiguration -Identity "AlexW" -Language "de-DE"
```
To set the mailbox configuration, the Identity parameter is needed to specify which object should be changed. Not only the name can be used for Identity; the following values are available to identify the mailbox.

The date format can be set manually or to $null; in that case, the defaults of the configured language will be used. For example:

```powershell
# Example values; adjust identity and format to your needs
Set-MailboxRegionalConfiguration -Identity "AlexW" -DateFormat $null
```
With the switch "-LocalizeDefaultFolderName", the default folder names (Inbox, Sent Items, etc.) are localized to the configured language.
For the languages, the corresponding language tags are used. I have collected an extract of the possible language tags below.
Language | Geographic area | Language tag |
---|---|---|
Arabic | Saudi Arabia | ar-SA |
Bulgarian | Bulgaria | bg-BG |
Chinese (Simplified) | People's Republic of China | zh-CN |
Chinese | Taiwan | zh-TW |
Croatian | Croatia | hr-HR |
Czech | Czech Republic | cs-CZ |
Danish | Denmark | da-DK |
Dutch | Netherlands | nl-NL |
English | United States | en-US |
Finnish | Finland | fi-FI |
French | France | fr-FR |
German | Germany | de-DE |
Greek | Greece | el-GR |
Hebrew | Israel | he-IL |
Hindi | India | hi-IN |
Hungarian | Hungary | hu-HU |
Indonesian | Indonesia | id-ID |
Italian | Italy | it-IT |
Japanese | Japan | ja-JP |
Korean | Korea | ko-KR |
Latvian | Latvia | lv-LV |
Lithuanian | Lithuania | lt-LT |
Malay | Malaysia | ms-MY |
Norwegian (Bokmål) | Norway | nb-NO |
Polish | Poland | pl-PL |
Portuguese | Brazil | pt-BR |
Portuguese | Portugal | pt-PT |
Romanian | Romania | ro-RO |
Slovak | Slovakia | sk-SK |
Slovenian | Slovenia | sl-SI |
Spanish | Spain | es-ES |
Swedish | Sweden | sv-SE |
Thai | Thailand | th-TH |
In a lot of migration scenarios you have a large number of users that you want to check or customize. To do this, you can import the users from a CSV list and modify them in a PowerShell loop, for example:

```powershell
# Sketch - assumes a CSV with a column "UserPrincipalName"; adjust names and values as needed
Import-Csv -Path .\users.csv | ForEach-Object {
    Set-MailboxRegionalConfiguration -Identity $_.UserPrincipalName -Language "de-DE" -LocalizeDefaultFolderName
}
```
If we simply want to send a mail to the people in a Person (multi-select) field via a Power Automate flow, this unfortunately does not work, because the Outlook connector does not support it and only expects email addresses in semicolon-separated notation.
To realize this, some intermediate steps are necessary: selecting the email addresses from the Person (multi-select) field and formatting them for sending mail notifications.
```json
"Responsible": [
```
The expression Item()['Email'] is added to access the email addresses of the users. The formatting of the email addresses is then complete, and the result can be used with the Outlook connector, for example to inform users about the status of SharePoint list items.
This is a multi-part blog post with the following parts:
Microsoft 365 contains a large number of services, so it is important to define at this point what is to be migrated, because this will have an impact on the runtime, complexity and selection of the migration tool. In some scenarios, companies have only been using Exchange Online up to now, and in others Outlook, Teams, Planner, Groups, etc. are being used.
Based on my experience, this planning should also be coordinated with the responsible business stakeholders in the company and communicated to the employees. During a migration / integration, IT already has a large number of tasks, and missing data or information in the migration process can result in employees' displeasure and additional workload for IT.
The first step is to identify which services are used in the tenant to be migrated in order to determine whether and to what scope of data and settings need to be migrated.
After the evaluation and analysis of the use of the listed Office 365 services, a first estimate of time and effort is now possible. Depending on the service used, a matrix can be created in the following step and it can be defined whether data migration is necessary for the service used or whether user communication is also necessary.
There is currently no tool on the market that can migrate all services 1-to-1 without any problems. Most third-party tools specialize in core services and offer comprehensive and good migration options.
Core services are usually Exchange Online (mailboxes, settings, contacts, …) and SharePoint / Teams (sites, groups, Planner, teams & channels). Tools such as Quest On Demand offer a sufficient range of functions for this.
However, other services should not be forgotten during migration planning, such as Power BI, Forms, Stream, Bookings, etc., which can cause significant problems in the business after a tenant migration. Therefore, a detailed definition of the functional scope and migration scope per service is recommended in most cases. In the following blog posts I will describe the different services like Exchange Online and show migration scenarios and tools.
Due to service changes (new features or deactivations), the list may not match the current services. Please check the completeness of the Office 365 services before use.
During step-by-step implementation, SharePoint lists or M365 lists are generally created and the automation options integrated until the application meets the specified requirements and can be put into production. At this point, the existing data from Excel must mostly be transferred to the SharePoint Online or Microsoft Lists.
Some ways to migrate the data from the Excel to the SharePoint Online list are described in the following sections.
A simple option is to copy and paste the data into the SharePoint Online list.
To do this, the list must be opened in SharePoint in Quick Edit Mode / Grid View. Then simply copy the data (without the header) from the Excel file and paste it into the first row of the SharePoint Online list.
Another possibility is to create a new list via the SharePoint Online / Microsoft Lists portal and select "From Excel" in the menu during creation, or alternatively upload the file from within Excel to SharePoint. However, the disadvantage here is that a new list is created, and a possibly prepared and customized SharePoint Online list cannot be used.
Furthermore, Power Automate (Microsoft Flow) can be used to transfer Excel lists / tables into SharePoint Online lists. For this purpose, you can use the connectors Excel Online Business and SharePoint available in Power Automate.
In other words, you check and update the existing Excel spreadsheet and save it to OneDrive for Business or SharePoint Online, so that the Power Automate workflow can access it.
In the next step, you create a new Power Automate flow, use for example a manual "Instant Cloud Flow", and add the Excel Online (Business) connector. As action, you use the function "List rows present in a table" and select the Excel file to be used, including the table.
To import a table into a SharePoint Online list via Power Automate, the table in the Excel file must be formatted as a table. First make sure that the table is correctly formatted, as shown in the following screenshot.
In the next task, go through the columns step by step and check the correct formatting and assignment, because for some formats it is not so easy to map fields like "True / False".
For example, if the SharePoint Online list has a field of type "Yes/No" and we want to take the values from the Excel file (Field1) and use the column mapping for this, we will run into an error on execution, so we have to work with a condition.
For this, you create a condition within the flow and configure it with your True/False field (Field1) from Excel and set the condition ("Field1" is equal to "True"). In the next step you create the Create item SharePoint action and configure the fields: if yes, "Field1=Yes", and if no, "Field1=No".
This way, you can transfer the existing True / False fields directly into SharePoint Online fields and use the functions in the cloud.
Another challenge is that fields of the type "Number" are not selectable and cannot be assigned directly.
To import them anyway you can use an expression and define the field by yourself (Field4).
```
items('Apply_to_each')?['Field4']
```
After adding and executing the flow, values from the Type Number are now also correctly imported into the SharePoint list.
This short post is meant to give an overview of the import of SharePoint lists and data from Excel. There are of course other possibilities, such as third-party tools, that can be used to import Excel data. Taking a look at existing tools like Power Automate can't hurt, as they already meet the requirements in many cases.
This is a multi-part article with the following parts:
This section is about the advanced view of migration scenarios, as there are many more indicators that have an impact on tenant migration. If we take a look at the topic of Office 365 tenant migration, in most cases there is an entire hybrid infrastructure. In this context of hybrid infrastructure there are key components like identity management, device management, exchange hybrid, data management & application management.
These topics are individual sub-projects and should be analyzed, reviewed, and considered based on the strategy established in Office 365 Tenant to Tenant Migration Fundamentals Part 1.
Let's go to the first point and, in my mind, one of the most important: 'Identity Management'.
If identity management is not planned and implemented correctly, it is not possible to ensure efficient operations.
The first question that arises is what the existing Active Directory & Azure Active Directory structures look like.
If, as described in the first scenario, only Azure AD users, groups and devices are used, the migration in the identity part is relatively simple, because only users have to be created in the target tenant and there are no significant links / synchronizations from on-premises systems.
In the Hybrid Identity scenario, migration can become much more complex as there are a number of dependencies that need to be considered.
First of all, as described in Part 1 of the blog series, the target planning and the future strategic IT infrastructure should be completed.
The following options can be considered and are eligible for Identity / Tenant Migration.
One option is to transfer users to the target Azure Active Directory and create them as Cloud Only accounts in the Azure Active Directory of the target tenant. Furthermore, to include / reset the devices in Intune / Auto-Pilot and therefore create a Cloud Only infrastructure.
Advantages | Disadvantages |
---|---|
Cloud Only Management | Separate double / management of identities |
Azure AD Integration & Intune Management | No centralized management |
Minimization of complexity | Access restrictions to legacy systems |
Legacy systems replacement | LDAP / Kerberos no longer usable |
Cost savings due to elimination of local identity systems | Application access and identity synchronization |
Active Directory integration not available | |
Software Deployment |
Another option is to create the users & groups in the target environment in the local Active Directory and then provision them in the Azure Active Directory using AD Connect. This does not allow a complete, but in most cases a clean, integration / transfer of a tenant into an existing hybrid infrastructure.
Advantages | Disadvantages |
---|---|
Central management in Active Directory | Local Active Directory management |
Active Directory Integration | Infrastructure complexity |
Use of LDAP / Kerberos | Infrastructure costs (Active Directory, AD Connect etc.) |
Device Management through Intune or OnPremise systems like SCCM | Azure Active Directory Features (Dynamic AssignmentâŚ) |
Legacy application integration |
An additional way of considering a hybrid / synchronized identity would be to connect the local Active Directory (source) to the AD Connect of the target environment to minimize the impact on the existing source structure (OnPremise Active Directory).
In this scenario, synchronization requirements based on supported Azure AD Connect Sync topologies may need to be checked in advance. More information about this at: https://docs.microsoft.com/de-de/azure/active-directory/hybrid/plan-connect-topologies
Advantages | Disadvantages |
---|---|
Minimal impact on local infrastructures (Active Directories) | Authentication |
Active Directory Integration | Increased management effort |
Use of LDAP / Kerberos | If necessary complex sync rules |
Existing permissions can still be used | Network connection of the Active Directory |
User / group objects can still be used | Infrastructure costs & operation |
On- & offboarding processes can still be used | Sync duplicates & sync errors |
Another option can be that a new tenant / Azure Active Directory is set up and both tenants are transferred to a new infrastructure. This can be the case, for example, with name changes, since the previous name may no longer be used; the name is usually used as the tenant name (SharePoint) and cannot be changed.
]]>There is no universal blueprint for migration and merging. It always depends on the requirements and the future strategy. The listed options are only a few excerpts from the possibilities that exist. Depending on the requirements or new features etc. these can be edited and adapted. This list and the blog entry should serve as a basis and impulse to think about the best possible approach to ensure an efficient migration in the long term.
But also in the Azure cloud, with Logic Apps and Azure Functions, you should look at the cost situation in advance, so this should also be part of the decision.
Therefore, here is an example of how the Graph API can be used in Azure Functions to get all group members of a group and then remove deactivated users from the group.
In the first step we get the corresponding group members from the Graph API (/groups). The Graph API returns only 100 entries by default, so a simple API request is not enough and we have to work with the NextLink.
$responseMember = Invoke-RestMethod -Method Get -Uri https://graph.microsoft.com/v1.0/groups/{GroupId}/members/ -Headers $graphHeader -Body $body
In the next step we create a while loop and let it run until all data is contained in our specified variable.
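The code block for the loop did not survive the extraction, so here is a minimal sketch of such a paging loop, assuming the first response is already stored in $responseMember (as above) and the members are collected in a variable called $aadGroupMember; the variable names are illustrative:

```powershell
# Collect the first page of members, then follow @odata.nextLink
# until the Graph API returns no further pages
$aadGroupMember = @()
$aadGroupMember += $responseMember.value

while ($null -ne $responseMember.'@odata.nextLink') {
    $responseMember = Invoke-RestMethod -Method Get -Uri $responseMember.'@odata.nextLink' -Headers $graphHeader
    $aadGroupMember += $responseMember.value
}
```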
If we now set a count on our variable, the value should contain the number of current members of the group. This can be verified e.g. by calling the group in the Azure Portal (Overview).
($aadGroupMember).count
In the next step we use a foreach loop, and within the loop we check the status of each user; if it is disabled, the user is removed from the group.
foreach ($aadmember in $aadGroupMember) {
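The loop body is cut off in the extract; a hedged sketch of what it could look like, assuming the member objects contain the accountEnabled property (requested via $select) and $groupId holds the group's object id - both are assumptions, not code from the repo:

```powershell
foreach ($aadmember in $aadGroupMember) {
    # accountEnabled is $false for deactivated users
    if ($aadmember.accountEnabled -eq $false) {
        # Remove the disabled user from the group via the Graph API
        $uri = "https://graph.microsoft.com/v1.0/groups/$groupId/members/$($aadmember.id)/`$ref"
        Invoke-RestMethod -Method Delete -Uri $uri -Headers $graphHeader
    }
}
```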
In this GitHub repository, I have created some templates for GitHub Actions and Azure Pipelines to start and stop AKS and ADX clusters. You can use these templates in your own pipelines to start and stop your services based on cron triggers/schedules.
The repo only contains the templates, you need to write the calling pipelines yourself. You can check my blog series on GitHub Actions and Azure Pipelines here.
The first thing most consultants say during cloud evaluations, onboardings, and shifting workloads is: turn off your VMs when you don't need them. While this is a valid point, the same goes for services based on VMs, like AKS and ADX.
Personally, I like the DevOps approach better: deploy the infrastructure, perform some kind of task, destroy the infrastructure when it is not required anymore. However, this can be a little too much overhead when you have several developers or want to run several tests at once.
Dev/Test/QA environments often run 24/7, and in the cloud, this produces a lot of consumption. I found that AKS and ADX clusters for such environments often don't need to run that much; during business hours is enough.
I created pipeline templates for GitHub Actions and Azure Pipelines that you can use to start and stop your AKS/ADX cluster entirely, if you want to.
Both are based on a simple PowerShell script and can be found in this GitHub repository.
This is a multi part article - find the other parts at the links below:
Templates are great for reusing pipelines and avoiding redundant work. You can declare necessary steps and reuse them in several pipelines. I love this feature, and it was the one feature that stopped me from moving all my pipelines to GitHub Actions. But this has been taken care of and we are good to go.
In Azure Pipelines, templates can be used for all scopes: steps, jobs, stages. You can declare:
From a folder structure perspective, I create a folder templates within the .azuredevops folder in the root of the repository. This folder contains three sub-folders:
and each folder contains the actual template files.
If you want to pass parameters to templates, you have to use the parameters key word and declare them as shown below.
Here is an example with steps (.azuredevops/templates/steps/copy_files_build_image.yaml):
parameters:
You can then use the template within the pipeline:
(Pipelines are stored in the .azuredevops/pipelines/ directory)
trigger: none
You can find more examples in my GitHub repo here.
GitHub Actions has a slightly different approach, but all in all, it accomplishes the same result. The official name for this feature is Reusable Workflows and you can read more about it here.
The terms used by GitHub are as follows:
I use the term template to refer to the called workflow.
The only difference between the two is the trigger. The template must have the following trigger:
on:
Here you can define the parameters you want to pass to the template:
on:
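Putting both together, a template could look roughly like this (the file name, input and secret names are made up for illustration):

```yaml
# .github/workflows/build-template.yml - the called workflow (template)
name: build-template

on:
  workflow_call:
    inputs:
      environment:
        required: true
        type: string
    secrets:
      deploy_token:
        required: true

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Target environment is ${{ inputs.environment }}"
```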
Another difference is that you can differentiate between regular parameters (or inputs) and secrets. Secrets are treated differently, because their content will never be printed to the logs if they are referenced in outputs.
You can use the template with the following example:
1 | ... |
The uses clause references the file as follows:
<organization name>/<repository name>/<entire path to the file>@<branch name>
The with clause contains the inputs and the secrets part the secrets.
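A caller workflow using such a template could be sketched as follows (organization, repository and secret names are placeholders):

```yaml
# Caller workflow - the job references the template via "uses"
jobs:
  call-build:
    uses: my-org/my-repo/.github/workflows/build-template.yml@main
    with:
      environment: production
    secrets:
      deploy_token: ${{ secrets.DEPLOY_TOKEN }}
```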
You have to write entire jobs in a template and also create a job to call it within the caller workflow, which can be a bit confusing. Also, you cannot do something like in Azure Pipelines, where you can call a template within a template. I think this makes it a little cleaner, because of less clutter.
I hope you now have a better understanding of GitHub Actions and how it differs from Azure Pipelines. Personally, I like GitHub Actions a little better, just because there is less clutter overall and this makes it cleaner. Also, the possibility to automate tasks related to issues and projects is really nice.
Of course, I wasn't able to cover every aspect of the migration, but this should get you started.
GitHub offers a way to migrate from Azure DevOps to GitHub using the GitHub CLI and another CLI - ado2gh. However, this tool only migrates repos and boards and can point the pipelines to use GitHub instead of Azure Repos. If you want to switch entirely to GitHub, you have to rewrite the pipelines yourself.
I wrote some examples, check out the following GitHub repository.
]]>This is a multi part article - find the other parts at the links below:
Deployments in Azure Pipelines and GitHub Actions are treated a little differently than 'regular' CI pipelines, because they can reference environments. The concept of environments is available in both tools; they cover a lot of the same things but are also a little different. I will focus on the approval part for the environments.
...
The entire deployment schema can be found here.
Deployments do cover much more than just environments in Azure Pipelines. You can configure canary deployments, different hooks to react to failed deployments, run pre-deployment tasks and so forth.
Environments are created at the repository level, in organization repositories to be precise.
Environments allow the configuration of two parts, protection policies and secrets. Secrets are mentioned in more detail in the Secrets / Credentials section of this post. Protection policies allow the configuration of up to six approvers (user/teams) and to set a wait timer.
...
During a run, it looks like this:
You can read more about the use of environments here.
Well, variables in CI/CD environments are a bit of a rabbit hole, because there is a lot built in and you can also set them yourself or create them with a step. There are environment variables, system variables and user-defined variables. I cannot go over all of them, but I will go into more detail about how you can set and consume them, and in what way.
When you take a look at the docs you can find all of them.
As shown in the Secrets / Credentials section of this post, you can define variables for each pipeline in the portal. I would recommend using this only if you have to, because it can be hard to debug since it is not part of the pipeline definition.
The better way is to set the variables as part of the pipeline definition:
...
or:
...
You can even use conditions to set variables. For instance, based on the branch:
...
This can be very convenient. The value variables['Build.SourceBranchName'] is a reference to a build variable.
Now, you can pass the variable to a step:
...
Keep in mind, variables are referenced with the following syntax: $(<variable name>)
You can create environment variables within scripts and use them in later steps:
...
Here, you also reference them with the following syntax: $(<variable name>)
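As a small illustration, a variable set in one script step via the task.setvariable logging command can be consumed in a later step of the same job (the variable name is arbitrary):

```yaml
steps:
  # First step: create the variable from within a script
  - script: echo "##vso[task.setvariable variable=buildNumber]1.2.3"
    displayName: Set a variable in a script
  # Later step: consume it with the $(<variable name>) syntax
  - script: echo "The build number is $(buildNumber)"
    displayName: Use the variable
```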
Artifacts can be used to publish software packages (NuGet, NPM, Maven, ...) or pass build artifact files (e.g. .exe, .jar, ... files) to other jobs in the pipeline - you can build a .jar file in the build step of your pipeline and consume it during the build of a Docker container.
I will focus on the part of passing artifacts to other jobs of the build pipeline.
Azure Pipelines has a step for creating (publishing) an artifact and one for consuming it.
The example below is from the Microsoft Docs and creates a .txt file and publishes the artifact called drop.
...
...
...
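Condensed, the two steps from the docs example could be sketched like this (file and job names are illustrative):

```yaml
jobs:
  - job: Build
    steps:
      # Create a file and publish it as the artifact "drop"
      - script: echo "hello" > $(Build.ArtifactStagingDirectory)/output.txt
      - publish: $(Build.ArtifactStagingDirectory)
        artifact: drop
  - job: Deploy
    dependsOn: Build
    steps:
      # Download the artifact from the current run and use the file
      - download: current
        artifact: drop
      - script: cat $(Pipeline.Workspace)/drop/output.txt
```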
In some cases, it makes sense to create a zip archive first and publish only the zip file.
Find the full reference of Azure Pipeline Artifacts here.
GitHub has a very similar approach. For instance, you create a .jar file during the build and publish it as an artifact for the next job to consume. It also gets a name. If you publish multiple artifacts during a build, you can download all of them at once by removing the name parameter from the download step.
By default, artifacts are stored for 90 days, but I would recommend setting the retention time to the lowest value you require, since retention increases the storage costs of your organization and, most of the time, a long retention is not necessary.
...
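A sketch of the GitHub Actions equivalent, using the upload-artifact and download-artifact actions (artifact name and retention period are illustrative):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: echo "hello" > output.txt
      - uses: actions/upload-artifact@v4
        with:
          name: drop
          path: output.txt
          retention-days: 5   # keep retention short to save storage
  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: drop          # omit "name" to download all artifacts
      - run: cat output.txt
```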
You can find the GitHub docs about artifacts here.
Accessing and passing secrets is entirely different in both tools - well, sort of. OK, it can be. Let me explain.
Azure DevOps has several ways to store secrets at rest within the tool itself: you can store them per pipeline as (secret) variables, or you can create libraries, in which variable groups and secure files store them grouped together. You can even link a variable group to an Azure Key Vault and allow the variable group to access the secrets there.
In GitHub Actions, it is a little easier - you can create secrets per repository, organization-wide secrets, and secrets within environments - that's it. It is worth mentioning that organization-wide secrets for private repositories require an enterprise license, which might take away that option entirely and leave you only with secrets stored in your repositories.
Personally, I like the simple approach of GitHub here, but the Key Vault integration of Azure Pipelines is really nice too, so take this with a grain of salt.
You can define secrets as variables per pipeline and flag them as secret values:
Libraries on the other hand have several ways to store and access secrets as well as regular variables. You can create a variable group that stores several entries, each entry can be marked as a secret and later in your pipeline, you can reference the entire variable group.
...
You can connect the variable group to an existing Azure Key Vault by using the toggle and selecting the Vault you want to use.
Another way is to use secure files. Secure files are just regular files that will be treated as secrets. Usually, secrets are key-value pairs, but there are other forms too; a certificate, for instance, cannot be stored as a key-value pair because of its format, and this is where secure files come into play.
You can upload a secret file and download it during a pipeline run to use it.
...
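A minimal sketch of downloading a secure file with the DownloadSecureFile task (the file name my-certificate.pfx is an assumption):

```yaml
steps:
  - task: DownloadSecureFile@1
    name: myCert              # reference name for later steps
    inputs:
      secureFile: my-certificate.pfx
  # The task exposes the local path via <name>.secureFilePath
  - script: echo "Certificate downloaded to $(myCert.secureFilePath)"
```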
You can read more about Variable Groups.
In GitHub Actions, you can use secrets as key-value pairs only. They get a name and the secret value itself. Storing and accessing them is where it gets interesting.
If your secrets have the same name at different scopes (organization, repo, environment), the lowest level takes precedence
If a secret with the same name exists at multiple levels, the secret at the lower level takes precedence. For example, if an organization-level secret has the same name as a repository-level secret, then the repository-level secret takes precedence. Similarly, if an organization, repository, and environment all have a secret with the same name, the environment-level secret takes precedence. >source
Environment > Repository > Organization
As mentioned above, if you want to use organization wide secrets within private repos, you need an enterprise license, however, it is free to use in public repos.
You can reference secrets as follows:
...
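For illustration, a job could pass a secret to a script as an environment variable like this (the secret name and script are placeholders; with the environment set, an environment-level secret of the same name would take precedence):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - run: ./deploy.sh
        env:
          API_TOKEN: ${{ secrets.API_TOKEN }}
```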
Repository secrets are declared on a repository level. Go to the Settings tab on your repository.
...
To create an environment, check the Deployments / Environments section of this post.
You can manage secrets there:
...
In part 4 we will check out templates, draw a conclusion and go over examples.
]]>This is a multi part article with the following parts:
Based on these requirements, an Office 365 Single Tenant is usually preferred and necessary. This article looks at merging the existing Office 365 tenants and data migration.
The first step, as with every project, is planning. I would like to give some hints and impulses on a technical basis from experience. Please understand these points not as a complete project plan, just as a part to assist the technical migration planning.
First of all, after the business requirements for merging the business units/companies have been defined, there is the scenario. In the first step, this includes determining which companies (Office 365 tenants) should be transferred: for example, should Company-B (Tenant-B) be integrated into Company-A (Tenant-A), should a new tenant be created for both companies, or should only some business units be transferred?
All these are basic considerations that should be discussed, analyzed and described in advance. The following factors should always be considered:
]]>A tenant migration is usually a complex, cost-intensive project. The user impact and the possible service restrictions should be planned into the project. The more services are used in the source environments (Azure / Office 365), the more complex migration scenarios become, so this fundamental decision should be made for the long term.
To set a filter query, we open Power Automate and create a new SharePoint action 'Get items'. In this 'Get items' action, under 'Show advanced options', we can show all further options for reading the list items, including the option 'Filter Query'.
Within this filter query, we can now filter and limit our list items based on values from our SharePoint list or using expressions.
To filter on list item values, we first need the column name and the corresponding value to filter on. In our example, the column name is 'Product' and the value is 'Cloud'. This means we only want to output list items with these values for further processing in the Power Automate workflow.
Filter Query Product = Cloud
(Product eq 'Cloud')
For example, if you want to filter for multiple values, you can easily combine them using 'and' or 'or'.
(Product eq 'Cloud') and (ProductDescription eq 'Test') or (Title eq 'Demo')
There are, of course, a lot of other possibilities to filter for specific values, for example with 'startswith', 'endswith' or 'length', to give just a few examples.
startswith(Title, 'Demo')
If the filter does not work in your case, it is usually because the wrong column name is entered in the query. This can be caused by the column being renamed or containing spaces or special characters. In this case, it is a good idea to run the flow once without the query and to take a closer look at the action output in order to copy the correct column name into the flow.
You can use date fields in the query field, too, but you might want to use the current date to match the SharePoint entry.
For this purpose, there is a predefined expression 'utcNow()' in Power Automate which returns the current date information. This can be easily integrated into queries.
In some cases, the comparison of dates is not sufficient because, for example, the list element is to be processed in the workflow before the date has expired.
Let's assume the scenario that 10 days before the 'Expiration Date' is reached, the SharePoint list entry should no longer be filtered and an email notification should be sent to a specific person in the workflow.
We can implement this requirement using an expression as follows. First we define our selected column 'Expirationdate', then we select the desired operator, which in this case is the range operator 'lt'.
Equality operators:

eq: Test whether a field is equal to a constant value
ne: Test whether a field is not equal to a constant value

Range operators:

gt: Test whether a field is greater than a constant value
lt: Test whether a field is less than a constant value
ge: Test whether a field is greater than or equal to a constant value
le: Test whether a field is less than or equal to a constant value

In the next step, we format utcNow() and extend it with our 10 days, so that list items are no longer filtered 10 days before the Expiration Date is reached. Furthermore, we set the formatting to 'yyyy-MM-dd'.
(Expirationdate lt '@{formatDateTime(addDays(utcNow(),+10),'yyyy-MM-dd')}')
If we take a look at the flow history after a successful execution, we can see that utcNow has become a calculated field and 10 days have been added to today's date.
]]>This is a multi part article - find the other parts at the links below:
Triggers define when a pipeline starts. All triggers are event-based and range from a manual trigger to a push to the repository, and many more.
Azure DevOps has a couple of triggers:
It starts with the trigger keyword:
trigger:
Timer triggers are configured with the schedules keyword:
schedules:
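A schedule block could be sketched like this (the cron expression and branch filter are examples):

```yaml
schedules:
  - cron: "0 6 * * 1-5"    # every weekday at 06:00 UTC
    displayName: Weekday morning run
    branches:
      include:
        - main
    always: true           # run even if there are no code changes
```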
Scheduled pipelines can still have a trigger based on commits, but if you do not need this, or if you do not need a trigger at all, set the trigger as follows:
trigger: none
GitHub Actions on the other hand, can be triggered in several ways.
Everything starts with the on keyword:
name: build and deploy
By default, Azure Pipelines can be triggered manually from the portal; GitHub Actions must have a specific trigger for this - workflow_dispatch.
Adding parameters at runtime can be great to make quick modifications for a manually triggered pipeline. For instance, you could choose the agent operating system.
trigger: none
You can reference parameters as follows:
vmImage: ${{ parameters.<parameterName> }}
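A complete runtime-parameter example could look like this (the parameter name and value list are illustrative):

```yaml
trigger: none

parameters:
  - name: image
    displayName: Agent operating system
    type: string
    default: ubuntu-latest
    values:
      - ubuntu-latest
      - windows-latest
      - macOS-latest

pool:
  vmImage: ${{ parameters.image }}
```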
You can learn more about runtime parameters here.
In GitHub Actions, runtime parameters are part of the trigger workflow_dispatch. We can add inputs to the trigger and reference them later in the pipeline.
name: build and deploy
You can reference workflow_dispatch inputs as follows:
runs-on: ${{ github.event.inputs.<input name> }}
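A workflow_dispatch example with an input could be sketched as follows (the input name is illustrative):

```yaml
on:
  workflow_dispatch:
    inputs:
      runner:
        description: Runner operating system
        required: true
        default: ubuntu-latest

jobs:
  build:
    runs-on: ${{ github.event.inputs.runner }}
    steps:
      - run: echo "Running on ${{ github.event.inputs.runner }}"
```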
You can learn more about workflow_dispatch input here.
By default, Azure Pipelines stages and jobs are executed one after the other. You can create dependencies or even start them at the same time.
GitHub Actions executes jobs by default in parallel.
Therefore, the approach is entirely different, but it's quickly configured.
Dependencies are created using the dependsOn keyword within stages or jobs. Conditions are added with the condition keyword.
...
Now, the stage Deploy depends on the stage Build, and Build has to run successfully; if it fails, Deploy will not run. The same condition and dependsOn settings can be applied to jobs.
If you want to start two stages or jobs at the same time, add the following dependsOn statement: dependsOn: []
- stage: Build_Windows
Now, Build_Windows and Build_Linux start at the same time, and Deploy will wait until both have finished successfully.
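The fan-out / fan-in pattern described above could be sketched like this (the stage contents are placeholders):

```yaml
stages:
  - stage: Build_Windows
    dependsOn: []            # empty list = start immediately
    jobs:
      - job: Build
        steps:
          - script: echo "Building on Windows"
  - stage: Build_Linux
    dependsOn: []            # starts in parallel with Build_Windows
    jobs:
      - job: Build
        steps:
          - script: echo "Building on Linux"
  - stage: Deploy
    dependsOn:               # fan-in - wait for both build stages
      - Build_Windows
      - Build_Linux
    condition: succeeded()   # only run if both succeeded
    jobs:
      - job: Deploy
        steps:
          - script: echo "Deploying"
```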
You can find the full list of conditions here.
As mentioned above, GitHub Actions behaves totally differently from Azure Pipelines. Concurrency is the built-in default.
...
Both jobs would be executed simultaneously. To create a dependency, the needs keyword is required.
...
To create a condition, a simple if statement can be added. By default, when you use the needs keyword, the previous job has to run successfully in order for the dependent job to be executed.
...
To create multiple dependencies, you can use the following snippet:
...
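Such a multi-dependency could be sketched like this (the job names are illustrative):

```yaml
jobs:
  build-windows:
    runs-on: windows-latest
    steps:
      - run: echo "Building on Windows"
  build-linux:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Building on Linux"
  deploy:
    needs: [build-windows, build-linux]   # waits for both jobs
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploying"
```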
Agents or runners are the machines the pipeline is executed on. Both Azure Pipelines and GitHub Actions have cloud-hosted, publicly available agents with several different operating systems:
Each agent has software preinstalled, so you don't have to worry about it. However, sometimes it is necessary to install a specific version or to install the tools you require yourself.
The table below contains links to the agents and how they are set up.
Azure Pipelines | GitHub Actions | |
---|---|---|
Available Worker | Link | Link |
Installed Software | find the link in the included Software row of the link above | Link |
You can also host your own agents, either on a virtual machine or as a container, if you have privacy requirements or need more performance.
It is important to note that Azure DevOps charges $15 per month per self-hosted agent, while GitHub does not apply any charges.
Kind of Agent | Docs |
---|---|
Self-hosted Windows agents | Link |
Self-hosted macOS agents | Link |
Self-hosted Linux agents | Link |
Self-hosted agent in Docker | Link |
In Azure Pipelines, the agent can be configured in two places, either on the stage or the job. The job can inherit the configuration from the stage.
It is defined by the pool keyword:
- stage: Build
A job can also have its own pool configured:
- stage: Build
GitHub Actions defines its runners per job using the runs-on keyword:
...
Self-Hosted runners can be configured per organization within the settings:
Select the operating system you want to deploy to and follow the instructions below.
In part 3 we will check out deployments, variables, secrets, and more!
]]>