
Azure Pipelines – Deploy SQL Changes with Service Principal

For several years, Microsoft has provided a project type in Visual Studio (in tooling previously aliased as BIDS) called SQL Server Database Project. This project type allows developers to manage database schema in a project. When the project is built, a DACPAC is generated, which sqlpackage.exe uses to deploy (technically, publish) changes to a server. This is available for SQL Server, as well as Azure SQL Database/Azure SQL Managed Instance.
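To ground that, here is roughly what a command-line publish looks like. This is a minimal sketch; the file, server, and database names are hypothetical placeholders:

# A minimal sketch of publishing a DACPAC from the command line.
# The file, server, and database names below are placeholders.
sqlpackage.exe /Action:Publish `
    /SourceFile:"Warehouse.dacpac" `
    /TargetServerName:"myserver.database.windows.net" `
    /TargetDatabaseName:"Warehouse"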

In Azure DevOps, a built-in task, Azure SQL Database deployment, is available to deploy DACPACs to Azure SQL DB/Azure SQL Managed Instance. (Going forward, I will shorten these two services to simply Azure SQL.) The Classic Editor for Azure Pipelines provides a nice user interface to configure the task. It is important to remember that when the pipeline executes, it runs a YAML file, and in YAML everything is stored as a string. Within the task's properties, there are multiple authentication types, as shown in the screenshot below. For 4 of the top 5 (excluding Active Directory – Integrated), passwords will be stored in the configuration…in plain text.

These options are not optimal for a secure deployment environment. The Service Principal option, however, is: it is an Azure Active Directory object that can be referenced by GUIDs and a secret instead of a stored password.

Let’s get started…

Create an App Registration

Creating an App Registration isn't difficult, but it does require elevated permissions in Azure Active Directory. Unless you are an administrator, your infrastructure engineer or Azure administrator will need to perform the setup.

  • In the Azure Portal, search for your organization’s Azure Active Directory. Within the blade, locate App registrations from the left navigation bar, under Manage.
  • The organization's existing app registrations will be listed. Click New registration at the top of the blade.
  • Define the Name of the App registration to get started. Under Supported account types, the most likely option is the first, Accounts in this organizational directory only. Click Register to create the registration.
  • Upon completion, the portal will take you to the application, which has information like the following screenshot.
  • Keep track of the Application (client) ID and Directory (tenant) ID from the Essentials section of the Overview.
  • Next, a secret needs to be created in the App registration. Under Manage on the navigation bar, choose Certificates & secrets.
  • In the middle of the screen, choose New client secret. Enter a description and the duration the secret should be valid. Click Add to create the secret.
  • The new secret will appear in the list. In the Value column, copy the secret to the clipboard and temporarily save it in a text editor. (Note: this must be done before navigating away from the secrets blade; otherwise, the secret will no longer be available to copy and a new one will have to be created.) A scripted alternative to these portal steps is sketched below.
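If you would rather script the creation than click through the portal, the Azure CLI can create the App registration, service principal, and secret in one command. This is a minimal sketch, and the display name is hypothetical; the output contains the appId (Application ID), password (the secret) and tenant (Directory ID) values needed in the next section:

# Creates an App registration and service principal, and generates a client secret.
# The display name is a placeholder; copy the password from the output immediately.
az ad sp create-for-rbac --name "WorkshopServiceConnection"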

Create the Azure DevOps Service Connection

With the App registration created, the Azure DevOps service connection can be created. In a PowerShell session, use the following script. From the steps above, <Secret>, <ApplicationID> and <DirectoryID> are known. The <SubscriptionID> can be found in the Azure Portal by searching for the company's subscription that contains the SQL instance. <DisplayName> is the name the Service Connection will have within Azure DevOps. The --organization argument needs to be filled in with your organization name in Azure DevOps; your default URL to Azure DevOps is what you want here. Finally, <Project> is the Azure DevOps project in which you want to create the Service Connection.

$env:AZURE_DEVOPS_EXT_AZURE_RM_SERVICE_PRINCIPAL_KEY="<Secret>"
az devops service-endpoint azurerm create `
          --azure-rm-service-principal-id <ApplicationID> `
          --azure-rm-subscription-id <SubscriptionID> `
          --azure-rm-tenant-id <DirectoryID> `
          --name <DisplayName> `
          --organization "https://dev.azure.com/<Organization>" `
          --project <Project>
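Note that the az devops commands come from the azure-devops CLI extension, so it may need to be installed first. Once the script completes, the new connection can be confirmed from the same session; a sketch using the same placeholders:

# One-time setup: install the Azure DevOps extension for the Azure CLI.
az extension add --name azure-devops

# List the project's service connections to confirm the new one exists.
az devops service-endpoint list `
    --organization "https://dev.azure.com/<Organization>" `
    --project <Project> `
    --query "[].name"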

Assign SQL Permissions

The last step is to grant the App registration access to the Azure SQL database. Within SQL Server Management Studio, or your preferred T-SQL editor connected to the database, run the following T-SQL statements while logged in as a database administrator. Two assumptions are being made. First, the App registration (and service connection) is named WorkshopServiceConnection; the name used with FROM EXTERNAL PROVIDER must match the Azure Active Directory display name. Second, the user should be added to the db_owner role. Please adjust these appropriately for your environment.

CREATE USER [WorkshopServiceConnection] FROM EXTERNAL PROVIDER;
EXEC sp_addrolemember @rolename = 'db_owner', @membername = 'WorkshopServiceConnection';
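If you prefer to stay in PowerShell, the same statements can be run with Invoke-Sqlcmd from the SqlServer module. A minimal sketch, assuming you are signed in to the Azure CLI as an Azure Active Directory admin of the server; the server and database names are hypothetical:

# Acquire an access token for Azure SQL as the signed-in Azure AD admin.
$token = az account get-access-token --resource "https://database.windows.net/" --query accessToken -o tsv

# Run the user creation and role assignment against the target database.
Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" `
    -Database "Warehouse" `
    -AccessToken $token `
    -Query "CREATE USER [WorkshopServiceConnection] FROM EXTERNAL PROVIDER; EXEC sp_addrolemember 'db_owner', 'WorkshopServiceConnection';"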

Wrap-up

With these items configured, the Azure Pipeline can now deploy DACPACs to an Azure SQL database in a much more secure manner. Passwords are no longer stored in the clear. If someone gets hold of your Azure DevOps project, they might see the application ID from Azure Active Directory, but the secret won't be visible. Authentication to Azure SQL is managed through the App registration, which requires the secret along with the Application ID.

Azure DevOps – Job-Focused vs Task-Focused

This is the first post in the series, Azure DevOps Pipeline Odds and Ends. The starting page has information about the code environment I'm working with, as well as links to other posts in this series.

The first step in consolidating the builds into a single pipeline was to gather code from four repositories into a single location. From there, the projects could be built and deployed as a set. Based on our existing pipelines, I created jobs, first to check out code, then to build and publish the solutions. Below is the start of the “BuildDACPACs” stage of the pipeline.

- stage: BuildDACPACs
  jobs:
  - job: BuildDACPACs
    displayName: Build and publish database DACPACs
    workspace:
      clean: all
    steps:
    # First step: check out the repositories with the source code and other scripts the pipeline needs.
    - checkout: git://AutomatedTestingPoC/AutomatedTestingPoC #Optional but included to gain access to artifacts in the "root" repo, such as PowerShell scripts needed in the deployment process
    - checkout: git://AutomatedTestingPoC/Staging
    - checkout: git://AutomatedTestingPoC/SystemLogging
    - checkout: git://AutomatedTestingPoC/Warehouse
    - checkout: git://AutomatedTestingPoC/AzureDataFactory
  - job: BuildWarehouseDB
    displayName: Build Warehouse database project
    steps:
    # Second step: Build and publish the Warehouse database project 
    #   Two things:
    #     1) the DACPAC is buried, so it is being copied to a pre-defined 'ArtifactStagingDirectory' from the bin folder
    #     2) There are 2 settings files for publishing; one for Automated Testing and another for other environments.  Those need to be copied into the artifacts folder, along with the DACPAC.
    - task: VSBuild@1
      displayName: Build Warehouse Database
      inputs:
        solution: "$(Build.SourcesDirectory)/Warehouse/**/*.sln"
        platform: $(buildPlatform)
        configuration: $(buildConfiguration)
        clean: true
    - powershell: copy-item (get-childItem "$($env:Build_SourcesDirectory)\Warehouse" -Filter *.dacpac -Recurse | select fullname).FullName $($env:Build_ArtifactStagingDirectory)
      displayName: Copy Warehouse DACPAC
    - powershell: copy-item (get-childItem "$($env:Build_SourcesDirectory)\Warehouse" -Filter Default.publish.xml -Recurse | select fullname).FullName "$($env:Build_ArtifactStagingDirectory)\Warehouse.Publish.xml"
      displayName: Copy Warehouse Publish Settings XML
    - task: PublishBuildArtifacts@1
      displayName: Publish Warehouse Database
      inputs:
        PathtoPublish: '$(Build.ArtifactStagingDirectory)'
        ArtifactName: 'BlueMonkey_artifacts'
        publishLocation: 'Container'

The YAML syntax was valid, and the pipeline started to run. The checkouts worked as expected, leaving each repository’s code in its own directory. The Warehouse database build was the first to start, and it failed on the build step itself. In Image #1, the error states the solution couldn’t be found in the folder path, ‘D:\a\1\s\Warehouse\**\*.sln’. Not being the most observant character, I was stumped and didn’t understand what was happening. After some digging, I realized the step before the build was doing exactly what it said it was doing. Imagine that!! I had checked out code earlier, but a checkout was happening again at the start of the second job. Looking through the YAML above, there wasn’t an explicit step to check out the code again. Where was this being called?

Image #1 – Pipeline logs; Warehouse database failed

It turns out that when a job is defined, ADO performs a number of steps, including setting up an agent. This occurred for the first job, which checked out the code, and it happened again for the job that builds the database. One of the automated steps is to check out the repository in which the YAML file is defined. To avoid wiping out the code we had already checked out, we can’t use a separate job for each major process in building and deploying the solution. On line 5 of the YAML above, a clue was provided by the workspace: clean option. That led me to dig deeper into what was happening and to review the logs in more detail.

Starting: Initialize job
Agent name: 'Hosted Agent'
Agent machine name: 'WIN-6C2TM2SQ98G'
Current agent version: '2.211.0'
Operating System
Runner Image
Runner Image Provisioner
Current image version: '20221002.2'
Agent running as: 'VssAdministrator'
Prepare build directory.
Set build variables.
Download all required tasks.
Downloading task: VSBuild (1.208.0)
Downloading task: PowerShell (2.210.0)
Downloading task: PublishBuildArtifacts (1.200.0)
Checking job knob settings.
   Knob: AgentToolsDirectory = C:\hostedtoolcache\windows Source: ${AGENT_TOOLSDIRECTORY} 
   Knob: AgentPerflog = C:\agents\perflog Source: ${VSTS_AGENT_PERFLOG} 
Finished checking job knob settings.
Start tracking orphan processes.
Finishing: Initialize job

Instead, we need to structure the YAML to run all of the tasks within a single job. The build and publish tasks are now contained within one job, BuildArtifacts. When the job is initialized, the current repository is still checked out, but the next steps explicitly check out all of the repositories, placing each one into its own subfolder under $(Build.SourcesDirectory).

- stage: BuildArtifacts
  jobs:
  - job: BuildArtifacts
    displayName: Build and publish database DACPACs and ADF code
    workspace:
      clean: all
    steps:
    # First step: check out the repositories with the source code and other scripts the pipeline needs.
    - checkout: git://AutomatedTestingPoC/AutomatedTestingPoC #Optional but included to gain access to artifacts in the "root" repo, such as PowerShell scripts needed in the deployment process
    - checkout: git://AutomatedTestingPoC/Staging
    - checkout: git://AutomatedTestingPoC/SystemLogging
    - checkout: git://AutomatedTestingPoC/Warehouse
    - checkout: git://AutomatedTestingPoC/AzureDataFactory
    # Second step: Build and publish the Warehouse database project 
    #   Two things:
    #     1) the DACPAC is buried, so it is being copied to a pre-defined 'ArtifactStagingDirectory' from the bin folder
    #     2) There are 2 settings files for publishing; one for Automated Testing and another for other environments.  Those need to be copied into the artifacts folder, along with the DACPAC.
    - task: VSBuild@1
      displayName: Build Warehouse Database
      inputs:
        solution: "$(Build.SourcesDirectory)/Warehouse/**/*.sln"
        platform: $(buildPlatform)
        configuration: $(buildConfiguration)
        clean: true
    - powershell: copy-item (get-childItem "$($env:Build_SourcesDirectory)\Warehouse" -Filter *.dacpac -Recurse | select fullname).FullName $($env:Build_ArtifactStagingDirectory)
      displayName: Copy Warehouse DACPAC
    - powershell: copy-item (get-childItem "$($env:Build_SourcesDirectory)\Warehouse" -Filter Default.publish.xml -Recurse | select fullname).FullName "$($env:Build_ArtifactStagingDirectory)\Warehouse.Publish.xml"
      displayName: Copy Warehouse Publish Settings XML
    - task: PublishBuildArtifacts@1
      displayName: Publish Warehouse Database
      inputs:
        PathtoPublish: '$(Build.ArtifactStagingDirectory)'
        ArtifactName: 'BlueMonkey_artifacts'
        publishLocation: 'Container'

With each repository’s code stored in its own subfolder, as originally intended, the build task finds the code and the build process proceeds.

Image #2 – Pipeline logs; Warehouse build succeeded

This was my first major hurdle, and it made clear that there is a lot going on behind each YAML instruction. That is what makes implementing pipelines so simple; at the same time, more care has to be taken to ensure the pipeline behaves as you expect.

Azure DevOps Pipeline Odds and Ends

When one works in a smaller company, there is an expectation of wearing many hats. As the Data Operations Team Lead, I’m no different. In addition to working with SQL Server, Azure infrastructure, Azure Data Factory and more, I have started to tackle Azure DevOps. Specifically, I’m working to enhance, or streamline, existing build and release pipelines to work better in our organization. Learning how pipelines are built and behave while revamping the process has led to some deeper dives into YAML and what Azure DevOps is doing.

Some of the developers on our team are relatively new to developing code in a corporate environment and using source control. To help reduce issues around check-ins and code merging, we have made some conservative decisions. The data warehouse solution consists of 3 database projects, Staging, Warehouse and SystemLogging, as well as Azure Data Factory artifacts. Each solution is stored in its own repository within the same Azure DevOps project. The build and release pipelines are within a proof of concept project, AutomatedTestingPoC.

In this series of posts, I’ll document some of the nuances found in Azure DevOps and try to explain their benefits and drawbacks. The links below are listed chronologically, as I document them.

Check back often, as the list will grow and topics will be linked.