SQL Change Automation and GitLab CI/CD (a.k.a. Oh this is fun on Windows)

“You never know what you can do until you try, and very few try unless they have to.”
C.S. Lewis

Well I don’t have to, but many of the people I speak to on a daily basis are moving into GitLab, so it’s about time I tried it! Here you’ll find a testament to the mistakes I make as I try to set up a full end-to-end database change management process with SQL Change Automation and GitLab.

Will it all work perfectly? I don’t doubt that everything will fall over at some point, but let’s see how we get on all the same, and hopefully if you’re setting up this same pipeline, you’ll be able to avoid the errors and failings I inevitably cause! So here we go!

[GIF: ready, come on]

Let’s set up a GitLab Project (and rename the default branch)

Naturally, I didn’t have a GitLab account, so I had to set one up. I’m assuming that if you’re already using it, or have just started, you’re taking advantage of the more business-oriented features, but I’ve just stuck with the good ol’ free account for now! It was remarkably simple: sign up, confirm your email address, and here we are:

Ok there is something very cool I like about setting up a new project, can you tell what it is?

You can set up a completely blank new project, but they also offer templates, you can import projects OR, and I love this, you can set up a full CI/CD pipeline from another repo! Having done this before in Azure DevOps, it was not easy, let me tell you. It really seems like Azure DevOps hates you for setting up CI/CD from an external repo, even though it has plenty of helpful ways of doing so!

So I initialized my repository with a README and updated it:

Don’t ever say I’m not descriptive enough!

The first thing I did was rename my default branch to ‘trunk’ by going to Branches, creating the new branch, then in Settings > Repository changing it to the default and swapping the protected status over from the outdated master:

Then finally delete the old default in Repository > Branches:

Excellent. Now it’s time to clone trunk onto my machine as we will need the local repository to put our change automation project in!

I created a folder called GitLab test and cloned the mostly empty repository into it:
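If you’re following along at the command line, that amounts to something like this (the project URL is just a placeholder for your own repo):

cd "C:\GitLab test"
git clone https://gitlab.com/<your-username>/<your-project>.git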

Simple!

Create a new SQL Change Automation project and push it to trunk

In SSMS I opened up the most recent version of SQL Change Automation and created a new project called “DoggosAreCoolDB” using a copy of a Dev database I had lying around from a previous demonstration (BlogsDotRedgate):

Then I created my baseline as a migration script against the upstream copy, BlogsDotRedgate_Integration, because who has access to Prod for this? Am I right? *cough & shifty eyes* not me!

I successfully generated my baseline and a change script (I added a column to a table, nice and simple) and then committed them to my local repo, and pushed! Forget branching, today isn’t about that, we’re just PUSHING TO TRUNK, WOO-HOO!

Setting up the CI/CD Pipeline

Now that we have our project and migrations in GitLab we can build out a pipeline! So, first stop, I went straight to CI/CD > Pipelines, was presented with the option to get started, and hit “Get Started”:

They immediately throw you into a Quick Start “Help”-style guide, which is a little unintuitive at first but surprisingly helpful if you read the whole thing. Effectively we need a YAML file called .gitlab-ci.yml that will store our pipeline as code, telling it how and where to build, and we need a runner to actually fire up and execute these steps.

In my experience with some other CI/CD tools, it’s been advantageous to create the Runner / Agent first on the machines you’re going to be using, and as I just have my laptop to do this on, I will set one up there! I found the full documentation for a Windows Runner here, and followed it just so I would have it available.
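For reference, once you’ve downloaded the gitlab-runner.exe binary, installing and starting it as a Windows service is just a couple of commands from an elevated PowerShell prompt (the folder is simply where I chose to put the binary, not a requirement):

# Install and start the runner service from wherever the binary lives
cd C:\GitLab-Runner
.\gitlab-runner.exe install
.\gitlab-runner.exe start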

The GitLab Runner was up and running in my services, but I’ll be darned if I could see it anywhere in GitLab…

[GIF: confused Britney Spears]

Aha! So it turns out, after a bit of digging, that you need to register the runner with the project using the token from the CI/CD section of the project settings. That was probably my bad for not reading the documentation thoroughly, but my counter-argument… who actually does? So I issued the register command, applied tags and a description, and chose my runner type. I chose shell because I need to be able to run PowerShell on the machine (I’ll need the SQL Change Automation PowerShell components available on the machine where the Build and Deployment are happening, of course):
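The registration itself was something along these lines – the token comes from that same Settings > CI/CD > Runners section, the description is whatever you fancy, and the ‘sql’ tag is the one you’ll see referenced in the YAML below:

# Register the runner against the project, using the shell executor so it runs PowerShell
.\gitlab-runner.exe register --non-interactive `
    --url "https://gitlab.com/" `
    --registration-token "<your-registration-token>" `
    --executor "shell" `
    --shell "powershell" `
    --description "Laptop SQL runner" `
    --tag-list "sql"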

I’ve got the runner on the machine, I’m using an instance of SQL Server to build against, now I just need the YAML file (fortunately GitLab has full documentation for how to structure this as well!)

To build my project I’m going to need to know where the repo is cloned to during the process (i.e. where to find the .sqlproj file), so I went digging and found a list of environment variables that can be used in the YAML file. Just to be sure, though, I created and committed the most basic YAML file that would just echo back the location of the cloned files:

stages:
  - DatabaseBuild

Build:
  stage: DatabaseBuild
  tags: 
   - sql
  script: 
   - echo $env:CI_PROJECT_DIR

Once this confirmed that the environment variable worked correctly and that the build pipeline was being fired up correctly on my private runner, I tried something a little more ambitious: building the .sqlproj file, using the cmdlet reference from the SQL Change Automation documentation for help:

stages:
  - DatabaseBuild

Build:
  stage: DatabaseBuild
  tags: 
   - sql
  script: 
   - $projectPath = $env:CI_PROJECT_DIR
   - $project = "$projectPath\DoggosAreCoolDB\DoggosAreCoolDB.sqlproj"
   - $validatedProject = $project | Invoke-DatabaseBuild -TemporaryDatabaseServer "Data Source=PSE-LT-CHRISU\"

and we successfully built a database!

All that’s left to do now is 2 things:

  1. Create a NuGet package as part of the CI build
  2. Release the database changes to the target DB

I’m still using the same machine for the release portion too, so naturally I can use the same runner for this, if you have other servers you’re deploying to you will of course need additional runners.

We can very easily extend what we already have in our YAML file by just telling the process to create and export a new build artifact – I’m going to name it the same as everything else, and then append the BuildId to the end of the file so we always get something unique:

stages:
  - DatabaseBuild

Build:
  stage: DatabaseBuild
  tags: 
   - sql
  script: 
   - $projectPath = $env:CI_PROJECT_DIR
   - $project = "$projectPath\DoggosAreCoolDB\DoggosAreCoolDB.sqlproj"
   - $validatedProject = $project | Invoke-DatabaseBuild -TemporaryDatabaseServer "Data Source=PSE-LT-CHRISU\"
   - $buildArtifact = $validatedProject | New-DatabaseBuildArtifact -PackageId DoggosAreCool.Database -PackageVersion 1.$env:CI_JOB_ID
   - $buildArtifact | Export-DatabaseBuildArtifact -Path "$env:CI_PROJECT_DIR\Export"
  artifacts:
    paths:
     - $env:CI_PROJECT_DIR\Export\DoggosAreCool.Database.1.$env:CI_JOB_ID.nupkg
    expire_in: 1 week

You’ll notice how I’m exporting the NuGet package to the project directory and then uploading it. This is so that we’ll have access to it for the release, but also so that we can use the artifacts argument in our YAML to upload the file and make it a downloadable package through the GitLab interface (if you go to that SPECIFIC job):

Whilst we’re on a roll here (and things haven’t gone wrong for a while) I’m going to add 2 additional stages ALL AT ONCE to both “Create a Database Release Artifact” and “Deploy from a Database Release Artifact” using, once again, the SQL Change Automation PowerShell cmdlets.

Woo-Hoo! I’m invincible!

I broke it.

Can you see what I did wrong? The error is:

New-DatabaseReleaseArtifact : The specified value for the Source parameter is neither a valid
database connection string nor a path to an existing NuGet package file or scripts folder:
'CI_PROJECT_DIR\Export\DoggosAreCool.Database.1.725147351.nupkg'

So, 2 fun things. 1) I forgot to reference an environment variable as a variable at one point, so it was just looking for the literal name of the variable in the path, and 2) it kept erroring out saying my NuGet file isn’t a NuGet file, weird right?

On further inspection it was yet another mistake I made. I’m using the job ID to name the NuGet package, which means that when a later stage tries to find the file the ID has already moved on, because each stage is counted as a different job! Duh!

[GIF: Sylvester Stallone facepalm]

A few quick changes should hopefully sort this out! I’m going to use the pipeline ID instead ($env:CI_PIPELINE_ID), which stays the same across every job in the run, and see if that makes a difference!

Wait. Did it just say the pipeline ran? SUCCESSFULLY? That’s exactly what it said! We can verify that this actually happened as well by checking the DatabaseDeploymentResources folder for the Release Artifact to Integration:

And everything is there! Note you won’t have a changes.html report just yet because this is the first time we’ve successfully deployed to Integration, however if we run 1 more change through (I’ll add a stored procedure):

Boom.

[GIF: Prince Harry mic drop]

Now of course we can add additional stages to this, for manual intervention or to promote to other environments, but I’m going to call it a win here and retire (until the next post) gracefully. I’m sure you’re all wondering what my final YAML file looked like too – well (counterintuitively) I’ve popped it all into GitHub for you and pasted it below. Enjoy!

stages:
  - DatabaseBuild
  - CreateRelease
  - DeployToIntegration

Build:
  stage: DatabaseBuild
  tags: 
   - sql
  script: 
   - $projectPath = $env:CI_PROJECT_DIR
   - $project = "$projectPath\DoggosAreCoolDB\DoggosAreCoolDB.sqlproj"
   - echo "Building project $project"
   - $validatedProject = $project | Invoke-DatabaseBuild -TemporaryDatabaseServer "Data Source=PSE-LT-CHRISU\"
   - $buildArtifact = $validatedProject | New-DatabaseBuildArtifact -PackageId DoggosAreCool.Database -PackageVersion 1.$env:CI_PIPELINE_ID
   - echo "Exporting artifact to $env:CI_PROJECT_DIR\Export"
   - $buildArtifact | Export-DatabaseBuildArtifact -Path "$env:CI_PROJECT_DIR\Export"
  artifacts:
    paths:
     - $env:CI_PROJECT_DIR\Export\DoggosAreCool.Database.1.$env:CI_PIPELINE_ID.nupkg
    expire_in: 1 week

CreateRelease:
  stage: CreateRelease
  tags: 
   - sql
  script: 
   - $integrationDB = New-DatabaseConnection -ServerInstance "PSE-LT-CHRISU\" -Database "BlogsDotRedgate_Integration"
   - $buildArtifact = "$env:CI_PROJECT_DIR\Export\DoggosAreCool.Database.1.$env:CI_PIPELINE_ID.nupkg"
   - echo "Creating Release Artifact for DoggosAreCuteDB - check C:\DatabaseDeploymentResources\DoggosAreCuteInc\ReleaseArtifacts\$env:CI_PIPELINE_ID\Integration for more information"
   - $releaseArtifact = New-DatabaseReleaseArtifact -Source $buildArtifact -Target $integrationDB
   - $releaseArtifact | Export-DatabaseReleaseArtifact -Path "C:\DatabaseDeploymentResources\DoggosAreCuteInc\ReleaseArtifacts\$env:CI_PIPELINE_ID\Integration" -Format Folder

Integration:
  stage: DeployToIntegration
  tags: 
   - sql
  script: 
   - $integrationDB = New-DatabaseConnection -ServerInstance "PSE-LT-CHRISU\" -Database "BlogsDotRedgate_Integration"
   - echo "Deploying changes to Integration"
   - Import-DatabaseReleaseArtifact -Path "C:\DatabaseDeploymentResources\DoggosAreCuteInc\ReleaseArtifacts\$env:CI_PIPELINE_ID\Integration" | Use-DatabaseReleaseArtifact -DeployTo $integrationDB

Using things weirdly – Part 2: Static Data in the Hybrid Model (w/ Redgate SQL Source Control and Change Automation)

“Where’s your will to be weird?”
Jim Morrison

I had some really positive feedback on my last “Using things weirdly” post, and truth be told, I love to use things weirdly. The number of times I’ve heard: “Oh, well, sure yeah I guess it works that way too…” is just too many to count. So imagine how my eyes lit up when I realized that you can do something weird with one of my favorite things to talk about at the moment, The Hybrid Model.

Now if you don’t know about the Hybrid Model then I would suggest you check out my post here that’s all about the different source control models available for your databases!

The Problem

Across both the State based and Migrations based offerings within the SQL Toolbelt, you have access to something very cool: the ability to easily (alongside the schema) source control your static data. Now don’t ‘@’ me because you think I should be referring to it as “Semi-Static” because it might change occasionally and ‘that’s not truly static then is it?‘; I could easily also refer to it as ‘Lookup Data’ or ‘Reference Data’, basically whatever you class things like “Country Codes” and “Currency Codes” as.

Whatever you call it, it can go into your VCS like any part of the database schema:

SQL Source Control: State Based
SQL Change Automation: Migrations Based

But. One thing that – as of writing this RIGHT NOW (23/07/2020 10:09am BST) – is not officially available in the Hybrid Model combining these two methods… is Static Data. The Data tab in SQL Change Automation even disappears when you set it up as a Hybrid workflow:

And this gives me a sad.

[GIF: sad Monty Python and the Holy Grail]

The Solution?

Got your Hybrid Model setup and ready to go? Let’s use it weirdly!

1 – Use SQL Source Control to commit some static data to your state repository. This is as easy as right clicking on your (highlighted green) source control linked database and selecting “Other SQL Source Control Tasks” > “Link/Unlink Static Data“, and picking your tables.

Nothing should be showing in SQL Change Automation:

2 – Unlink SQL Change Automation from the state repository for a moment and link it instead directly to the development database. This will cause it to go into Migrations-First mode. You can do this by clicking the blue source name in SSMS and picking Existing Database instead:

Because it’s technically the same database as the one you’re source controlling in the state repository, it should all just work™ and should tell you there are no dev changes to the source. Then you’ll see the “Data” tab has been enabled:

3 – Select the same tables to source control as you did with SQL Source Control by using the Add Tables wizard:

BUT WAIT!!

[GIF: shocked, oh my god]

Isn’t it now going to generate a migration for our static data?? This isn’t included in the baseline or anything at all so far, so is it going to try and insert all of my static data into tables later on that already have it?

No. Actually we’re fine!

4 – Generate the static data migration script (and look at it for peace of mind). Notice that the script actually has checks in there – because we’re newly adding these tables, the migration will check if the tables are empty before trying to run the script, and only AFTER this migration will SQL Change Automation start generating the differential, incremental static data migrations:

Commit this migration script to your migrations repository, and that’s all we need to do here!

5 – All that’s left now, is to re-hook-up the Hybrid pipeline, follow the same steps you did before in Step 2 but this time, instead of an existing database, just link it back to the state repository like it was before. If you’ve done this right, you should see no changes pending for migrations:

BUT you will notice that if you change any static data with SQL Source Control, it should now show up in SQL Change Automation!

Change to static data prior to commit in SQL Source Control
Change before migration script generation in SQL Change Automation
Generated migration script to be committed

Conclusion

Is it an intended use? No, absolutely not. The reason it’s disabled is that, as with all things at Redgate, features are considered carefully to ensure users are offered the best possible user experience and functionality – essentially something that meets requirements across the board.

But. We can use it weirdly to, as I say, just make it work™.

What have you used weirdly lately? Let me know!

xRDBMS Database Continuous Integration with Flyway, Azure DevOps and Docker… the simple way.

“Some people try to make everything complicated, be the person who tries to make everything simple.”
Dave Waters

Simplicity is in my blood. That’s not to say I am ‘simple’ in the sense I cannot grasp more than the most basic concepts, but more that I am likely to grasp more complex problems and solutions when they are phrased in simple ways.

This stems from my love of teaching others (on the rare occasion it falls to me to do so), where I find the moment that everything just ‘clicks’ and the realization comes over them to be possibly one of the most satisfying moments one can enjoy in life.

[GIF: shocked Star Trek]

Now recently I’ve been enjoying getting my head around Flyway – an open source JDBC based migrations tool that brings the power of schema versioning and deployments together with the agility that developers need to focus on innovation in Development. There’s something about Flyway that just… ‘clicks’.

It doesn’t really matter what relational database you’re using; MySQL, IBM DB2, even SAP HANA! You can achieve at least the core tenets of database DevOps with this neat and simple little command line tool – there’s not even an installer, you just have to unzip!

Now I’ve had a lot of fun working with Flyway so far and, thanks to a few people (Kendra, Julia – I’m looking at you both!), I have been able to wrap my head around it to, I would say, a fair standard. Caveat on that – being a pure SQL person, please don’t ask me about Java-based migrations, I’m not quite there yet!! But there is one thing that I kept asking myself:

“When I’m talking to colleagues and customers about Database DevOps, I’m always talking about the benefits of continuous integration; building the database from scratch to ensure that everything builds and validates…” etc. etc. so why haven’t I really come across this with Flyway yet?

[GIF: thinking Tom Hanks, The Late Show with Stephen Colbert]

Probably for a few reasons. You can include Flyway as a plugin in your Maven and Gradle configurations, so people writing Java projects already get that benefit. Flyway itself, being simply small incremental scripts, lets developers go backwards and forwards however and as many times as they like with the Flyway Migrate, Undo and Clean commands, so is there really a need for a build? And most importantly, Flyway’s API just allows you to build it in. So naturally you’re building WITH the application.

But naturally, when you’re putting your code together with other people’s code, things have to be tested and verified, and I like to do this in isolation too – especially for databases that are decoupled from the application, or where you have a number of micro-service style databases you’d want to test in parallel; it’s a great way to shift left. So I started asking myself if there was some way I could implement a CI build using Flyway in Azure DevOps, like I would with any of the other database tooling I use on a regular basis? Below you’ll find the product of my tinkering, and a whole heap of help from Julia and Kendra, without whom I would still be figuring out what Baseline does!

Option 1) The simplest option – cmdline

Flyway can be called via the command line and it doesn’t get more simple than that.

You can pass any number of arguments and switches to Flyway’s command line, including specifying which config files it’s going to use – which means that all you have to do is unzip the Flyway components on a dedicated build server (VM or on-prem) and then, after refreshing the migrations available, invoke the command line using Azure DevOps pipelines (or another CI tool) to run Flyway against a database on the build server (or somewhere accessible to the build server) and Bingo!
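As a rough sketch, the step your CI tool runs might boil down to something like this (the paths are placeholders, and the config file is assumed to hold the JDBC URL and credentials for the build database) – clean first so every build starts from nothing, then migrate everything in version control:

flyway -configFiles=C:\FlywayBuild\flyway.conf -locations=filesystem:C:\FlywayBuild\migrations clean
flyway -configFiles=C:\FlywayBuild\flyway.conf -locations=filesystem:C:\FlywayBuild\migrations migrate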

[GIF: no idea, Rooster Teeth]

And that’s all there is to it! You get to verify that all of the migrations up to the very latest in your VCS will run, and even if you don’t have the VERY base version as a baseline migration, you can still start with a copy of the database – you could even use a Clone for that!

But yes, this does require somewhere for Flyway to exist prior to us running with our migrations… wouldn’t it be even easier if we could do it without even having to unzip Flyway first?

Option 2) Also simple, but very cool! Flyway with Docker

Did you know that Flyway has its own Docker image? No? Well it does!* Not only that, but we can map our own version controlled Migration scripts and Config files to the container so that, if it can point at a database, you sure as heck know it’s going to migrate to it!

*Not sure what the heck all of this Docker/Container stuff is? You’re not alone! Check out this great video on all things containers from The Simple Engineer!

This was the method I tried, and it all started with putting a migration into Version Control. Much like I did for my post on using SQL Change Automation with Azure SQL DB – I set up a repo in Azure DevOps, cloned it down to my local machine and I added a folder for the migrations:

Into this I proceeded to add my base script for creating the DMDatabase (the database I use for EVERYTHING, for which you can find the scripts here):

Once I had included my migration I did the standard

git add .
git commit -m "Here is some code"
git push

and I had a basis from which to work.

Next step then was making sure I had a database to work with. Now the beauty of Flyway means that it can easily support 20+ RDBMS’ so I was like a child at a candy store! I didn’t know what to pick!

For pure ease and again, simplicity, I went for good ol’ SQL Server – or to be precise, I created an Azure SQL Database (at the basic tier too so it’s only costing £3 per month!):

Now here’s where it gets customizable. You don’t NEED to actually even pass in a whole config file to this process. Because the Flyway container is going to spin up everything that would come with an install of Flyway, you can pass it switches to override the default behavior specified in the config file. You can adapt this either by hard-coding strings or by using Environment Variables alongside the native switches – this means you could pass in everything you might need securely through Azure Pipeline’s own methods.
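For example (and this is purely a sketch – the server, database and user are placeholders, with the password coming from a secret pipeline variable), you could skip the config file entirely and hand the connection to the container through Flyway’s environment variables:

docker run --rm \
  -e FLYWAY_URL="jdbc:sqlserver://yourserver.database.windows.net:1433;databaseName=FlywayBuildDB" \
  -e FLYWAY_USER="builduser" \
  -e FLYWAY_PASSWORD="$(FlywayBuildPassword)" \
  flyway/flyway info

(info just reports the migration status, which makes it a handy smoke test that the connection works.)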

I, on the other hand, was incredibly lazy and decided to use the same config file I use for my Dev environment, but I swapped out the JDBC connection to instead be my Build database:

I then saved this new conf file in my local repo under a folder named Build Configuration – in case I want to add any logic later on to include in the build (like the tSQLt framework and tests! Hint hint!)

This means that I would only need to specify 2 things as variables: the location of my SQL migrations, and the config file. So the next challenge was getting the Docker container up and running, which fortunately is very easy to do in Azure Pipelines; here was the entirety of the YAML to run Flyway in a container (and do nothing with it yet):

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: DockerInstaller@0
  inputs:
    dockerVersion: '17.09.0-ce'
  displayName: 'Install Docker'

- task: Bash@3
  inputs:
    targetType: 'inline'
    script: docker run flyway/flyway -v
  displayName: 'Run Flyway'

So, on any changes to the main branch we’ll be spinning up a Linux VM, grabbing Docker and firing up the Flyway container. That’s it. Simple.

So now I just have to pass in my config file, which is already in my ‘build config’ folder, and my migrations which are in my VCS root. To do this it was a case of mapping where Azure DevOps stores the files from Git during the build to the containers own mount location in which it expects to find the relevant conf and sql files. Fortunately Flyway and Docker have some pretty snazzy and super clear documentation on this – so it was a case of using:

-v [my sql files in vcs]:/flyway/sql

as part of the run – though I had to ensure I also cleaned the build environment first, otherwise it would just be like deploying to a regular database, and we want to make sure we can build from the ground up every single time! This led to me having the following environment variables:
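They were of roughly this shape (the exact folder names will depend on your repo – ‘Migrations’ here is just illustrative, and ‘Build Configuration’ is the folder from earlier):

FLYWAY_LOCATIONS = $(Build.Repository.LocalPath)/Migrations
FLYWAY_CONFIG_FILES = $(Build.Repository.LocalPath)/Build Configuration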

As, rather helpfully, all of our files from Git are copied to the working directory during the build and we can use the variable $(Build.Repository.LocalPath) to grab them! This led to me updating my YAML to actually do some Flyway running when we spin up the container!

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: DockerInstaller@0
  inputs:
    dockerVersion: '17.09.0-ce'
  displayName: 'Install Docker'

- task: Bash@3
  inputs:
    targetType: 'inline'
    script: docker run -v $(FLYWAY_LOCATIONS):/flyway/sql -v $(FLYWAY_CONFIG_FILES):/flyway/conf flyway/flyway clean -enterprise
  displayName: 'Clean build schema'

- task: Bash@3
  inputs:
    targetType: 'inline'
    script: docker run -v $(FLYWAY_LOCATIONS):/flyway/sql -v $(FLYWAY_CONFIG_FILES):/flyway/conf flyway/flyway migrate -enterprise
  displayName: 'Run flyway for schema'

Effectively, this will spin up the VM in ADO, download and install Docker, fire up the Flyway container and then 1) clean the target schema (my Azure SQL DB in this case) and 2) then migrate all of the migrations scripts in the repo up to the latest version – and this all seemed to work great!*

*Note: I have an enterprise Flyway license, which enables loads of great features and support; the different version comparisons can be found described here.

So now, whenever I add Flyway SQL migrations to my repo as part of a branch, I can create a PR, merge them back into Trunk and trigger an automatic build against my Flyway build DB in Azure SQL:

Conclusion

Getting up and running with Flyway is so very very easy, anyone can do it – it’s part of the beauty of the technology, but it turns out getting the build up and running too, when you’re not just embedding it directly within your application, is just as straightforward and it was a great learning curve for me!

The best part about this though – is that everything above can be achieved using pretty much any relational database management system you would like, either via the command line and a dedicated build server, or via the Docker container at build time. So get building!

[GIF: ready, let’s go]

Cloning From Home… a consideration.

“I know there’s a proverb which says ‘To err is human,’ but a human error is nothing to what a computer can do if it tries.”
– Agatha Christie

Working from home is fine because we still have the right connections in place. Televisions to watch only the most stimulating and educational shows, telephones to get instantly in touch with those we hold most dear and crucially the internet to do everything we can to still be as (and sometimes more) productive at work as possible.

But it’s happening more and more now. We have to work from home and this is starting to turn up some problems.

And no. I don’t just mean that my wife is now acutely and accurately aware of how annoying I am.

[GIF: Judge Judy reaction]

I’m talking about clones. Specifically SQL Clone, and clones do not work great from home.

Now for anyone out there who has never heard of SQL Clone, where have you been?? It’s an incredibly intuitive, reliable tool for rapid provisioning of database copies (*cough* definitely not just lifted from the website) – in any case, it’s pretty darn cool.

One of the coolest things about this technology though is how it seamlessly plugs into the Microsoft ecosystem by leveraging the VHD(X) technology available in x64 Windows. This means there are no special file systems, no hard or software “appliances”; it’s very much plug and play (with a little tinkering around ports 14145 and 14146 of course!)

Naturally though it does come with its challenges, as does all technology, and this has been highlighted recently more so than ever.

The Problem

Whilst SQL Clone is a beautiful, elegant and easily automated answer to the database provisioning problem – there are 2 gotchas that you must be aware of when you start using the solution. SQL Clone relies on the relationship between the Image file (the centralized, virtualized parent, as it were, that exists on a Windows fileshare) and the Clones themselves (the 40MB diff disks on the hosts).

Now many people choose to put Clones onto their workstations, which for many of us is our laptops. We have SQL Server Developer installed and we pull down a few clones for different branches, for testing etc. etc. and all is well with the world.

When you’re in the office that is.

When a user queries or works with a SQL Clone and it requires any of the schema/data that was in the original copy and is not in the changes the user has made, a call is made back to the image file (the VHD mounted copy) to fetch it. When you’re in an office setting (with your cup of coffee in one hand and a colleague sat next to the other telling you about their weekend) this is fine because you’re connected directly to the company network and therefore the link between the clone on your laptop and the image file on the fileshare is short, strong and stable.

At home though this isn’t the case. Many of us work on VPNs that are “ok” on internet connections that can only be described as temperamental. So what happens when a clone tries to call back to the image file across this VPN, which is now much further away, across a sea of uncertainty and poor connection?

Bad things happen. Either the Clone cannot connect to the Image and it decides it no longer knows what it is, and it falls into Recovery Pending for a while until the connection is re-established, or if the connection is present it is just so slow.

[GIF: sloth at the DMV]

This isn’t a fault of the tool, I hasten to add, it is just the nature of the technology. Much as we would expect our RDP sessions to be a little laggy if we were based in Greenland and the server was in Australia, it is just part of life.

So… are there solutions? You bet there are!

Option 1: The “jump box”

Many people I have worked with have found that they are still able to leverage SQL Clone whilst working from home by reducing the physical distance between the Clone and the Image, and have done so by introducing a Dev/Test “jump box”.

The way this works is by having an instance of SQL Server available within the company network onto which the Clones are provisioned.

This works great because it means that the link between the Clone and the Image is once again short and strong and stable, relying on the company network, but you can easily connect to or RDP onto this jump box if you need to work with them. Still using your VPN and internet from home? Check! Able to work with a Clone now though? Check!

Option 2: The Cloud

Welcome… to the world of tomorrow!

[GIF: SpongeBob SquarePants, season 1]

By which of course I mean, the world that has been available to us for a while. Infrastructure as a Service (or IaaS) allows us to very easily spin up an additional dev/test server which can be used in the interim. Whilst this does incur a little extra cost in infrastructure it still means that you won’t need as much space on the VMs themselves, thank you SQL Clone!

As long as you have a fileshare available in say Azure or AWS and a Windows VM you’re pretty much good to go and once you’ve got Clones and Images up in the Cloud, you’re reliant on the networks of the providers, and I’ve got to say, they’re not too shabby at all!

Sneaky side note: ALL demos I give of SQL Clone actually run on an EC2 VM with a 91GB Image on a fileshare and it works great!

Option 3: Roll-your-own solution

Ok. I realize this one is not really a solution. But the thing we’ve highlighted here is that it’s all about the distance between the Clone and the Image. Those two love-birds cannot exist long distance.

So if you think you might have another ingeniously simple way of solving this, I would recommend having a read of this magnificent document written by an incredibly clever ex-colleague of mine, plus the “How It Works” documentation, and then letting me know what YOU come up with:

Links: Best Practices for SQL Provision and How It Works

Conclusion

There are a few ways of getting around this problem but they’re not always obvious or clear, and at this trying time, you just want solutions you have in place to work. We all do.

So if you are working with any of Redgate’s tools, or even if you just have a question that I or a colleague might be able to help with – please reach out to us, talking to us won’t cost you anything, but it sure as heck might gain you something!

Stay safe, stay well and have a great week!

Me on Twitter: @PlantBasedSQL
Redgate on twitter: @Redgate
Redgate Support Team: support@red-gate.com

I got to watch a team work fast and it was awesome!

“Unless someone like you cares a whole awful lot, nothing is going to get better. It’s not.”
Dr Seuss

I care a great deal about anything I work on, as I mentioned in my previous post here (ironically about not working for once), so it goes without saying that I am super invested in any customer I help to get up and running with any of the Redgate solutions, and none more so than Data Masker.

Ever since it became part of the Redgate family, Data Masker has been an integral part of my workday – there aren’t many days where I don’t interact with the tool or its concepts in some way, and when things go wrong, the tool breaks, or something doesn’t work like it should, well, not only is it less than ideal for me (showing me up) but it’s not delivering value to the customer.

Now that. I hate with a passion.

However, I’m lucky enough that once that happens, that is not the end of it – we don’t pack everything up and say “well… sorry all, that’s your lot.” No – I get to speak directly to our fantastic support team and the equally as fabulous and helpful development team directly behind the tool, and guess what? They care too. Immensely.

I can think of 2 key examples of this team working in the most incredible way, you wouldn’t even believe (well maybe you would), but it goes to show you what is possible, especially when you break down the silos in your organisation. I never became just “a ticket from a sales engineer”, and this is how they helped me fix 2 problems:

1 – UTF-8 encoding of strings for substitution rules

I was working closely with one of the Business Development folk (little side nod there to Kendra for saying folk so often I’ve started saying it) in Redgate’s sales team, who was working with a potential customer in a country where Arabic is the primary language. As such, you would expect them to want to use Data Masker to mask Arabic names like اَمير‎ (“Amir” in English) into data sets, instead of something like “Frank”, which just doesn’t have the same ring to it.

It turns out that in the port of Data Masker from its older v5.5 to the swanky new v6.0 (yes, this was a little while back), the ability to change the encoding of strings from user-defined data sets had been broken, which meant that the values from Data Masker weren’t being inserted correctly into the table, rendering all of their Arabic sets useless. This was a huge blocker to their trial, which was under time constraints anyway.

I reported this to the Data Masker team on the 7th February 2019 at 12:53pm, created a support ticket for reference at 2.03pm and had personally spoken to them by 3.00pm. The lead developer, support rep, product designer and myself quickly met up to discuss it and agreed that as this was a bug, broken functionality that should exist (and which could block not just this customer but any customer requiring it for other language sets) that they would down tools and work on a fix immediately.

By 9.00am the following day a fix went out the door. Built, tested, deployed. Who did they involve in the testing? Me, initially. Then they waited on feedback from the potential customer, who also confirmed it worked after upgrading.

Wow. That’s what I call fast!

2 – Time-out tennis

More recently (think November 2019) I was working with a customer of ours who I had built up a great relationship with – they were super friendly, super responsive and all round great to work with. Unfortunately as we were getting their masking sets set-up (pun definitely intended) we started encountering time out issues when waiting for masking stats to return.

This was a little irksome, as it was a slight dent in my relationship and credibility and was slowing them down, as it was causing sets to not complete at all.

The problem was though, it wasn’t such an easy fix as with Number 1 above, as it wasn’t exactly clear what was causing the timeouts and I wasn’t really sure what were the best places to check! Fortunately the “Masketeers” as they are more commonly known around Redgate towers, did have an idea of where to look. A nominated member of the development team (and to him I will be forever grateful) almost became a little subdivision of the team – it didn’t require their full might, just someone who knew even more intimately what was happening than me!

Through a few ‘back and forth’s with the customer, experimenting with timeouts and making some tweaks we were able to establish what was going wrong, and ultimately provide a fix. This work became a new branch which was merged into the main base after testing once again and was released the very next day. Finally, the customer let us know it was all working again and sets were completing as they should.

Conclusion?

Sometimes you have the pleasure of working with some unsung heroes where you work, I do it on a regular basis – from the facilities and cleaning team here in the building who do the most incredible work to look after us, to the Sales management team who are constantly looking at ways to make us the best possible company to deal with – Redgate is definitely a place where people can do the best work of their lives.

But on these occasions, I got to witness something special. Cross-functional collaboration. Communication. Empathy. Passion.

And just getting the work done. When it’s about delivering value to customers, feedback, development and testing are everybody’s job, and I’m lucky enough to work with people who put that theory into practice.