Classification by design: Including data cataloging in a DevOps process

“The most powerful tool we have as developers is automation.”
Scott Hanselman

It is no secret that I love to talk about data protection, specifically from the perspective of structured data. When we talk about database development practices, we often find ourselves talking about 3 things:

  • Source Control
  • Continuous Integration and Continuous Delivery/Deployment (CI/CD)
  • Estate Monitoring

Some people refer to this as “DataOps”, others refer to it as “DevDataOps”, but in reality it’s all DevOps, guys. This may be an unpopular opinion (and if it clashes with yours, please forgive me, it’s just my opinion) but just because a certain niche area hasn’t been specifically called out within a subset of DevOps doesn’t mean you have to invent your own term for it!

Now this leads me on to DevSecOps, or as I like to call it… More secure DevOps.


No but seriously, this is a slightly different case – DevSecOps is like DevOps but fortified with security from the ground up. There’s a fantastic article and diagram on Plutora from Mark Robinson showing how this looks (below), and if you haven’t read his article I would definitely go and give it a read!

DevSecOps Diagram

Good DevOps practice is a combination of different things working together: the right mentality, principles, processes and the amazing tools at our disposal, like automation. But this should all include security from the ground up too. DevOps is about putting those principles and practices in place to strengthen the pipeline, so why don’t we treat security in the same way?

Take, for example, 3 pieces of legislation that have been very much in the spotlight:

The controller shall implement appropriate technical and organisational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed.
– GDPR (Europe) Art. 25 “Data protection by design and by default”

Processing agents shall adopt security, technical and administrative measures able to protect personal data from unauthorized accesses and accidental or unlawful situations of destruction, loss, alteration, communication or any type of improper or unlawful processing.

– LGPD (Brazil) Chapter VII, Art. 46 “Security and Secrecy of Data”

“A Controller or Processor is required to implement appropriate technical and organisational measures to demonstrate that Processing is performed in accordance with this Law…”
– DIFC LAW NO. 5 OF 2020 (Dubai) Part 2D, Art. 14 (2) “Accountability and notification”

There’s a common theme running through these, and although lots of global legislation will either allude to, or directly tell you, ways you can be compliant and what some of these “organisational” and “technical” measures are, it’s still pretty blurry.

How do we know what we can do? How do we know what “default” and “design” mean in this context? Well, we build it into the DevOps process.

Now I could sit here forever and talk about why transforming your database development, deployment and provisioning processes allows us to be more secure, but that’s a lot of material and it might have to come in chunks! So what we’re going to focus on today is, as the title suggests, Data Classification and Cataloging.

Why is Cataloging important?

Cataloging structured data is incredibly important because it can be one of the first steps we take to securing sensitive Personally Identifiable Information (PII) or Protected Health Information (PHI) wherever it exists across our database environments. It allows us to make strengthened, contextual decisions about the data we hold including how we treat it in pre-Production, how long we retain it for and which systems and processes consume it.

But the most important part of this is simply: it tells us where the risk is.

Read through any of the most recent data protection laws and you will notice that a few things come up quite a lot, including the “Data Protection Impact Assessment”, or DPIA. Effectively, if you can assess the risk of processing activities, you can more readily answer the data protection questions and challenges you may face.

Knowing where your data resides can be the first step to helping you assess this risk and answer your own data questions. If you want to know more about Cataloging specifically and why it is useful, you can read my previous blog here.

Where does Cataloging fit into DevOps?

This one is simple to answer. Once you have fully classified your entire estate, you’re not done. No – if you’re a development house, or indeed even a single developer, making any schema changes to the tables holding that sensitive data, you’re never done.

The reason for this is that Cataloging is an evergreen activity – if you update tables by removing columns, adding columns, splitting tables, adding tables… anything! Well then you need to be ready to make sure that you are:

a) Prepared and equipped with knowledge of the tables you’re working on and if this is a high risk activity.

b) Updating classification information to reflect the new “truth”, i.e. if you’re adding a column that will collect people’s Twitter handles, then that column should be classified as sensitive, and this should be reflected the moment it is deployed to Production.

So it is important to have the correct people working on this, with the right knowledge, preparation and processes, using the correct tools to ensure that those updates are persisted properly and securely through your deployment pipelines.

Huh… people, processes and tools… That sounds familiar!

The Process: SQL Data Catalog, SQL Change Automation and Azure DevOps

For this little experiment of mine I used Redgate’s SQL Change Automation (Migrations-First approach in SQL Server Management Studio) to develop & deploy and SQL Data Catalog to classify/categorize, and for simple version control and orchestration of this pipeline I opted for Azure DevOps (with the SQL Change Automation CI/CD plugins):

NOTE: Heads up, all of the code I used for this can be found right here in my GitHub, feel free to have at it!

Step 1) Set up a SQL Change Automation project with an Azure DevOps Git repo, then create the YAML file to build it, and create a deployment process to Acceptance and then Production.

Ok – that’s a bit of a mouthful and a massive ask. There are a lot of things there you have to be familiar with that we don’t have time to go into right here. BUT fortunately, if you check out Redgate University right here, my blog post on using Change Automation with Azure DB, and Redgate’s Product Learning section, you’ll be a Database DevOps ‘Whizz-Kid’ in no time!

Fast forward a little and I had my example databases, VCS and pipeline all up and running:

Step 2) The “Theory”: This is where things get interesting. We have an example pipeline set up and we are able to completely deploy all the way through to “Production”, so let’s talk theory.

In SQL Data Catalog I have covered both my Production and Acceptance Databases:

Now, in development we don’t make changes directly to Production, so why should classification be any different? How you adapt the code is up to you – feel free to split it, move it around, incorporate it into Pull Requests if you want to… but I’m going with a bit more of a simple situation.

Situation: A developer makes a change in Development, which gets committed, reviewed and merged to the main branch, resulting in a build and a deployment, in this case to Acceptance, and then it is later deployed to Production.

Now, by Acceptance we should only have the “good work”, i.e. all of our testing is shifted left within DevOps, so Acceptance is basically the last stop before Production. Therefore we should classify the work we have done on Acceptance – crucially, before it gets to Production and starts gathering sensitive data – and then copy this classification up on deployment.

Ideal: We should have no columns on Production that have not been classified.

Step 3) In Practice: Fortunately it’s very easy to automate a lot of these steps with SQL Data Catalog, utilizing its PowerShell cmdlets and REST API. The cmdlets are fully documented and very easy to use (docs here). This allows us to easily scan, classify and copy classifications up to other databases, but as part of the deployment pipeline we’ll also need to do some checks and report any discrepancies that should be investigated (see the sketch after this list):

  • Are there any columns on Acceptance that aren’t classified but have been deployed to Production? (failure to comply with process)
  • Are there any columns on Production that have not been classified? (classification drift)
  • Are there any unclassified columns on Acceptance that have not yet been deployed to Production? (for pipeline hygiene purposes)
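To make that concrete, here’s a minimal sketch of those three checks in PowerShell. The REST route and the response property names (fullName, tags) are assumptions for illustration – check the SQL Data Catalog API documentation for the real ones – but the shape of the logic is the point:

```powershell
# Minimal sketch of the three checks. $dataCatalogUrl, $authToken and the
# instance/database names all arrive as pipeline variables (see further down).
# NOTE: the API route and the 'fullName'/'tags' properties are illustrative
# assumptions -- consult the SQL Data Catalog REST API docs for the real shapes.
$headers = @{ Authorization = "Bearer $authToken" }

function Get-CatalogColumns([string]$instance, [string]$database) {
    $uri = "$dataCatalogUrl/api/v1.0/columns?instanceName=$instance&databaseName=$database"
    Invoke-RestMethod -Uri $uri -Headers $headers
}

$stage = Get-CatalogColumns $stageInstance $stageDb
$prod  = Get-CatalogColumns $prodInstance  $prodDb

$classifiedStage   = ($stage | Where-Object { $_.tags.Count -gt 0 }).fullName
$unclassifiedStage = ($stage | Where-Object { $_.tags.Count -eq 0 }).fullName
$unclassifiedProd  = ($prod  | Where-Object { $_.tags.Count -eq 0 }).fullName

# 1) Classified on Acceptance but not on Production -> copy the classification up
$copyUp = $classifiedStage | Where-Object { $unclassifiedProd -contains $_ }

# 2) Unclassified on Production -> alert so it can be classified before next release
$alert = $unclassifiedProd | Where-Object { $classifiedStage -notcontains $_ }

# 3) Unclassified on Acceptance -> pipeline hygiene warning
$unclassifiedStage | ForEach-Object { Write-Warning "Unclassified on Acceptance: $_" }
```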

The other part of this ‘fun’ is reporting what has been changed in the same process. Fortunately, SQL Change Automation spits out a Changes.json file with its Release Artifacts, and we can steal that away, find out how many tables have been created or changed in this release, and report that back so we can correlate what has been done and what is missing:
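As a rough sketch, pulling the table changes out of that file could look something like this – note that the exact JSON property names are an assumption here, so inspect the Changes.json from your own release artifact to confirm them:

```powershell
# Report the tables touched by this release using SQL Change Automation's
# Changes.json. The 'type', 'name' and 'operation' properties are assumptions --
# open a Changes.json from your own release artifact to confirm what SCA emits.
$changesPath = Join-Path $exportPath 'Changes.json'
$changes = Get-Content $changesPath -Raw | ConvertFrom-Json

foreach ($change in ($changes | Where-Object { $_.type -eq 'Table' })) {
    Write-Host "(Information) Table $($change.name) was $($change.operation) in this deployment."
}
```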

So actually getting this up and running is just going to require 3 things:

  1. The PowerShell script from GitHub (or your own personalized variant) as a step in your production deployment
  2. Data Catalog available and pointed at Acceptance and Production (or your versions of these environments)
  3. Variables set in Azure DevOps to fill the gaps (e.g. Where is Data Catalog? What’s my PowerShell Auth token? What are my Acceptance and PROD DBs called? etc.)

Number 3 is the last step there, so you’ll need something like this to run the script:

  • DatabaseDeploymentJSON – where the JSON file will be with the latest changes in the Prod release
  • DataCatalogAuthToken – Your PowerShell Auth token from Settings in Data Catalog
  • DataCatalogUrl – The full URL to your Data Catalog installation, without the trailing slash (so ending in :15156)
  • ExportPath (Optional) – I specified the path for my Database Deployment Resources to save typing it out in the Redgate plugins
  • ProdDB / StageDB – As you would expect, the Production and Acceptance/Staging DBs you’re deploying to/from
  • ProdInstance / StageInstance – As above, except the instances the databases are located on

In the variables above the Instance and DB names are purely used within Data Catalog, so there’s no need to worry about anything happening to the actual databases themselves!
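For reference, inside an inline PowerShell step Azure DevOps substitutes $(Variable) macros into the script text before it runs, so the top of the script can simply lift the variables above into named values, something like this:

```powershell
# Azure DevOps performs a text replacement of $(...) macros before the script
# executes, so these land as plain strings. Keep the auth token in a secret
# variable and avoid writing it to the log.
$dataCatalogUrl = '$(DataCatalogUrl)'       # e.g. http://your-catalog-server:15156
$authToken      = '$(DataCatalogAuthToken)'
$exportPath     = '$(DatabaseDeploymentJSON)'
$prodInstance   = '$(ProdInstance)'
$prodDb         = '$(ProdDB)'
$stageInstance  = '$(StageInstance)'
$stageDb        = '$(StageDB)'
```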

Once you’ve run through the deployment pipeline a couple of times and the changes.json file is being produced, you can go ahead and copy the script into an inline PowerShell script step in your release and you should find it will fire to life! I simulated an example by modifying my Contacts table and my Articles table, adding 1 column each and deploying both to Acceptance. I then classified just 1 of these in Acceptance in Data Catalog:

and then approved the deployment to Production and tada!

Ok you probably can’t make all that out, but it effectively says:

(Information) Table dbo.Articles was modified in this deployment.
(Information) Table dbo.Contacts was modified in this deployment.

That much we knew!

1 column(s) with classifications were discovered on VoiceOfTheDBA Acceptance that are not classified in VoiceOfTheDBA Production:
dbo.Articles.TestingPineapple

Excellent – we classified that one, so it gets copied up, and we can verify that in Data Catalog against Production:

and finally, we get a warning about Production now containing unclassified columns:

(Alert) The following columns have been discovered on VoiceOfTheDBA Production that require classification:

dbo.Contacts.TestingPineapple

You should classify these columns in VoiceOfTheDBA Acceptance prior to the next deployment.

Just as we expected. Success!


Conclusion

Classification and categorization belong as part of DevOps. If you expect the context for your business decisions around data to remain evergreen and informed, then it cannot sit on the shoulders of one or two people to support it, and it cannot live in a manually updated Excel sheet or document.

By including it within the DevOps process, not only do you add an additional layer of security but you also make it an automated, team activity that can be audited, checked and easily kept up to date.

Is this DevSecOps? Well… not really no. Is this a more secure approach to Database DevOps? Absolutely! Happy DevOpsing!

My wife and I started a food blog!

“Veganism is not a “sacrifice.” It is a joy.”
Gary L Francione

This is just a short one in the grand schema of things. You probably all know that I enjoy blogging, but a lot of my posts end up quite long and rambling. This one, however, will be quite concise.

My wife and I started a food blog.

There. I said it.

She had been saying to me for some time that she wanted a place to take all the vegan recipes we make and to put them up on the interwebs for people to find and make and get as much joy out of as we do, so we finally did it. It stalled a few times and then eventually, like 2 weeks ago, we finally got the first couple of posts up.

If you’re interested in the sort of plant-based cooking we do at home, you’re looking to go plant based or just looking to help the animals out, you can find our blog right here:

https://TheSnugVegans.com/

We look forward to dining with you!

The Snug Vegans

(SQL) Change ALL the Azure SQL Database Automation!

“But I can hardly sit still. I keep fidgeting, crossing one leg and then the other. I feel like I could throw off sparks, or break a window–maybe rearrange all the furniture.”
Raymond Carver

I understand that starting off a blog about Azure SQL Database with the above quote is a little weird, but honestly I’m _really_ excited about what I’m about to tell you.

***Note before starting: This blog post assumes you’re familiar with the concepts of Database Source Control, CI and CD, Azure SQL Database and pipelines within Azure DevOps, otherwise here be dragons.***

I am a huge fan of SQL Change Automation – mostly because of the migrations functionality. In my mind it represents an ideal workflow for making complex SQL Server database changes. If you’re not sure about the different models (State, Migrations, Hybrid), take a look at my blog post from last week here! But until this time it has had one thing that I could not easily do with it… Platform as a Service, Azure SQL DB.

Now don’t get me wrong, SQL Change Automation could easily deploy to Azure SQL Database but I had a problem. The words:

“Chris, how do we benefit from the migrations approach and put the shadow database and build DB in Azure SQL too? We don’t have any local instances or VMs we can use for this, and Dev, Test and Prod are all in PaaS!”

elicited this response:

cry crying GIF

But. No. Longer.

Now for those of you who don’t know, the _SHADOW_ database that SQL Change Automation creates is effectively a schema- and static-data-only copy of your database, and it is dropped and rebuilt each time you verify, to ensure that all of the migrations run successfully and you can effectively check your work and shift the build left (!!), before you even check into source control.

This shadow database and the build database had one thing in common: you couldn’t build them in Azure SQL DB, which left 2 choices:

  • Use an instance of SQL Server: Developer edition locally for the shadow, maybe, and a VM in Azure or an on-prem hosted instance for building
  • [For build specifically] Use LocalDB. Not advisable if your database contains any objects not supported by LocalDB, because (juuuust in case you didn’t know) it is SQL Server Express.

But on May 12th 2020 (and I only found out about this like 2 weeks ago) the SQL Change Automation team at Redgate released version 4.2.20133 of the plugin for SSMS which included a few super cool things like additional Azure SQL support and the Custom Provisioning Scripts feature.*


Now this is great because not only can we now easily create SQL Clones to be used as the development source (and I’ll blog about THAT a little later), but of course you can also use an Azure SQL DB for the shadow AND a persistent Azure SQL DB for the CI build as well!

Now unfortunately Kendra kinda beat me to the punch here and produced a fabulous 3-part video series on using SQL Change Automation solely with Azure SQL DB, which you can view here if you don’t want to see me try it out:

Getting set up

The first thing I did was make sure that I had all of the necessary environments to try this out – I created 3 Azure SQL Databases to mimic Development, Build and Production environments on 2 separate Prod and Non-Prod Azure SQL Servers. I ran the DMDatabase prep scripts (you can find these here) to set up Dev_Chris and Production, but left BuildDB empty.

Next, I hopped over into Azure DevOps and created a new project, initialized it with a README and then cloned it down onto my local environment:

Everything was ready to go so it was time to create my project!

Setting up SQL Change Automation in SSMS

*cough* or if you’re me, update it first because you’re on a REALLY old version *cough*

Then I hit “Create a New Project” and it allowed me to just specify the connection string to the Dev Azure SQL DB and the project location was the checked-out local repo:

I didn’t change any of the options because I’m a rebel and I didn’t feel like filtering anything out! But of course now comes the fun bit… the baseline. I chose my production Azure SQL DB, as it’s my only upstream DB at this point, and it was time to hit “Create Project”.

…and Huzzah! It’s worked and we’re all good!

excited andrew garfield GIF by The Academy Awards

Now… that’s actually not the best bit! The reason why Andrew there is clapping so hard? Well, a little piece of magic has happened in the background: a shadow database has been created for me against my Azure server automatically! This is done using the connection string that is used for dev!

Now… one thing to check, and I didn’t think to do this: you can specify the connection string in the SQL Change Automation user file, but I just left mine for a bit, not realizing it had created an Azure SQL DB for the shadow at a CONSIDERABLY higher tier than my dev environment (bye bye, Azure credit!). Fortunately I was able to scale it down quickly to Basic and that has stuck, but be warned!
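If you hit the same thing, scaling back down is a quick job with the Az PowerShell module – the resource group, server and database names below are illustrative stand-ins for wherever your shadow database landed:

```powershell
# Scale an accidentally-expensive shadow database back down to the Basic tier.
# Resource group, server and database names are illustrative -- point these at
# wherever SQL Change Automation created your shadow.
Connect-AzAccount
Set-AzSqlDatabase -ResourceGroupName 'NonProd-RG' `
                  -ServerName 'nonprod-sqlserver' `
                  -DatabaseName 'DMDatabase_Dev_Chris_SHADOW' `
                  -Edition 'Basic' `
                  -RequestedServiceObjectiveName 'Basic'
```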

So I did what all ‘good devs’ would do now… I committed and pushed my initial commit directly to my main branch! (Don’t tell my boss!)

and safely sat my Database in Azure DevOps:

Setting up the build and deployment stages

This bit was actually just as easy. I used to hate YAML but thanks to a certain (wonderful) Alex Yates I jumped in anyway and it turned out to be just fine!

I created a new basic YAML file within Azure DevOps (using the assistant to auto-populate the Redgate defaults; if you don’t know YAML or what it can do already, there’s a really good MS article here) and committed it to the main branch again (whoopsie). The only component was the SQL Change Automation plugin, which I pulled in from the Azure DevOps marketplace, and I configured the build to target my “nonprod” server and the BuildDB I had created previously.

On saving and running, the pipeline succeeded!

All that was left to do was to create a Release Pipeline, so naturally I jumped straight in, created a new pipeline, started with an empty job and called it Production. *Note*: make sure you also choose your Build artifact before configuring your release stage, by clicking the Add an Artifact option!:

I added the SQL Change Automation: Release step to the agent job (note because this is all hosted, I’m using an Azure DevOps hosted agent to do this step):

Now you’ll need to add 2 stages (both the SQL Change Automation: Release plugin) at this point: a “Create Release” and a “Deploy from Database Release Artifact”. One will look at the target and figure everything out for you, letting you review exactly what will be deployed, and the other will actually _do_ the deployment:

From here you just have to specify the options available, like in this wonderful walk-through here from the fabulous Chris Kerswell of DBAle fame! For me, this was simply targeting my Production Azure SQL Database.

You’ll definitely want to use the project variables to pick up the right package, and also leave the export path blank in both steps for now:

You can clone the step by right-clicking instead if you want to, which will preserve all the connections you’ve already provided! Then, once it’s all pointed at the right place, save and queue the release!

And of course, we were successful:

and then finally, with a couple of triggers set to automatically build and deploy, I made a change to my Contacts table in my Azure SQL Dev DB and, a few minutes later, thanks to Azure DevOps and Redgate SQL Change Automation, the very same change appeared in Production, with no reliance on anything other than Azure SQL DB and SQL Change Automation:

Before: the DevOps process on Dev, ready for a migration to be generated
After: automatic post-build deployment of the new column to the Production Azure SQL Database

Conclusion

If you have all of your databases in Azure SQL Database**, fear not, because SQL Change Automation comes to the rescue! You can very easily set up and configure a pipeline in Azure DevOps, or indeed any pipeline of your choice, and it’s never been easier to persist development changes all the way through to Production in a low-risk, incremental, “DevOps” way!

—NOTES—

*An important word from the release notes: Note that it is still generally recommended to locate the shadow database locally where possible as that will usually result in a faster database connection. The default CreateDatabase.sql and DropDatabase.sql scripts can be altered to improve performance or implement custom provisioning logic.

**If you have all of your Databases in Azure and you need them masked for Dev/Test too, check out this previous blog post in which I outlined how to do that using Azure DevOps too!

Which database source control model works best for me?

“Destiny is not a matter of chance; it is a matter of choice. It is not a thing to be waited for, it is a thing to be achieved.”
William Jennings Bryan

For many people, figuring out how to get their development database into source control is the first step to a robust, repeatable, automated (and exciting) database DevOps pipeline. This, coupled with exactly which technology (Azure DevOps, GitHub, GitLab, BitBucket… the list goes on) you’ll be using for Source Control (and later CI/CD), can make it quite overwhelming.


Now fortunately I’ve worked with a number of teams on setting up source control methodologies and some work better than others depending on how you want to work. Remember:

“DevOps is the union of people, process, and products to enable continuous delivery of value to our end users.”
– Donovan Brown

And the key there is the equal parts: whilst technology has a part to play, it comes down to the teams – nurturing and feeding a positive mindset of collaboration and communication within the team, and then defining which methodologies and processes work best for you.

Once you’ve got that down, pick the source control methodology that works best for you, and luckily there are 4 choices:

  • State-First
  • Migrations-First
  • Hybrid- or Optimized-Model
  • Other

Ok… maybe I lied about the 4 choices, because “Other” can encapsulate many, many different options in itself. But what I’m going to talk about below are the 3 primary options I see development teams adopt, and how they fit into your team’s culture.

If you’re already tired of reading then you’re in luck! I also talked about this same topic at Redgate Streamed on 28th May 2020, so if you follow that link you can “register” to watch on demand – I will tell you all of this in your own ears! (As opposed to you reading below.) I won’t tell you to just watch my session, though, because you should DEFINITELY check out the sessions also given by Kendra, Grant, Ben and Frank, which were… well:*

Lets Go Yes GIF by Music Choice
*(SOOOOOO Good!)

State-First Approach

The state-first approach is, as the name would suggest, one where the state of each object within the database – i.e. the script needed to CREATE that object – is captured by whatever tool you use and written out into its own flat file (most often a .sql file) in version control. The actual structure of these files and folders can vary by technology, but largely it will follow a logical structure, and the bottom line will be a create script per object.

When an update is made to that object, a newer version of that same create script is generated and added in version control, and that becomes the latest version of the database, which we can then deploy. When using state-first we have no alters, only creates, so it will be necessary to do a comparison at a later stage to work out the difference, and by extension the update/alter script that will be needed to propagate changes to later-stage environments.

The benefits of the State-First method include (but are not limited to):

  • A simple approach to get started with standardizing development practices: It’s aligned with the practices we already have in place on the application development side, where source control has been standard practice for years.
  • Easy on-boarding for teams in the ‘shared development model’: When every developer is forced to share a single development environment, it can be quite hard to keep work separate; most tools that enable you to work in this model allow you to ‘lock’ objects at the database level as you work on them, or to see exactly who made each change that is committed.
  • Easier to roll-back to previous state: Rollbacks are a pain with databases, but there are times where they are necessary. Maintaining a full history of the state at any given time makes it easier for us to compare and rollback environments to a state that we know worked well.

The drawbacks though of the State-First method include (but are not limited to):

  • Not as easy to achieve small, incremental deployments: Because we’re reliant on the state of the database at each stage there is still a certain element of overhead that is attached to each deployment.
  • Upgrade script determined at a later stage: Lots of people like to know EXACTLY what changes will be deployed and HOW against target environments, but because of the above we’re reliant on approving changes early on while only truly seeing how they will be deployed later in the pipeline, which doesn’t give us the same reliability or peace of mind.
  • Not as easy to refactor complex table changes: The State-First method is “How did it look to begin with and how did it look at the end?”, so it doesn’t take into account the nuanced steps that may have been involved. This can be problematic when, say, you’re adding a NOT NULL column to a table that has existing data, so these sorts of complex changes might require additional pre- and post-deployment scripts.

Migrations-First Approach

The Migrations-First approach differs significantly from the State-First approach because, as the name would suggest, it relies on migrations to identify the version of the schema across environments. These usually rely on GUIDs, numbering conventions, checksums and the like to keep track of the schema, normally within a log table of their own on the affected database itself. The migrations often come in the form of .sql files that have been written or generated, and there are lots of different types, but they can often be boiled down to the idea of Repeatable, Versioned and Undo migrations (see here on the Flyway site for a more in-depth summary of these types).

The migrations, then, actually contain the changes as you would like them to go out; many believe that (after testing) the script they have written is exactly as it should be deployed, and that is exactly what is run against each stage. Now naturally, if you have an existing database you need to build ON something, so many technologies will offer some sort of baselining option to understand what already exists and what the incremental migration scripts are deploying on top of.

The benefits of the Migrations-First method include (but are not limited to):

  • Enables small, frequent, incremental migrations and predictable deployments: Everything is just that tiny piece of work you did, specifically. That means that only what you need to go out will go out; only what was approved at Pull Request time. This gives us high confidence that we’re sending the right changes to Prod.
  • Ideal for environments with high up-time requirements: There’s no heavy state to check, we’re just migrating these tiny changes, which means there’s far less chance of causing huge overhead on Production at deployment time.
  • Ability to use your own custom standards and code for table changes in deployments: Writing your own scripts – or being able to edit generated ones – is one of the greatest capabilities of this model. For complex changes, the steps we KNOW work are included, and not only that, the scripts are commented and formatted to our company standard, making it easy to keep track of what has been deployed.

The drawbacks though of the Migrations-First method include (but are not limited to):

  • Not as easy to pick and choose changes to be deployed: If a developer has captured multiple changes within the same script, but we only want to deploy a subset of those changes (or we don’t want to deploy to a subset of those objects right now), then it’s really hard (almost impossible) to unpick them. This also makes testing certain changes in isolation tough!
  • Higher learning curve for teams: This method is neither as easy to adopt nor as intuitive as the State-First approach, which means developers need to get used to writing their migration scripts and ensuring they’re properly formatted, commented, tested, numbered and, where necessary, paired with an undo script. This is a much higher ask for the team; it is the cost of gaining the predictability of deployments.
  • Harder to roll back changes: On those very same undo scripts: they have to be absolutely perfect. It’s still much harder to undo, especially if we’re trying to undo migration 5.0.1 when we’re already on 6.1.2 – everything has to cascade neatly if you’re carrying out multiple undos, and a watertight undo strategy is hard to nail down.

The Hybrid / Optimized Model

This particular model is a rare one to find because it is not offered widely, but where it is achievable it can offer the benefits of both the State- and Migrations-First models.

As the name would suggest, it is a combination of the state and migrations approaches into a single Hybrid model; developers store the state of their database in source control, allowing them to easily rework their changes and commit multiple times to their working branch as they develop the “end goal”, and then from this same location once those changes are confirmed, pushed and ready to go, the relevant migrations are generated from the latest state.

Now this model can be adapted into lots of different workflows: developers can all generate their own migrations from their state and check them in together when they’re happy. This records a granular history of each change that was made and how it applies to each object, and is easy to work with, and then the migration contains just what needs to go out from all of that work. Another option would be having developers make the changes and check these into a DB State folder in source control, and then having more experienced developers or DBAs etc. generate the respective migrations from the state, knowing that they have a greater confidence in the SQL specific changes that are captured in the script. This is nice because it gives cause for another pair of eyes, which again gives greater confidence in what ultimately gets deployed.

The benefits of the Hybrid / Optimized method include (but are not limited to):

  • Full granular history around object changes on a state level, but with customization, flexibility and reliability of migration scripts: Know exactly what has changed, when and by whom, but don’t worry that you don’t know exactly what change will be deployed.
  • Separation of duties for Developers and Senior Team Leads / DBAs (who generates what / who has what specialty) and a lower learning curve for developers: Easy for developers to make changes quickly and easily without having to worry about the “nitty-gritty” and exactly what SQL will be needed. Gives DBAs and senior developers peace of mind that changes are ultimately adopted and improved by people who _know_ the database.
  • Easily extends existing state-first model where migrations are needed: State-First is a great choice 70% of the time but there ARE times where data migrations or complex changes are needed. This method includes these changes where needed, instead of relying on pre- and post-migration scripts, which run globally every single time.
  • Easier to pick and choose changes to go out: Because we can choose which changes to which objects are going out in the migration scripts, it’s easier for us to grab only the ones we want to push out each time, like an additional “cherry pick” layer within the development process.

The drawbacks though of the Hybrid / Optimized method include (but are not limited to):

  • Additional step added to the process can make it feel like red-tape / added work: In some cases teams may wish to make changes and get them out _fast_ as part of continuous deployment, and could be doing so hundreds of times per day. This model can get in the way of that because it adds an additional layer of dependency.
  • Could add some time to the overall development process for new changes: This is almost exactly the same as the above reason. More steps to include, more people to include, slightly less automation than we would like _perhaps_ so naturally time to deploy increases slightly (but arguably is offset by greater confidence in the change? I’ll let you decide!)
  • Duplicated schema model in the source control repository: Some tools keep a copy of the schema in source control as a reference for the migrations; others don’t. In either case, you’re maintaining two versions of a repository that many say should be the single source of truth, and if these two are even slightly out of sync, which are we going to believe? This model calls for discipline, as sloppiness can destroy all of the proposed benefits.

aaaaaaaaaaaaaaaaaand… breathe!


Conclusion

There are lots of different models you can adopt for source controlling your database and changes. In this post I’ve outlined 3 (well… 2 and a half, really), but whatever you’re looking to adopt, hopefully this will give you greater confidence in choosing the right one.

Have a wonderful week!

Azure DevOps Masking a.k.a “point, no click”

“[My] kids haven’t responded to my GDPR requests so I don’t think I’m legally allowed to tell them when dinner’s on the table.”
@mrdaveturner

Ah, masking. You would have thought I’d be sick of it by now, no? No – fortunately, now more so than ever, I find myself answering question after question and tackling use-case after use-case. So when I was asked this week:

“Chris, is there a way for us to call Data Masker for SQL Server directly from Azure DevOps?”

I thought to myself, well, that sounds easy enough… and it was! I know what you’re thinking: c’mon Chris, surely there is more to it? But no, it’s actually pretty straightforward!

I pointed them at the PowerShell module and cmdlets for SQL Provision and the Azure DevOps plugin to automate all of their Provisioning and Masking process, thinking all the while “pffft, they could have made this harder!” and then…

“No sorry Chris, is there a way for us to call JUST Data Masker for SQL Server directly from Azure DevOps?”

Ah! Now that’s an interesting one!

#1 Figure out where you want Data Masking to run in your process

This empty Azure deployment stage looks good enough for now! If you wanted to chain other processes either side of it, that’s cool too! Maybe you have your own provisioning process in place and you want to point Data Masker at it to sanitize it? Makes sense to me! For now I’m going to stick with a single agent job for simplicity.

#2 Figure out what is actually going to run Data Masker

Data Masker is a client install and as such will need to be installed on a *gasp* actual machine!

No but seriously, any server you have lying around, physical or VM, will do the trick as long as it meets these requirements. This server/VM will need to have an Azure DevOps agent on it already, which of course makes it the ideal candidate for being the “thing” that calls Data Masker – this could also be the Staging/Non-Functional/Pre-Prod environment, so you could copy down PROD and then immediately invoke masking.

#3 Call the command line from Azure DevOps

In your pipeline steps you can specify the calling of an executable on the machine where the agent resides. Fortunately Data Masker has a wonderful command line available that you can call, you can read all about it here: https://documentation.red-gate.com/dms/data-masker-help/general-topics/about-command-line-automation

The PARFILE you could of course dynamically replace with variables, so that it only calls the relevant parameter file for that particular database – a nice benefit!

My PARFILE just simply looked like this:

It was calling a local Data Masker masking set, “AzureFun”. The thing to bear in mind is that Data Masker will run with the Windows credentials of the account the Azure DevOps agent runs as, unless you specify otherwise. In this case, because the Azure DevOps agent has the correct permissions to update the databases on this instance anyway, I’m fine to use Windows Authentication:
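The pipeline step itself then boils down to something like the sketch below – the install path is just wherever Data Masker lives on the agent machine, and $(TargetDatabase) is an example pipeline variable, both assumptions for illustration:

```powershell
# Call Data Masker's command line from the agent machine. The install path and
# masking set folder are illustrative; $(TargetDatabase) is an example pipeline
# variable that Azure DevOps substitutes before the script runs.
$dataMasker = 'C:\Program Files\Red Gate\Data Masker for SQL Server 7\DataMasker.exe'
$parFile    = 'C:\MaskingSets\$(TargetDatabase).parfile.txt'

& $dataMasker PARFILE=$parFile
if ($LASTEXITCODE -ne 0) { throw "Data Masker exited with code $LASTEXITCODE" }
```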

Conclusion

It’s very easy to call the command line of Data Masker for SQL Server directly from Azure DevOps. Does this same approach work from other CI/CD tools? If they can call executables on the target server, then absolutely! So it’s very easily included in the process – you just have to think about where Data Masker is installed and what credentials you’re using for it!

Bonus Point – what about if it’s all Azure SQL Database?

You had to do it didn’t you, you had to say it!

“But Chris, now we know we can call this all from Azure DevOps, what if we wanted to mask and copy Azure SQL Databases into Dev/Test etc.?”

Well, actually, the good thing is that it’s also pretty similar! When you’re connecting Data Masker to an Azure SQL DB, you only need to specify this in the connections in the controller. Authentication will likely have to be SQL Auth at this point, you need to be in Cloud mode, and I’d recommend setting the connection timeout to 10s rather than the standard 5s, but it can still be called as normal from the PARFILE:

So the Data Masker element is reasonably straight forward – that’s the good news. But the thing you REALLY need to stop and think about is:

Where are our Dev and Test copies going to BE?

Option #1: If they’re going to be on VMs or local dev and test servers / developer machines, then you could follow a similar approach to the one I laid out in this blog post for Redgate, in which you create a BACPAC file and split it out on premise before importing it and then provisioning from there. You could use this code in my GitHub to achieve something very similar. Caveat: I am no PowerShell guru – who do you think I am? Rob Sewell? Chrissy LeMaire? No, sadly not. So feel free to build your own logic around my code, have at it, I don’t mind! ^_^

Option #2: Keeping everything in Azure. You can copy databases around in Azure and it seems to work pretty well! So I wrote this PowerShell (also in my GitHub for y’all) to effectively copy a PROD DB into the same resource group, mask it and then copy it across to a Dev/Test resource group, dropping the temp copy so as not to incur lots of extra Azure costs. (This is just one of the methods I’ve seen people use – again, it’s up to you! And see the caveat in Option #1 above for my statement on PowerShell.) The good thing is, you can simply use ‘&’ from PowerShell to call Data Masker’s command line, as sketched below.
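As a minimal sketch of that copy – mask – move pattern using the Az.Sql cmdlets (all resource names here are illustrative, and the Data Masker call assumes the same command line as earlier, with a PARFILE pointed at the temporary copy):

```powershell
# Copy -> mask -> move, per the description above. All names are illustrative.
Connect-AzAccount

# 1) Copy PROD into the same resource group as a temporary masking target
New-AzSqlDatabaseCopy -ResourceGroupName 'Prod-RG' -ServerName 'prod-sql' `
    -DatabaseName 'CustomerDB' -CopyResourceGroupName 'Prod-RG' `
    -CopyServerName 'prod-sql' -CopyDatabaseName 'CustomerDB_MaskingTemp'

# 2) Mask the temporary copy (PARFILE pointed at CustomerDB_MaskingTemp)
& 'C:\Program Files\Red Gate\Data Masker for SQL Server 7\DataMasker.exe' PARFILE='C:\MaskingSets\MaskTemp.parfile.txt'

# 3) Copy the masked database across to the Dev/Test resource group
New-AzSqlDatabaseCopy -ResourceGroupName 'Prod-RG' -ServerName 'prod-sql' `
    -DatabaseName 'CustomerDB_MaskingTemp' -CopyResourceGroupName 'DevTest-RG' `
    -CopyServerName 'devtest-sql' -CopyDatabaseName 'CustomerDB_Dev'

# 4) Drop the temporary copy so it doesn't rack up extra Azure costs
Remove-AzSqlDatabase -ResourceGroupName 'Prod-RG' -ServerName 'prod-sql' `
    -DatabaseName 'CustomerDB_MaskingTemp'
```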

Either of these options can be run from Azure DevOps also as part of your provisioning or working processes, but instead of including a call to the command line, you can run a fun PowerShell script instead:

Second Conclusion *sigh*

There are lots of ways to get what you need into Dev and Test, but these copies should be masked if they contain personal, identifying information. There are some methods above, but there are plenty of others out there on the internet, and if you’re not sure about getting started with data masking, try my post here – happy masking!

Reusable Schema Deployments with SQL Source Control

“That’s who you really like. The people you can think out loud in front of.”
John Green

Can we all take a moment to appreciate how awesome Kendra Little is? No really, go on over to her Twitter or something and remind her. Because not only is she a genius but she brings out the best in a lot of folk, and I don’t think she gets enough credit – so my quote above today is for her!

This is one of those times, though, where we stumbled on an idea and together we fleshed it out, and thanks to her ingenuity and straight-up desire to help we ended up with a full-on video about it! Thank you Kendra! So if you don’t want to sit here and read about Reusable Schema Deployments, take a look at the video below instead, where we cover everything from the key differences between State and Migrations, to what a filter file is, to how to use YAML in Azure DevOps!

For those of you who prefer a nice read, grab a coffee or tea and a biscuit (cookie) and read on!

The Problem

We find ourselves in the unenviable situation where we have a Production database that is delivered to customers to support different applications that we provide to them. When a new customer comes on board and chooses any number of our products, we then deliver them along with a copy of the database containing a set of objects that are specific to their setup.

Example: We produce 3 applications for our customers: Beep, Derp and Doink. In the database that we supply with these applications, we have corresponding schemas ‘Beep’, ‘Derp’ and ‘Doink’, as well as ‘dbo’, which holds a number of objects common across all instances of the database.

The question then is: “How do we deploy only the combinations of these schemas to the relevant customers AND make sure there is as little down time as possible?”

I mean, besides magic, of course!


Solution 1: Less Work, More Schema

There’s a reason why, when you buy a new ERP or CRM system for your company, many times you will receive ALL of the schema! Just bought in a new system to help manage your General Ledger, Accounts Payable and Accounts Receivable, and those are the only modules you’re going to use?

Whooooo-boy-howdy you better believe you’re going to get schema objects for Asset Management, Billing and Risk Management too!

The reason for this is that it is much easier to deliver. It is a single package that needs to be deployed everywhere and if the customer already has the relevant objects in the database then it is MUCH easier to just turn on corresponding application functionality that starts to populate and use those objects!

The problem is, if EVERY customer gets EVERY object change across EVERY schema… well, then it’d be a lot of changes, and potentially some quite big changes, that could impact our customers, perhaps unnecessarily. This could easily be an argument for the migrations approach to source controlling and deploying changes, but that’s one for another day!

Solution 2: You get a filter, and you get a filter, EVERYBODY gets filters!


In the state-based way of deploying we can use filter files (.scpf), which allow us to filter at the point of creating a database release artifact. This is a game changer because it means we can have the convenience of developing against a single source database in Dev and source controlling ALL object changes together, and only once we actually get to the point of deploying to the customer do we include the filter file in the Create Release Artifact step, to include ONLY the schema objects that are relevant to them (a sketch of this step follows below).
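To give a flavour of that step, here’s a sketch using SQL Change Automation’s PowerShell cmdlets – the connection details and filter file name are illustrative, and I’m assuming a version that supports the FilterPath parameter on New-DatabaseReleaseArtifact:

```powershell
# Create a release artifact containing ONLY the schemas this customer needs.
# Connection details and the .scpf file name are illustrative.
$source = New-DatabaseConnection -ServerInstance 'PreProdServer' -Database 'OurProductDB'
$target = New-DatabaseConnection -ServerInstance 'CustomerServer' -Database 'Customer_BeepDB'

# BeepOnly.scpf keeps the Beep and dbo schemas and filters everything else out
$release = New-DatabaseReleaseArtifact -Source $source -Target $target -FilterPath 'BeepOnly.scpf'
Export-DatabaseReleaseArtifact $release -Path 'C:\Releases\Beep' -Format Folder
```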

Now this is also a great way of doing it, because it means that everything in source control is still our single source of truth, and we’re able to validate that everything builds together, run unit tests broadly against all objects, and so on. However, it does also mean that we have to maintain separate filter files, either for every customer or for every combination of schemas that a customer could receive, and update them as and when people add or remove certain applications. It also doesn’t give us any validation that THIS particular release artifact created for that customer actually works independently of the rest of the schema objects, so we’re deploying something that hasn’t actually been tested in isolation first!

Finally, the other problem with this approach is that it is SLOOOOOOW. Like, super slow. The heaviest part of the state-based database deployment process is the creation of the release artifact – determining what changes should be included in the release that is going out of the door – and this is being carried out, and adding overhead, independently for every. single. customer. Not fun.

Solution 3: Reduce, Reuse, Recycle.

Let’s take a step back from this problem and look at exactly what we’re trying to do: we want to deliver ONLY the necessary changes to a particular schema in the database that supports that specific application.

But this means that there is a commonality across customers. If, for example, we assume that we have 30 customers with a variation of the Beep schema (I’m going to ignore dbo for now because everybody has that), they may also have Derp, or Doink, or no other schemas, but the point is: all 30 of those customers will require the exact same updates to their Beep schema, full stop.

This means, if we can generate a single artifact once, that can be used for all 30 customers receiving this schema, or indeed ANY schema, then we can:

a) Reduce the amount of comparisons taking place to create release artifacts
b) Reuse and Recycle the same release artifacts for multiple customers
c) TEST these artifacts before they actually go out the door!

This is effectively achieved by adding an additional layer on top of the development and deployment process:

An additional step is introduced to produce a single reusable artifact prior to the final Prod deployment. Pre-Prod all receives the same package, which contains every object regardless of schema; however, when a Production release needs to go out, the release artifact is built against the Beep database (in this case, one which only has the Beep and dbo schemas), so the pain of creating the artifact actually sits outside of the customer environment AND is incurred only once, allowing us to distribute that change to any customer who may require it to upgrade their Beep schema.

The same is done for each schema in turn, which means we then deploy the fast, reusable artifacts, and the only process change required is the step immediately before deploying to Production – almost as if the independent databases are an exact mirror. A rough sketch of how this might look in PowerShell follows below.
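Here’s that fan-out sketched out, assuming SQL Change Automation’s PowerShell cmdlets and illustrative server, database and package names – one comparison per schema mirror, then the same artifact reused for every subscribing customer:

```powershell
# One comparison per schema mirror, then the same artifact reused for every
# customer subscribed to that schema. All names here are illustrative.
$buildArtifact = Import-DatabaseBuildArtifact 'C:\Builds\OurProductDB.1.0.123.nupkg'

$customersBySchema = @{
    'Beep' = @('Cust01_BeepDB', 'Cust02_BeepDB')
    'Derp' = @('Cust03_DerpDB')
}

foreach ($schema in $customersBySchema.Keys) {
    # The mirror DB holds only this schema (plus dbo), so the expensive
    # comparison happens once, outside any customer environment
    $mirror  = New-DatabaseConnection -ServerInstance 'MirrorServer' -Database "${schema}MirrorDB"
    $release = New-DatabaseReleaseArtifact -Source $buildArtifact -Target $mirror

    foreach ($customerDb in $customersBySchema[$schema]) {
        $target = New-DatabaseConnection -ServerInstance 'CustomerServer' -Database $customerDb
        # Fast to apply: the update script was generated and tested up front
        Use-DatabaseReleaseArtifact $release -DeployTo $target
    }
}
```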

Don’t get me wrong there are challenges with this model as well.

What if we want to deploy a completely new database with a set number of schemas? Well. You may have to do an ad-hoc deployment or add an additional process to the pile which does that create for you!

What if we create versions 10.4.1 and 10.4.2, but these only make it up to one of these Pre-Prod mirrors, and then we want to push 10.4.3 to customer Production environments? We will no longer receive the artifacts from .1 and .2, only .3! Here you will have to create the specific filtered artifacts ONLY just prior to deploying to Production, so that EVERY change is captured. In the video above I used a Golden DB, which had every deployment on it, and used this to test my schema-specific deployments prior to deployment, but it depends on what setup you want to adopt.

Conclusion

Filters are incredibly powerful, and if you have subtle differences in hosted environments across your customers, or even across your own DBs, they can be an ingenious way of keeping all core objects AND key variations within Dev, version control and Pre-Production, whilst making sure that each target ONLY receives what it needs.

But be aware: subtle variations can snowball, and you do have to be careful how you handle them, as they are not very easy to scale to multiple customers. Fortunately, in this scenario 1 schema mapped to 1 application being delivered, which makes it easy to determine who gets what, but the more differences you have, the harder they will be to continuously integrate and deliver.

And FINALLY (that’s right I’m going to stop talking), if you want to read more about this model from Kendra’s perspective, she also wrote about it! You can read her account here: https://littlekendra.com/2020/04/08/make-database-code-reusable-in-sql-source-control-with-deployment-filters/

Have a great week, stay safe and stay well!

Cloning From Home… a consideration.

“I know there’s a proverb which says, ‘To err is human,’ but a human error is nothing to what a computer can do if it tries.”
– Agatha Christie

Working from home is fine because we still have the right connections in place: televisions to watch only the most stimulating and educational shows, telephones to get instantly in touch with those we hold most dear, and crucially the internet, to do everything we can to be as (and sometimes more) productive at work as possible.

But it’s happening more and more now. We have to work from home and this is starting to turn up some problems.

And no. I don’t just mean that my wife is now acutely and accurately aware of how annoying I am.


I’m talking about clones. Specifically SQL Clone, and clones do not work great from home.

Now for anyone out there who has never heard of SQL Clone, where have you been?? It’s an incredibly intuitive, reliable tool for rapid provisioning of database copies (*cough* definitely not just lifted from the website) – in any case, it’s pretty darn cool.

One of the coolest things about this technology, though, is how seamlessly it plugs into the Microsoft ecosystem by leveraging the VHD(X) technology available in x64 Windows. This means there are no special file systems and no hard- or software “appliances” – it’s very much plug and play (with a little tinkering around ports 14145 and 14146, of course!)

Naturally, though, it does come with its challenges, as does all technology, and these have been highlighted recently more so than ever.

The Problem

Whilst SQL Clone is a beautiful, elegant and easily automated answer to the database provisioning problem, there are 2 gotchas that you must be aware of when you start using the solution. SQL Clone relies on the relationship between the image file (the centralized, virtualized parent, as it were, that exists on a Windows fileshare) and the clones themselves (the ~40MB diff disks on the hosts).

Now many people choose to put clones onto their workstations, which for many of us means our laptops. We have SQL Server Developer installed and we pull down a few clones for different branches, for testing and so on, and all is well with the world.

When you’re in the office that is.

When a user queries or works with a SQL Clone and it requires any of the schema/data that was in the original copy and is not in the changes the user has made, a call is made back to the image file (the VHD mounted copy) to fetch it. When you’re in an office setting (with your cup of coffee in one hand and a colleague sat next to the other telling you about their weekend) this is fine because you’re connected directly to the company network and therefore the link between the clone on your laptop and the image file on the fileshare is short, strong and stable.

At home though this isn’t the case. Many of us work on VPNs that are “ok” on internet connections that can only be described as temperamental. So what happens when a clone tries to call back to the image file across this VPN, which is now much further away, across a sea of uncertainty and poor connection?

Bad things happen. Either the clone cannot connect to the image and decides it no longer knows what it is, falling into Recovery Pending for a while until the connection is re-established, or, if the connection is present, it is just painfully slow.


This isn’t a fault of the tool, I hasten to add, it is just the nature of the technology. Much as we would expect our RDP sessions to be a little laggy if we were based in Greenland and the server was in Australia, it is just part of life.

So… are there solutions? You bet there are!

Option 1: The “jump box”

Many people I have worked with have found that they are still able to leverage SQL Clone whilst working from home by reducing the physical distance between the Clone and the Image, and have done so by introducing a Dev/Test “jump box”.

The way this works is by having an instance of SQL Server available within the company network onto which the Clones are provisioned.

This works great because it means that the link between the Clone and the Image is once again short and strong and stable, relying on the company network, but you can easily connect to or RDP onto this jump box if you need to work with them. Still using your VPN and internet from home? Check! Able to work with a Clone now though? Check!
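And because SQL Clone is fully scriptable, getting a clone onto that jump box can be automated with its PowerShell cmdlets – a minimal sketch (the clone server URL, image name and instance details are illustrative):

```powershell
# Provision a clone onto the jump box instead of a home laptop. The clone
# server URL, image name and instance details are illustrative.
Connect-SqlClone -ServerUrl 'http://sqlclone-server:14145'

$image    = Get-SqlCloneImage -Name 'Production_20200406'
$instance = Get-SqlCloneSqlServerInstance -MachineName 'DEVTEST-JUMPBOX' -InstanceName 'SQL2019'

$image | New-SqlClone -Name 'Production_Chris_Dev' -Location $instance |
    Wait-SqlCloneOperation
```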

Option 2: The Cloud

Welcome… to the world of tomorrow!


By which, of course, I mean the world that has been available to us for a while: Infrastructure as a Service (IaaS) allows us to very easily spin up an additional dev/test server which can be used in the interim. Whilst this does incur a little extra cost in infrastructure, it still means that you won’t need as much space on the VMs themselves – thank you, SQL Clone!

As long as you have a fileshare available in, say, Azure or AWS, and a Windows VM, you’re pretty much good to go, and once you’ve got clones and images up in the Cloud you’re reliant on the networks of the providers – and I’ve got to say, they’re not too shabby at all!

Sneaky side note: ALL demos I give of SQL Clone actually run on an EC2 VM with a 91GB Image on a fileshare and it works great!

Option 3: Roll-your-own solution

Ok. I realize this one is not really a solution. But the thing we’ve highlighted here is that it’s all about the distance between the Clone and the Image. Those two love-birds cannot exist long distance.

So if you think you might have another ingeniously simple way of solving this, I would recommend having a read of this magnificent document written by an incredibly clever ex-colleague of mine, plus the “How It Works” documentation, and then letting me know what YOU come up with:

Links: Best Practices for SQL Provision and How It Works

Conclusion

There are a few ways of getting around this problem, but they’re not always obvious or clear, and at this trying time you just want the solutions you have in place to work. We all do.

So if you are working with any of Redgate’s tools, or even if you just have a question that I or a colleague might be able to help with – please reach out to us, talking to us won’t cost you anything, but it sure as heck might gain you something!

Stay safe, stay well and have a great week!

Me on Twitter: @PlantBasedSQL
Redgate on twitter: @Redgate
Redgate Support Team: support@red-gate.com

5 Awesome family-friendly vegan lock-down recipes

“One cannot think well, love well, sleep well, if one has not dined well.”
Virginia Woolf

Working from home these past couple weeks has been, well… different.

At this point we’ve gotten used to working next to one another at the dining room table (HUGE shout out to my wife for putting up with me), we’ve blasted through a lot of movies across Disney+, Netflix and Amazon Prime (we’re well stocked) but most importantly… we’ve cooked.

Now this blog was never intended to be a food-y, cook-y recipe blog by any means – but stay tuned, because there is something much more exciting in the works in that arena!!!

Excited Minions GIF

However there have been some recipes that have just been an absolute blessing to have in these times. The ingredients for them are reasonably simple, can be substituted fairly easily where necessary and they are thoroughly nourishing (and normally last us a good period of time too!)

1. Making Thyme For Health – Chickpea Frittata

I really enjoy this frittata, we use the base recipe for the “egg” replacement mixture and it’s delicious – a variation we make in our house though involves cooking off 2 sweet potatoes in salt, pepper, oil and some smoked paprika, then adding red bell pepper, and Violife Vegan ‘Feta’, and instead of a skillet, using a baking dish. Honestly discovering this recipe was one of the best finds I ever made – yum yum yum!

Good for: Saturday or Sunday mornings as it is easily thrown together, and the recipe is easily doubled so you can save the cold leftovers (which are also great) for a quick lunch/brunch later in the week

2. The Happy Pear – (Oil Free) Vegan Enchiladas

This one is self-explanatory. It’s wonderful, warming, quick, easy and delicious and if you don’t already follow the Happy Pear where have you been??? Check this one out because we are already looking forward to making it again. We didn’t have sweetcorn so we subbed in Kidney Beans and we added nutritional yeast to the cashew ‘cheese’ for an extra cheesy recipe 😉

Good for: A very quick and healthy midweek dinner, fed 3 but could easily feed 4 or even 5!

3. Amuse Your Bouche – Vegan Slow Cooker Tagine

This combination of Sweet Potato and Chickpeas should be illegal, it tastes JUST SO DARN GOOD. I have nothing else to add other than dig out your slow cooker and get going!

Good for: Any weekday dinner – the amount made will easily serve 4 over two nights if served with rice or couscous, etc. It’s especially handy because you prep it and leave it in the slow cooker whilst you work. Minimum hassle.

4. Buddha Bowls! (Examples here)

Ok this one is a cop out. Buddha bowls are by far the easiest thing to make on this list because you basically throw a bunch of things in a bowl and it’s nourishing, delicious and a bit of fun! The basic “formula” for a Buddha bowl though can be seen here on Eating Well.

Good for: Fast, healthy meals that can be changed up or perfected to your taste. There are so many permutations that there’s no real limit to what you can have. Really great also when you have young-ish kids who want to help and can pick and assist with the cooking of the various ingredients!

5. The Busy Baker – Oreo No Bake Chocolate Mousse Cheesecake

So far this list has missed off a treat that you can make easily – everything else just seems a bit… healthy. Well this is still kinda healthy but comes with a beautiful, rich and very easy to devour chocolate filling that involves very few ingredients. All you need is a fridge and a short list of ingredients to create a deliciously indulgent vegan dessert.

Good for: Treats, birthdays, celebrations even under lock-down, anniversaries, romance or simply because it’s Tuesday. This dessert can be made by pretty much anyone on the go, and you can make small versions in little pots too if you want to spread the goodness out, or keep little ones busy!

So, these are my top 5 lock-down / isolation recipes that are easy to make, easy to have fun with and VERY easy to eat! What’re your go-to recipes?

Stay safe, stay healthy and stay happy!

Status: Working from home

“What you stay focused on will grow.”
Roy T. Bennett

As of Monday 16th March 2020 I have been a remote worker.

Ever since I left University I have worked in busy, bustling offices where the air runs thick with collaboration, questions and social commentary.

But now… I’m at home. I guess you could say I’m not quite sure how to come to terms with this. It certainly is strange knowing that my commute in the morning has gone from 50 minutes on the bus to 50 seconds from bedroom to dining room (via the kitchen for coffee). It’s even stranger though that I don’t get to see colleagues – friends I’ve worked with for years, whose smiling faces, cheerful demeanor and indomitable spirits have been huge contributing factors to my desire to tackle the working day and make as much of it as I possibly can.

In the past, remote working simply meant the odd day worked here and there from home, with nothing but my laptop, on quieter days when I could do some learning alongside some other life task like going to the doctor or dropping the car off to be serviced – and therein lies the problem.

I had come to associate working from home with quieter periods of time, when I could focus on 1 thing at a time, maybe work from the sofa and treat myself to the occasional snack.

As of Monday however, the entire office has been locked down and we are _all_ working from home… for how long? I don’t think anybody knows. The only certainty in the world right now is that nothing is certain – so it is time to put into practice something that I always preach, but rarely am pushed to exercise. I will have to change my mindset.

I’m not alone in this at all – you may be reading this thinking “but Chris, working from home is so easy, I actually get more done and I’m more focused!” and if that is the case I applaud you, and I wish I could share those same abilities. However if you’re like me and suddenly working from home by directive has jarred you, here are my top 3 tips for working from home that have helped me get to grips with it and maintain my productivity:

1- Find a routine

This is perhaps the most important on my personal list because in the past I have found myself getting out of bed at 7.30/8am on days when I worked from home. This is ~2 hours after I would normally get up when commuting into the office, and whilst this sort of lie-in is great at the weekend to get some rest, it also leads to me being groggy and not fully awake when the morning meetings/calls/work start, and it doesn’t help you focus or build a reasonable mental list of priorities.

Take aspects of your normal daily routine and replace them so that you build up a Monday-to-Friday (for example) that resembles the structure you enjoyed when working from the office. Here is an example of how I have changed my schedule to adapt to my new working situation: I would normally catch the bus to the office, which would take anywhere between 40 and 50 minutes first thing in the morning. Now, at exactly the same time in the morning, I go for a 30-40 minute brisk walk to simulate that commute – match something you used to do with something creative, compelling, healthy or otherwise doable from home instead.

The thing to bear in mind is that finishing work for the day should also factor into your plan. It is very, very easy when working from home to simply leave the laptop open, leave code running or join a late call. These things will rapidly eat into your personal life though, so once you hit your magic “clocking off” time… clock off.

2 – Create separation between work and home

Even though they are now one and the same, you have to keep a separation between your work and home lives. I’ve heard it described by many people that they have set up in the study or in the office – but what do you do when you live in a small flat? Or if you live with your parents who are _also_ working from home and you have to say, work in your bedroom?

My solution to this, as I have a similar problem, is to add certain touches that transform both myself and the space I’m working in, so that it feels as though there is a transition from home to work.

Once I have come back from my walk I relax and make a coffee and some toast or cereal whilst watching YouTube videos; then at 7.55am precisely, my Amazon Echo buzzes an alarm, I turn the television off and I begin “building” my work space. It’s a process which involves taking out the monitor and accompanying cables, setting them up on the dining room table, neatly arranging where everything should go, plugging in my phone and finally ensuring that I have a full bottle of water on standby to stay hydrated.

Conversely, at the end of the day I go through the same ritual in reverse, to the point where we are all able to sit around the dining room table and have dinner with no trace that I was ever there working. This act of transformation somehow makes the space seem different, and gives it a different energy, which means I’m not thinking about work problems whilst I eat with my family.

3 – Communicate, Communicate, Communicate

Ok. This one is a no-brainer, but there are subtle levels to communication that can really help when you’re feeling the weight of working from home hovering above you like so much foreboding.

Whenever you’re feeling the pinch of distraction or you’re lacking motivation, message a colleague you would normally speak to on a daily basis at the office and ask them how they are. It seems simple, but if you rotate who this person is and just check in on them, you make them feel thought about, cared about and appreciated. Not only does this give them the motivation to carry on, it is also a selfless act that can revamp your own spirits, and the resulting conversation can even inspire the thoughts you need to help with whatever you’re working on.

There are so many posts about working from home and engaging with others on a technical and personal level, such as Kendra’s post on SSC, Kathi’s post on SimpleTalk or this incredibly detailed and insightful post from Alice Goldfuss, that much of this will repeat what others have said. However, I think my personal top 3 on communication specifically are:

  1. Turn your webcam on – be seen and see other people and enjoy the smiles and the thinking and the body language you would otherwise be missing out on.
  2. React and be reacted to – if you use Teams or Slack or any kind of collaboration tool, make sure that when people make statements or ask questions, you either offer feedback or at least react. A thumbs up emoji at least lets people know that you are listening, allowing them to feel validated and supported and not think they’re just shouting into the void.
  3. Learn to use a mute button – we have all been in situations where someone has mouth-breathed the meeting to postponement, or someone has had a coughing fit, or their child(ren) have come in asking questions – all of these are perfectly fine because we’re humans… but. That is no excuse for everyone in the meeting, internal or external, to hear it. Liberally use the mute button on your meeting software or headset and make sure you are able to contribute meaningfully, but that when you’re not contributing, you’re also not hindering others from doing so.

Conclusion

Right now is a difficult time. Very much so – and I hope you’re all doing well and staying healthy and happy. Working from home makes up only a small portion of what everyone is feeling and struggling against at this point in time, so we should remember to be mindful and grateful that we are even part of companies able to support us working from home – the ability to do so at all is a blessing.

Remember to examine your mindset and find a way to turn each of the things you struggle with into positives: recognize them, appreciate them, ask yourself why you’re feeling that way, and then find a way to deal with it that makes you more productive and/or happier. Most companies have a wealth of people who are experienced in, or dedicated to helping with, working from home – don’t be afraid to ask for help if you need it and see if someone can help you tackle some of the problems you’re facing. Everyone is different and will need different things; just because you don’t feel comfortable working from home, for any reason, doesn’t mean you’re doing things wrong. We’re all learning together.

So stay safe, stay healthy and stay awesome and I’ll leave you with my favorite tweet so far on the matter:

P.S. We also just recorded a DBAle Special Episode on working from home, where all podcast participants were working from their respective homes. If you’re interested, you’ll be able to listen on Spotify, Apple Music and directly here: https://www.red-gate.com/hub/events/entrypage/dbale when it goes live in the next few days 🙂

Shall we begin? (With data classification)

“If I had an hour to solve a problem I’d spend 55 minutes thinking about the problem and 5 minutes thinking about solutions.”
Albert Einstein

So I just spent about 20 minutes trying to come up with a suitable title for this blog post, and then it struck me – one of my favorite movies of all time (which I will soon be ensuring my wife and I watch again) is Star Trek Into Darkness, featuring the magnificent Benedict Cumberbatch, in which he masterfully growls the phrase “shall we begin?”.

shall we begin benedict cumberbatch GIF

This sums up perfectly where people find themselves at the beginning of their classification activities. “Where do I start?” is usually the first question I get asked – regardless of whether you’re using an Excel sheet, Azure Data Catalog or Redgate SQL Data Catalog to carry out this process, you will find yourself in the same place, asking the same question.

Classifying your SQL Server tables and columns is not straightforward, I’ll say that up front – you have to be prepared for whatever may come your way – but give yourself a fighting chance! Whether you’re looking to better understand your data, protect it, or prepare your business to deal with things such as Subject Access Requests (SARs) and the Right to be Forgotten *cough* I’m looking at _you_ GDPR *cough* – or even just key development work in systems containing sensitive information – this is the ultimate starting point. As per my blog post on data masking here, you can’t protect what you don’t know you have.

This is my effort to give you the best possible start with your classification process, whether this feeds into a wider data lineage process, data retention or, of course, data masking. So… shall we begin?

Get a taxonomy set up

This is perhaps the most crucial part of your success. Even the best classification process in the world means very little if you’ve only given yourself, say, three possible ways of describing your data. The thing to bear in mind before getting started is that the data cataloging process is not specific to one job.

You may think at this point in time that you’re going to use it to highlight what you want to mask for Dev/Test environments, or maybe it’s your hit list for implementing TDE or column level encryption – but this _thing_ you’re building is going to be useful for everyone.

  • DBAs will be able to use this to help them prioritize systems they look after and being more proactive when it comes to security checks or updates, backups etc.
  • Developers will be able to use this to better understand the tables and environments they are working on, helping them contextualize their work and therefore engage and work with any other teams or individuals who may be affected or who may need to be involved.
  • Governance teams and auditors will be able to use this to better understand what information is held by the business, who is responsible for keeping it up to date and how it is classified and protected.

The list goes on.

So all of the above will need to be engaged in a first run to help actually describe the data you’re working with. What do you actually care about? What do you want to know about data at a first glance? Below is the standard taxonomy that comes out of the box with Redgate’s Data Catalog:

Some of my favorites are in here, and I would encourage you to include them as well! If nothing else, having Classification Scope as a category is an absolute must – but I’ll come to this soon. You can see though how being able to include tags such as who owns the data (and is therefore in charge of keeping it up to date), what regulation(s) it falls under and even what our treatment policy is in line with those regulations gives us so much more to go on. We can be sure we are appropriately building out our defensible position.

Having a robust Taxonomy will enable you to not only know more about your data but to easily communicate and collaborate with others on the data you hold and the structure of your tables.
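If it helps to see the shape of such a thing outside a screenshot, here is a rough sketch of a taxonomy jotted down as a simple structure – the categories and tags below are illustrative examples of my own, not the product defaults:

# Illustrative taxonomy sketch: categories and tags are examples only, not a shipped default
$taxonomy = @{
    'Classification Scope' = @('In Scope', 'Out of Scope', 'Unclassified')
    'Sensitivity'          = @('Public', 'General', 'Confidential', 'Highly Confidential')
    'Information Type'     = @('Contact Info', 'Financial', 'Health', 'Credentials')
    'Owner'                = @('HR', 'Finance', 'Customer Services')
    'Treatment Intent'     = @('Mask in non-production', 'Encrypt at rest', 'Static data only')
}

# Print the taxonomy as a quick reference
$taxonomy.GetEnumerator() | ForEach-Object { "{0}: {1}" -f $_.Key, ($_.Value -join ', ') }

However you record it, the point is the same: agree the categories and tags up front, so everyone describing columns is speaking the same language.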

Decide who is in charge

This seems like an odd one, but actually one of the most common questions I get is about who will be carrying out the classification process, and this is where the true nature of collaboration within a company is going to be absolutely critical.

Some people believe that a DBA or a couple of developers will suffice but, as you’ll see later on, this is not a simple process that only 1 or 2 people can handle by themselves. Be prepared to spend hours on this; the implementation of classification, by its nature, needs a team effort in the first instance.

You will need representation from people who know the database structure, people who know the function of the various tables, and people who know the business and how data should be protected – collaboration between Dev, DBAs, Testers, Governance and DevOps – plus someone central to coordinate the effort. Key representation from these teams makes it far easier to identify and collaborate on hot spots of data, so ensure you have this knowledge up front.

Get rid of what doesn’t matter

You may be surprised that the next step is technically an execution step, but it is an important point nonetheless and will absolutely help with the classification effort. This is where the Classification Scope category comes in, and this is why it’s my favorite.

One of the biggest problems people face when actually executing on their classification is the sheer enormity of the task. There is no “average” measure we can rely on unfortunately, but even small schemas can be far from insubstantial – in some recent work with a customer, just ONE of their database schemas had well in excess of 1,800 columns across dozens of tables. When you scale that up to potentially hundreds of databases, it rapidly becomes clear that going over every single column is going to be unmanageable.

To start then, the knowledge brought by the team mentioned above will be invaluable, because we’re going to need to “de-scope” everything that is not relevant to this process. It is very rare to find a company where more than 50% of the columns in a database contain PII/PHI, and even if you are one of those companies, this process can still help you.

There could be many reasons that something shouldn’t be included in this process. Perhaps it is an empty table that exists as part of a 3rd party database schema, such as in an ERP or CRM solution. It could be a purely system-specific table that holds static/reference data or gathers application-specific information. Regardless of what the table is, use the knowledge the team has to quickly identify these and then assign them all the necessary “Out of Scope” tag.
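If you want a quick, tool-agnostic way to find those empty tables before you start bulk-tagging, something like this minimal sketch does the job – it assumes the SqlServer PowerShell module is installed, and the server and database names are placeholders:

# Sketch: list tables with zero rows so they can be bulk-tagged as "Out of Scope"
Import-Module SqlServer   # assumes the SqlServer module is installed

$query = @"
SELECT s.name AS SchemaName, t.name AS TableName
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.partitions AS p ON p.object_id = t.object_id AND p.index_id IN (0, 1)
GROUP BY s.name, t.name
HAVING SUM(p.rows) = 0
ORDER BY SchemaName, TableName;
"@

Invoke-Sqlcmd -ServerInstance 'MyDevServer' -Database 'MyDatabase' -Query $query

Tables returned by a query like this are prime candidates for that “Out of Scope” tag.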

This will not only significantly reduce the number of columns you’re going to need to process, but will give you greater focus on what does need to be processed. One of the greatest quotes I’ve heard about this process comes from @DataMacas (a full-on genius, a wonderful person and someone I have aspired to learn as much from as possible over the years), who referred to it as “moving from a battleships style approach to one more akin to minesweeper” – which is just so incredibly accurate.

In my example database below, with only 150-odd columns, using the “Empty Tables” filter and then filtering down to the system tables I know about, I was able to de-scope just under half of the database, just as a starting point:

Figure out patterns and speed up

Many of the people carrying out this process will already have some anecdotal knowledge of the database as I’ve already mentioned, but now it’s time to turn this from what _isn’t_ important, to what is.

The fastest way to do this is to build up some examples of column naming conventions you already have in place across multiple databases – there will likely be columns with names that contain things like Name, Email or SSN in some format. Both Redgate’s SQL Data Catalog and Microsoft’s Azure Data Catalog ship with out-of-the-box suggestions that look at your column names and propose what might be sensitive, for you to check and accept as classification tags.

Now these suggestions are incredibly helpful, but they both have a reduced scope because they’re just matching against common types of PII, so it’s important to customize them to better reflect your own environments. You can do this fairly easily, in one or both of the following ways:

1 – Customize the suggestions

In Redgate’s SQL Data Catalog, prior to even looking at the suggestions and accepting them, you can customize the regular expressions that are run over the column naming conventions so that they better reflect your own schemas – either by editing existing rules or by creating your own rules and choosing which tags should be associated with the matching columns as a result:
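As a rough illustration of the sort of pattern you might add (these regexes are examples of my own, not the rules that ship with either product), here’s a quick way to sanity-check a few patterns against sample column names in PowerShell before you put them into a rule:

# Example patterns only: tune these to your own naming conventions
$patterns = @{
    'Email'         = '(?i)e[-_]?mail'
    'National ID'   = '(?i)(ssn|social[_ ]?security|ni[_ ]?number)'
    'Date of Birth' = '(?i)(dob|date[_ ]?of[_ ]?birth|birth[_ ]?date)'
}

# Try the patterns against a handful of representative column names
'EmailAddress', 'CustomerDOB', 'NI_Number', 'InvoiceID' | ForEach-Object {
    $column = $_
    foreach ($name in $patterns.Keys) {
        if ($column -match $patterns[$name]) { "$column looks like: $name" }
    }
}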

You can then go through and accept these suggestions if you so wish, obviously making sure to give them a sense check first:

2 – POWER ALL THE SHELL

In both of the aforementioned solutions you can call the PowerShell API to carry out mass classification against columns with known formats – this will allow you to rapidly hit any known targets and further reduce the amount of time spent looking directly at columns. An example of the SQL Data Catalog PowerShell in action is below: it classifies any columns it finds where the name is like Email but not like ID (as primary and foreign keys will, in most cases, fall under the de-scoping work we did above) with a tag for sensitivity and information type (full worked example here):
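Roughly, the shape of that script is something like the sketch below – the cmdlet and parameter names here are illustrative rather than verified, so check them against the SQL Data Catalog PowerShell documentation (see the worked example linked above) before running anything:

# Sketch only: cmdlet and parameter names are from memory and should be checked
# against the SQL Data Catalog PowerShell documentation before use
Connect-SqlDataCatalog -ServerUrl 'http://catalog-server:15156' -AuthToken $authToken

# Every column the catalog knows about for the target database
$allColumns = Get-ClassificationColumn -InstanceName 'MyDevServer' -DatabaseName 'MyDatabase'

# Columns named like Email, excluding key columns such as EmailID
$emailColumns = $allColumns | Where-Object {
    $_.ColumnName -like '*email*' -and $_.ColumnName -notlike '*id*'
}

# Tag them with an (illustrative) sensitivity label and information type
$emailColumns | Add-ClassificationColumnTag -Category 'Sensitivity' -Tags @('Confidential')
$emailColumns | Add-ClassificationColumnTag -Category 'Information Type' -Tags @('Contact Info')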

Finally – get classifying

This is the last stage, or the “hunt” stage. It’s time for us to get going with classifying what’s left i.e. anything that wasn’t de-scoped and wasn’t caught by your default suggestions or PowerShell rules.

You can obviously start going through each column one by one, but it makes the most sense to start by filtering down to the tables with the highest concentration of columns (i.e. the widest tables), or to the columns specifically known to contain sensitive information (anecdotally or by number of tags), and classifying those – covering, at the very least, whether they are in or out of scope, what information they hold, who owns it and what the treatment intent is.
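Finding those wide tables doesn’t need anything fancy – a quick sketch, again assuming the SqlServer module and placeholder server/database names, is simply to count columns per table:

# Sketch: rank tables by column count so the widest ones get looked at first
$query = @"
SELECT TABLE_SCHEMA, TABLE_NAME, COUNT(*) AS ColumnCount
FROM INFORMATION_SCHEMA.COLUMNS
GROUP BY TABLE_SCHEMA, TABLE_NAME
ORDER BY ColumnCount DESC;
"@

Invoke-Sqlcmd -ServerInstance 'MyDevServer' -Database 'MyDatabase' -Query $query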

The approach I take in this instance is to use filtering to its utmost – in SQL Data Catalog we can filter by table and column names but also by data type. Common offenders can be found with certain NVARCHAR, XML or VARBINARY types, such as NVARCHAR(MAX) – to me, that sounds like an XML, JSON, document or free-text field which will likely contain some kind of difficult-to-identify but ultimately sensitive information.

Following the NVARCHAR classification I move on and look at DATETIME and INT/DECIMAL fields for any key candidates like dates when someone attended an event or even a Date of Birth field. This helps especially when the naming conventions don’t necessarily reflect the information that is stored in the column.
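In the same spirit, here is a hedged sketch of that data-type filtering done outside the tool – the type list and length thresholds are arbitrary examples, so adjust them to whatever “suspiciously roomy” means in your schemas:

# Sketch: surface columns whose types often hide free-text, documents or key dates/numbers
$query = @"
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE IN ('nvarchar', 'varchar', 'xml', 'varbinary', 'datetime', 'datetime2', 'date', 'int', 'decimal')
  AND (CHARACTER_MAXIMUM_LENGTH IS NULL      -- non-character types such as dates and numbers
       OR CHARACTER_MAXIMUM_LENGTH = -1      -- (MAX) columns
       OR CHARACTER_MAXIMUM_LENGTH >= 256)   -- generously sized text columns
ORDER BY TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME;
"@

Invoke-Sqlcmd -ServerInstance 'MyDevServer' -Database 'MyDatabase' -Query $query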

Finally, one thing to add is that you will need access to the databases or tables at some point. You can’t truly carry out a full-on data classification process purely against the schema, especially for the reason above. Data will often exist in places you weren’t aware of, and the contents, format and sensitivity of that data can only reasonably be found and tagged if you have the support of the data team to do so.

Conclusion?

This is not a one time thing. The initial classification is going to be the hardest part but that can be mitigated if you follow some of the processes in this post and ultimately work as a team.

Classification, though, is ongoing. It is there to be an evergreen solution, something that provides context for any data governance or DevOps processes you have in place, and it should therefore be maintained and treated as what it is: the place everyone can use to gather information about what they will be working with, and what the company might have at risk.