“Where do I even begin with data masking?” Getting started in 3 steps.

“I like to begin where winds shake the first branch.”
Odysseus Elytis

I’ve already covered data masking a little in a few of my first posts on this blog, and for many readers it won’t be anything new. Performance tweaks and determinism are par for the course in masking data: approaches to solving a well-defined problem. But for many, the challenge is not how they mask their data. You’re free to write your own scripts in PowerShell or use a COTS software solution like Data Masker, and the end result will be a set of data you can work on securely, knowing that if you do suffer a breach of that data, it will be useless to whoever gets their hands on it.

The biggest problem here though, is getting started.

That phrase in itself suggests what kind of problem data masking is – most things in this life are at least easy to get started with, only becoming more complex the deeper you go. One can write a Hello World program quite simply in any major programming language, but to create rich, functional graphical software you’re just going to have to learn, write and test (repeat infinitely).

But data masking is like playing ‘capture the flag’ without an opposing team. Capturing the flag is a straightforward enough objective, but if you don’t know where the flag is, how big it is, what it looks like, how many flags there are etc., then your job is going to be impossible if you just jump straight into it. Like most things in this life, data masking takes careful thought and planning to understand the scale you’re dealing with – and you need to make sure you don’t narrow your scope too soon by focusing on one system or set of data.

So here are the 3 broad steps I would recommend when starting to protect data in non-production environments:

1 – Catalog your structured data

I cannot emphasize this point enough. Before you do anything, catalog the heck out of your estate. Use anything you like – whilst I’d recommend approaches such as this or this, even an Excel sheet is a start. You must know what you hold, everywhere, before you can begin to make decisions about your data. Perhaps your goal is to mask the data in pre-Production copies of your CRM system? Well, data doesn’t just exist there in isolation… or maybe it does! You don’t know until you have a record of everything. Every column, every table, every database, every server. There should be a record, tagged with a reasonable value to indicate at the very least the following 4 things:

  1. What system it is used in
  2. Who is in charge of the stewardship of it (i.e. who should be keeping it up to date / ensuring compliance)
  3. How sensitive it is
  4. What kind of data it is

Obviously other values can be specified, like your treatment intent (masking, encryption, both etc.), your retention policy for specific fields, or even how the data moves in, out of and around your estate (data lineage), but at the very least the catalog needs to highlight where the hot-spots of sensitive data exist and what is most at risk. From this you can derive the insight needed to start masking data.
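If you have nothing in place at all, even a single table can act as that record. Here’s a minimal sketch of what such a catalog table might look like in SQL Server – the table and column names are purely illustrative, not a product schema or a prescribed standard:

-- Hypothetical, minimal data catalog: one row per column in the estate
CREATE TABLE dbo.DataCatalog
(
    ServerName       sysname       NOT NULL,
    DatabaseName     sysname       NOT NULL,
    TableName        sysname       NOT NULL,
    ColumnName       sysname       NOT NULL,
    SystemName       nvarchar(128) NULL,  -- 1. what system the data is used in
    DataSteward      nvarchar(128) NULL,  -- 2. who keeps it up to date / ensures compliance
    Sensitivity      nvarchar(50)  NULL,  -- 3. e.g. Public / Internal / Confidential / Highly Sensitive
    DataKind         nvarchar(50)  NULL,  -- 4. e.g. Name, Email, Payment Info, Free Text
    TreatmentIntent  nvarchar(50)  NULL,  -- optional: Masking, Encryption, Both...
    CONSTRAINT PK_DataCatalog PRIMARY KEY (ServerName, DatabaseName, TableName, ColumnName)
);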

Notice that this step comes before any specific masking steps; that’s because we should have a company-wide resource for referencing our structured data regardless.

2 – Scope your attack, identify the ‘hit points’

Once you have a record of everything, you can identify the systems that are most at risk and therefore which databases you should start with. As part of the new project you’re working on you may need to mask multiple tables or even databases, but the size and scale of this process may be what’s currently stopping you from proceeding.

Identify the key tables or even databases you will need to start with. It is rare to come across databases where sensitive information is spread perfectly evenly across every table (I’m sure there are some, of course, but most will have concentrations of data) – the ‘usual suspects’ will be contact, account, address and payment info tables etc., and where we have multiple tables with duplicated information, we want to select the biggest behemoths for this task. The reason is that instead of trying to mask all tables equally, you’ll want to mask the biggest sources of truth and then “fan out” your masking, linking the other tables back to the masked ones. Not only is this process quicker, it also allows our masking to be consistent and believable. Draw out diagrams so we can keep the actual masking to a minimum (i.e. just these hit points) and then identify where you can fan it out to. Use these almost as you would stories in development.

This approach keeps the effort of writing scripts or configuring masking rules to a minimum but keeps the impact high – preventing us from wasting time trying to mask a table which, for instance, may only hold Personally Identifiable Information by correlation with other data. Ultimately, once you have this (now hopefully smaller, more focused) list, it’ll be easier to define how you want the masked data to appear.
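If you want a quick, rough starting point for spotting those ‘usual suspects’ in a SQL Server database, a simple metadata query can help surface candidate columns by name. This is only a sketch and the name patterns are assumptions on my part – treat the output as a prompt for investigation, not a definitive list:

-- Rough sketch: find columns whose names suggest they may hold personal data,
-- with an approximate row count so the biggest tables stand out
SELECT s.name AS SchemaName,
       t.name AS TableName,
       c.name AS ColumnName,
       (SELECT SUM(p.rows)
        FROM sys.partitions p
        WHERE p.object_id = t.object_id AND p.index_id IN (0, 1)) AS ApproxRowCount
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
JOIN sys.columns c ON c.object_id = t.object_id
WHERE c.name LIKE '%name%'
   OR c.name LIKE '%email%'
   OR c.name LIKE '%phone%'
   OR c.name LIKE '%address%'
   OR c.name LIKE '%card%'
ORDER BY ApproxRowCount DESC;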

3 – Write. Test. Validate. Record.

Once you have an understanding of your estate and of what you’re going to be masking, it’s time to start getting some indicative masked values mapped across (names for names, dates for dates of birth etc.) – but this is not a big-bang approach.

Just like software development, this process is best done iteratively. If you try to write one monolithic masking script or rule set in a single pass, you will gain 2 things:

  1. A thing that only you know how it works
  2. A thing that only you can maintain

Start with a copy of the database. Restore it to somewhere locked down that only a very select few have access to. Build the first part of your script to reflect the first of the ‘stories’ you identified in step 2, e.g. “Mask the names to be something believable”. Does this work?

No, why not?
Yes, perfect.
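If you’re scripting this yourself rather than using a tool, that first story might be as small as the sketch below. Everything here is hypothetical – a dbo.Contacts table, a dbo.FakeNames lookup and an untouched dbo.Contacts_Backup copy for checking – the point is simply that each story is a small, testable unit:

-- Hypothetical first 'story': mask names to something believable
-- (the correlated reference to c.ContactID forces a fresh random pick per row)
UPDATE c
SET    c.FirstName = f.FakeFirstName,
       c.LastName  = f.FakeLastName
FROM   dbo.Contacts AS c
CROSS APPLY (SELECT TOP (1) fn.FakeFirstName, fn.FakeLastName
             FROM dbo.FakeNames AS fn
             ORDER BY CHECKSUM(NEWID(), c.ContactID)) AS f;

-- Quick sanity check against the untouched copy: no original first names should survive
SELECT COUNT(*) AS UnmaskedRows
FROM dbo.Contacts AS c
JOIN dbo.Contacts_Backup AS b ON b.ContactID = c.ContactID
WHERE c.FirstName = b.FirstName;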

You should even ask the developers for feedback on the masked data; they will be working with it, so it makes sense to collaborate with them on what they would like to see.

So it’s time to record what you’ve done, just as anyone would do with any other software process. If only there was some way for us to do this… oh wait!

PUT IT IN SOURCE CONTROL.

It doesn’t matter if you’ve written a script or created part of a Data Masker masking set, it should now be put into source control, properly commented, and stored so that it is easily accessed, updated and implemented. Grant Fritchey actually wrote a great article about doing this here!

Now build up those rules, tackle your stories and gradually create enough to mask each of the ‘hit points’, then fan them out to the secondary tables. By keeping this effort minimal but high impact, you’ll be able to try it in earnest. Once you have tackled your high-risk targets, you can add stories for specific test cases or oddities required in Dev and Test.

The point is to start somewhere. Like I said, getting started is the hard part, but once you know what you have, where it is and how it’s structured, how you actually mask the data is a breeze.

New Year’s Resolutions: Be more good.

“Work on your strengths, not your weaknesses. How many of your New Year’s resolutions have been about fixing a flaw?”
Jonathan Haidt

On New Year’s Eve, just after midnight, my wife asked our teenager and me what our New Year’s resolutions were, and that got me thinking. I’m famously very, very bad at this.

Like. Really bad.

My resolutions in the past have always been about fixing a flaw, which is why I like this quote and why this year I wanted to change things up a bit. My wife’s advice, naturally because she is a superstar and very good at setting herself goals, was to set S.M.A.R.T goals (more on those here, pretty good for meetings!) so that by the end of the year I could measure success.

But I’m going in a slightly different direction with 2020. It’s a big year and I would like to change for the better, but I should point out early that I’m happy. I’m happy with how I am, who I have in my life and my work. Contentedness is not to be feared; it doesn’t mean that you have simply accepted a fate and are going to sit back and let it happen, and it’s not to be underestimated, because it means you can more easily fight the stresses of everyday life. I realize I need to lose some weight, but that’s not really a NYR – it’s a lifestyle change that comes down purely to how I eat, drink and move.

So after some thought, here are my 3 New Year’s resolutions:

1 – Spend more quality time with those who mean the most to me

I have the most amazing family. My wife is incredible; people meet her thinking I’m nice and then they’re like “Ooh, Chris is an OGRE by comparison!”. Our teenager is wonderful; she’s smart and driven and just generally going to smash 2020, of that I’m sure. My sisters, mum & step-dad, dad & step-mum, best friend and my close group of friends enrich my life, and without them I would be nowhere near as happy and fulfilled as I am today.

I want to make sure I prioritize time with all of them when making decisions, but that’s not to say I’m going to be able to spend a lot MORE time with them necessarily – I will certainly try, but I want the time spent with them to be quality time. That means putting my phone down more often, listening better and offering help where it may be needed, making a conscious decision to work on my relationships and to think more about what I’m saying and doing to try and make their lives as wonderful as they make mine.

So this decision isn’t SMART in the sense it’s not really measurable. But it’s a conscious choice I’ve made and one I have already started to work on.

2 – Do more of what I love / be good to me

This may seem in direct conflict with (or indeed the same as) the previous resolution, but it is in fact very different. The goal of this resolution is to reclaim some of the time that was otherwise a “dead zone”. You get 168 hours in a week – take away at least 56 hours for sleep, 40 hours for work, 10 for commuting and 2 hours a day (14) for sundries like showering, getting ready, food prep etc., and what are you left with? 48 FULL hours! What do I even DO with that?

I don’t even know.

But it’s time that changed. I’m going to do more of what I love, like blogging, cooking, walking, playing video/board games, learning Romanian and spending time with friends / calling loved ones. From now on, I’m going to walk in the door from work, or wake up on weekends and I’m going to ask myself how long I have before bed and how I want to utilize that and it’s going to be something that ultimately makes me smile.

Again, not super measurable, but a concerted effort to choose to be happier is going to do a world of good for me and everyone around me.

3 – Be a better human

Ok, this one is kind of an obvious one but I feel if you don’t specifically address it and call it out then it’s almost like you’re not taking accountability for it.

I’ve made concerted efforts over the past year to be vegan, primarily on the grounds of trying to lead a cruelty free lifestyle, but the one animal I want to help this year more so than ever (and we jolly well need it) is ‘people’.

As we enter 2020 we enter an era of unprecedented change. The climate is at crunch point, Brexit is causing a huge amount of uncertainty and people are struggling to be happy, healthy and loved.

So this point is a very simple one. I want to take every opportunity to:

a) Make a climate conscious decision – greener choices from removing single use plastics to ethical, sustainable foods and beauty products.

b) Volunteer for and donate to more charities who are helping the planet or helping those in need and don’t necessarily have the ability to help themselves

c) Make someone smile. Simple as. Try to spread some of the happiness wherever possible.

This is perhaps the simplest resolution in terms of scope, as every action is easily definable, but it’s going to be the hardest to enact. That’s why this isn’t a SMART goal: if I manage to do at least a little bit of the above, certainly more than I have done in previous years, then I’m moving in the right direction and adding a little positivity and sparkle to the world.

So those are my resolutions.

I decided I don’t really want them to be measurable because I don’t want to compare me as I am to my ideal version of myself. The moment I compare me to how I wish I was, I feel regret or shame for not achieving that ideal, two things no person should feel, because we should focus on our triumphs. If you have struggled with shame in the past, I highly recommend this video from Brené Brown.

Happy New Year! So. What’re your New Year’s resolutions?

A brief history of PlantBasedSQL

“There is no fundamental difference between man and animals in their ability to feel pleasure and pain, happiness, and misery.”
Charles Darwin

Note: The post below is my viewpoint on going Vegan and is not designed in any way to attack or criticise anyone for the choices they make. I will not describe in depth what I witnessed during my research into making this choice for myself, but I will provide optional links at the end of the post if you wish to start looking into Veganism. Thank you.

In December 2016 I went vegetarian. I had been living with my then-partner, now-wife for about 9 months and things were going great. When we got together though, I was a vehement meat-eater, in fact eater of all things animal; meat, dairy, eggs, you name it.

I even remember arguing with one of our friends over Christmas in 2015 that, and I quote, “I can’t see myself ever NOT eating meat. Ever.”

My wife, though, was at that time mostly pescatarian, and therefore we never really had or cooked meat at home. I love to cook, so from the moment we moved into a tiny (TINY!!!) flat together in March 2016 I saw it mostly as a challenge I could rise to – cooking more vegetarian food – and I started doing some research.

What I found horrified me to my core.

Many of the vegetarian and vegan bloggers I started to check out included (as part of their blogs and recipes I was following) justification for their lifestyle, reasons why they chose a vegetarian or plant based lifestyle and I was intrigued. I checked out the references, the sources and studies and documentaries, I made notes and discussed my thoughts with my wife and family and others I knew who were veggie or vegan and realized I had lived a life in ignorant bliss of the suffering that took place to fulfill my need for a burger, or bacon, even sweets like wine gums (which I loved but are full of gelatin).

So I made the switch and honestly, it shocked everyone around me (particularly my family) that I, the meat eater, the lover of BBQ, meaty curries and Tex-Mex, would give it up for the rest of my life. But everybody blamed my wife for this. Perhaps blame is too strong a word though… they attributed it to my now living with a mostly-vegetarian.

But no, I came to this conclusion myself. From pictures of slaughterhouses, caged animals and intense farming of everything from cows to pigs to fish, I realized that I would never see meat in the same light again, and it’s not as though I didn’t KNOW this happened. When I was a meat eater of course I knew this was the case, but when I really looked, I realized that my personal dinner preferences should never, ever cause something like this.

The research continued and just over 2 short years later on 1st January 2019 I did something great, I tried Veganuary. Veganuary is a vegan-January challenge that asks only that you give up animal products and try eating and living a Vegan lifestyle.

At this point the teenager in our house had already been vegan for some time; she had come from a predominantly meat-eating country (Romania), and so at home we were mostly cooking plant-based so we could all eat together anyway! Curries, stews, soups, pasta, pizza, nut-burgers, salads, buddha bowls – our diet was not restrictive – but I was still eating eggs and cheese at work and when we went out for dinner. It wasn’t long before the articles and documentaries led me to look at the dairy and egg industries.

Again, absolutely terrifying.

Shortly before trying Veganuary, in September or October 2018, I had a nightmare. I won’t go into detail, but it involved trying desperately and in vain to free cows from a dairy-slash-slaughterhouse, and it was harrowing. I woke up completely drenched in a cold sweat and decided that it would not be long until I completely phased out all animal products, and January was the time to do so.

I don’t miss cheese, or eggs. I thought it would be hard, but it wasn’t. Yes, vegan cheese isn’t quite there yet (unless you’ve tried the new Applewood UK Vegan cheddar OHMIGOSH) but honestly, even if the alternative isn’t there yet – it’s still better than the version requiring we first exploit a living being that doesn’t have the means to defend itself.

Now, almost 12 months later, I still maintain that (besides the decisions to marry my wife, to look after our teen and to join Redgate) it was one of the very best decisions I have ever made.

I encourage you, if you’ve ever been curious, to try it for yourself. It’s surprisingly easy, but most of all it gets you to think about what you eat, how you fuel yourself and about the well-being of all life on the planet. If you need some resources, or want to answer some common questions, I’ve included some resources below:

Where do I get my B12 and Protein? Watch the Game Changers Netflix Documentary – trailer here: https://www.youtube.com/watch?v=iSpglxHTJVM

Is a plant based diet healthy? Watch the Forks Over Knives documentary on Netflix or Youtube – trailer here: https://www.youtube.com/watch?v=DZb-35oV_7E

How would eating vegan help to stop animal cruelty? Watch the Earthlings and Cowspiracy documentaries – trailers here: https://www.youtube.com/watch?v=Hm7Babs_FJU and https://www.youtube.com/watch?v=nV04zyfLyN4

Where are some good resources for plant-based cooking? You can follow the below leaders in this arena (there are loads on YouTube in general though):

Where can I eat Vegan? There are tonnes of good places that have their own vegan menus and options; a few chain restaurants in the UK that offer great vegan alternatives include:

  • Bella Italia
  • Pizza Express
  • Frankie & Benny’s
  • Zizzi
  • Wagamama
  • Byron Burger
  • Pret a manger
  • Giraffe
  • Pizza Hut
  • Papa Johns
  • and Subway!

Happy holidays and here’s to a happy 2020!

“How long does data masking take?” Part 1

“Exploring the unknown requires tolerating uncertainty.”
Brian Greene

Another in the data masking series and I thought I’d go with an easy one to answer. It’s right up there alongside the other easy questions we all ponder like:

“What is the meaning of life?”
“How do I build a fusion reactor using common household objects?”

and the classic “Can Vegans eat Animal Crackers™?”

The answer to any of the above questions is, of course, it depends.

If you’re content to ‘live life to the fullest, travel and be happy’, you happen to have spare protium and boron-11 around the house, and you’re willing to bypass the ethical conundrum that although the crackers look like animals they do not actually contain any animal products, then the above are theoretically answerable. But data masking… whooo, that’s a doozy!

There’s no easy way to benchmark this process, honestly. It depends on many things you may wish to think about before the deed:

  • How many columns are on the tables you want to mask?
  • How many rows are in the tables you want to mask? (Which are the biggest tables?)
  • How many indexes are on the tables? Will these get in the way of a static masking process?
  • Is data duplicated across multiple tables and does it need to be synchronized to maintain referential integrity?
  • How powerful is the machine you’re running the process on?
  • WHERE is the Database you’re running this process on? (Same machine, Network, PaaS…)
  • Will anyone be trying to connect to, and run queries against, the Database being masked?
  • Do you need to mask across Databases for consistency?

All of these things will affect the process, both in terms of how you configure it and how long it takes.
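Before running anything, it can be worth answering a couple of those questions with a quick metadata query. The sketch below is nothing Data Masker-specific, just plain T-SQL pulling approximate row counts and non-clustered index counts for the demo tables used further down:

-- Rough sizing check for the tables you intend to mask
SELECT t.name AS TableName,
       (SELECT SUM(p.rows)
        FROM sys.partitions p
        WHERE p.object_id = t.object_id AND p.index_id IN (0, 1)) AS ApproxRowCount,
       (SELECT COUNT(*)
        FROM sys.indexes i
        WHERE i.object_id = t.object_id AND i.type = 2) AS NonClusteredIndexes
FROM sys.tables t
WHERE t.name IN ('DM_Customer', 'DM_Customer_Notes');  -- the demo tables used below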

I will endeavor, though, to give SOME idea. At the very least it’s a good starting point, and hopefully if you decide to run any testing of your own you will see similar or even better performance. For this test I will use Data Masker for SQL Server, but in future blogs I’m hoping to test out a few other methods as well, so stay tuned!

For this test, I will be using the following setup – note this isn’t what I would expect a real setup to be; I would expect the staging server being used for masking (and potentially Image creation in SQL Clone) to be a teensy bit more powerful:

  • Microsoft SQL Server Developer 2016 (64-bit)
  • Microsoft Windows NT 6.3 Server hosted in AWS EC2 (c5.2xlarge)
  • 4 Processors (vCPU), 16 GB Memory
  • Redgate Data Masker version 6.3.20.5034 installed
  • DMDatabase (tables below in Data Masker) – this is my demonstration database in simple recovery mode with:
    – 1,000,000 rows on the DM_Customer table
    – 50,000 rows on the DM_Customer_Notes table
Data Masker Tables Tab showing row counts for large tables

For the purposes of this test I will mask: firstname, lastname, company name, street address, region, country, telephone number and zip code on the DM_Customer table and the notes field on DM_Customer_Notes (a free text field). Once I have completed this, I will then create an email address on DM_Customer using the masked first and last name values, and then finally I will synchronize the masked names across to the DM_Customer_Notes table, where they are duplicated.

The output masking set is as below. It uses a substitution rule for the columns on DM_Customer, a row-internal synchronization rule for the email address, a search-replace rule to randomly replace all of the characters in the free text field and then finally a table to table synchronization rule to copy the masks across to DM_Customer_Notes – for the rules I have enabled Bulk Substitution and set the commit frequency to 100,000 to speed up building and committing the statements (hence Simple Recovery mode):

When run, this process takes… well, actually I stopped it. It’s paaaaaaainfully slow. This is probably because I’ve not utilized Data Masker’s worker threads to their fullest and I’m reliant on a single connection to carry out the process – we’d be here for a while!

So, I’m going to make some performance improvements! I’m going to split the substitution and row-internal rules into ‘Split Range Manager’ rules (video on how to do this here) so I can utilize concurrent connections to break this big table up – this is what it looks like now (rules expanded):

…and guess what?

Masking stats report

Total run time to mask and synchronize all of these rows was 1 minute 26 seconds. I would say that’s pretty good, and the results set looks great too!

Results set from masking: Contacts

So in conclusion – how long does data masking take? Well, it still depends.

I didn’t have lots of non-clustered indexes over these tables to take care of first, and I didn’t have very complex operations being performed as command rules or table-to-table rules matching on free-text fields, but I did have an integer field I was able to split the range across, so things went my way… this time.

But it can go fast and you can make improvements, as shown above.
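One further improvement worth knowing about, hinted at in the considerations above: if the tables you’re masking do carry a lot of non-clustered indexes, it is often quicker to disable them before the run and rebuild them afterwards rather than letting every UPDATE maintain them. A hedged sketch, using the demo DM_Customer table and an illustrative index name:

-- Disable non-clustered indexes before masking (the clustered index/heap stays usable)
ALTER INDEX IX_DM_Customer_LastName ON dbo.DM_CUSTOMER DISABLE;  -- index name is illustrative

-- ... run the masking process here ...

-- Rebuild everything once masking has finished
ALTER INDEX ALL ON dbo.DM_CUSTOMER REBUILD;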

If you have any questions on how to improve the performance of your Data Masker masking sets feel free to let me know! Tweet me or get in touch!

Happy masking!

Oh, and Happy Holidays / Merry Christmas / Happy Tuesday! 🙂

I got to watch a team work fast and it was awesome!

“Unless someone like you cares a whole awful lot, nothing is going to get better. It’s not.”
Dr Seuss

I care a great deal about anything I work on, as I mentioned in my previous post here (ironically about not working for once), so it goes without saying that I am super invested in any customer I help to get up and running with any of the Redgate solutions, and none more so than Data Masker.

Ever since it became part of the Redgate family, Data Masker has been an integral part of my workday – there aren’t many days where I don’t interact with the tool or its concepts in some way. And when things go wrong – the tool breaks, something doesn’t work like it should – well, not only is it less than ideal for me (showing me up), it’s also not delivering value to the customer.

Now that. I hate with a passion.

However, I’m lucky enough that when that happens, it’s not the end of it – we don’t pack everything up and say “well… sorry all, that’s your lot.” No – I get to speak directly to our fantastic support team and the equally fabulous and helpful development team behind the tool, and guess what? They care too. Immensely.

I can think of 2 key examples of this team working in the most incredible way – you wouldn’t even believe it (well, maybe you would) – but it goes to show what is possible, especially when you break down the silos in your organisation. I never became just “a ticket from a sales engineer”, and this is how they helped me fix 2 problems:

1 – UTF-8 encoding of strings for substitution rules

I was working closely with one of the Business Development folk (little side nod there to Kendra for saying “folk” so often I’ve started saying it) in Redgate’s sales team, who was working with a potential customer in a country where Arabic is the primary language. As such, you would expect them to want to use Data Masker to mask Arabic names like اَمير‎ (“Amir” in English) into data sets, instead of something like “Frank”, which just doesn’t have the same ring to it.

It turns out that in the port of Data Masker from its older v5.5 to the swanky new v6.0 (yes, this was a little while back) the ability to change the encoding of strings from user-defined data sets had been broken, which meant that the values from Data Masker weren’t being inserted correctly into the table, rendering all of their Arabic sets useless. This was a huge blocker to their trial, which was under time constraints anyway.

I reported this to the Data Masker team on the 7th February 2019 at 12:53pm, created a support ticket for reference at 2.03pm and had personally spoken to them by 3.00pm. The lead developer, support rep, product designer and myself quickly met up to discuss it and agreed that as this was a bug, broken functionality that should exist (and which could block not just this customer but any customer requiring it for other language sets) that they would down tools and work on a fix immediately.

By 9.00am the following day a fix went out the door. Built, tested, deployed. Who did they involve in the testing? Me, initially. Then they waited on feedback from the potential customer, who also confirmed it worked after upgrading.

Wow. That’s what I call fast!

2 – Time-out tennis

More recently (think November 2019) I was working with a customer of ours who I had built up a great relationship with – they were super friendly, super responsive and all round great to work with. Unfortunately as we were getting their masking sets set-up (pun definitely intended) we started encountering time out issues when waiting for masking stats to return.

This was a little irksome: it put a slight dent in my relationship and credibility, and it was slowing them down because it caused the sets not to complete at all.

The problem was, though, that it wasn’t such an easy fix as Number 1 above, as it wasn’t exactly clear what was causing the timeouts and I wasn’t really sure of the best places to check! Fortunately the “Masketeers”, as they are more commonly known around Redgate towers, did have an idea of where to look. A nominated member of the development team (and to him I will be forever grateful) almost became a little subdivision of the team – it didn’t require their full might, just someone who knew even more intimately than me what was happening!

Through a few back-and-forths with the customer, experimenting with timeouts and making some tweaks, we were able to establish what was going wrong and ultimately provide a fix. This work became a new branch, which was merged into the main base after testing once again and released the very next day. Finally, the customer let us know it was all working again and sets were completing as they should.

Conclusion?

Sometimes you have the pleasure of working with some unsung heroes where you work, I do it on a regular basis – from the facilities and cleaning team here in the building who do the most incredible work to look after us, to the Sales management team who are constantly looking at ways to make us the best possible company to deal with – Redgate is definitely a place where people can do the best work of their lives.

But on these occasions, I got to witness something special. Cross-functional collaboration. Communication. Empathy. Passion.

And just getting the work done. When it’s about delivering value to customers, feedback, development and testing are everybody’s job, and I’m lucky enough to work with people who put that theory into practice.

Deterministic data masking – the who, who and who? (and how?)

“Security is always excessive until it’s not enough.”
– Robbie Sinclair

You may not already know this about me, but I kinda like data masking.
Scratch that, I LOVE data masking.

Increasingly both around Redgate and in general I seem to be getting a bit of a reputation as “the data masking guy” but for good reason – to include yet another quote, from Joe Kaeser this time: “Data is the oil, some say the gold, of the 21st century…”, more and more I hear stories about people leaving their oil/gold out for everyone to see, opening up the widest attack surface area by doing things like copying backups down into non-Production environments or exposing test systems to the internet – the list goes on.

This means that people turn to all of the protective methods they have available to them: encryption (TDE, row and column level etc.), static and dynamic data masking, access control… and many combinations of them.

One of the big points I always have to cover when it comes to static data masking though, is something called “deterministic masking”, so let’s start with 2 definitions of my own to make sure we’re on the same page:

Static data masking is the process of de-identifying sensitive data-at-rest within the tables of your Database. It is typically used to provide realistic, Production-like data to non-Production environments like Dev and Test, and even to sets that are given to 3rd parties. It relies on retaining non-sensitive, business-specific fields within rows and taking anything considered PII (Personally Identifiable Information) or PHI (Protected Health Information) and either scrambling it or replacing it with similar but ultimately false data.

Deterministic data masking is the process of masking data in a repeatable way, such that any value that matches a previously masked value gets the same masked output in all future runs, and a new mapping is created for values which have not been masked before. For example, if you mask “Chris Unwin” to “Brad Pitt”, it should appear as “Brad Pitt” not only in our (for example) dbo.Contacts table but also in all associated tables (regardless of PK/FK relationships at the DB level), and every single run should produce the same output. This is useful for building up familiarity with the data and for reusing it in future test runs.

Now. I should caveat this blog post (you’ll find I’m always caveating my posts) with the fact that deterministic masking does have its benefits, but the very idea that one thing should always become the same other thing is, in my eyes, inherently less secure than something that always gets de-identified to a different value. As such, I will always recommend that, where possible, masking runs should produce differing values. Deterministic masking is also compute heavy: instead of simply randomizing values in, it has to look up each value to be replaced and then replace it with the corresponding mapped value – another potential downside if speed is a key driver in the process you’re trying to put in place.

Most masking tools either support deterministic masking directly (*cough* dbatools.io *cough*) or require a little bit of configuration to get started, but it is a workflow that should be catered for, for those who need it. So here is a quick getting started guide for deterministic masking, if you’re writing your own scripts or (as I will in this example) you’re using Data Masker for SQL Server from Redgate.

Step 0 – Figure out where you’re going to do the masking

There are lots of different ways to move data around and get things masked before exposing it to development and testing teams (or handing it to 3rd parties). In this example, I’m going to assume that the only thing I have available to me is a backup and restore process for moving data around, nothing fancy like SQL Clone to help.

So for this I will assume I have a staging instance somewhere that I can use as part of this process – let’s call it WIN2019 – and the process I’m going to follow (sketched in T-SQL after the list) is:

  • Restore a full .bak file to WIN2019
  • Carry out the data masking process on this instance
  • Backup the masked DB
  • Move the new .bak into lower environments (restore / make available to Devs)
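For completeness, here’s a rough T-SQL sketch of that backup/restore shuffle – the file paths and logical file names are placeholders, and in reality you’d likely wrap this in PowerShell or your job scheduler of choice:

-- 1. Restore the Production backup onto the locked-down staging instance (WIN2019)
RESTORE DATABASE DMDatabase
FROM DISK = N'\\backups\DMDatabase_prod.bak'               -- placeholder path
WITH MOVE 'DMDatabase'     TO N'D:\Data\DMDatabase.mdf',   -- placeholder logical/physical names
     MOVE 'DMDatabase_log' TO N'D:\Logs\DMDatabase.ldf',
     REPLACE, RECOVERY;

-- 2. Run the masking process against this restored copy (Data Masker or your own scripts)

-- 3. Back up the now-masked database, ready for the lower environments
BACKUP DATABASE DMDatabase
TO DISK = N'\\backups\DMDatabase_masked.bak'               -- placeholder path
WITH INIT, COMPRESSION;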

Step 1 – Map it for multiple future runs

The first problem you have to contend with is maintaining a record of how we want to mask things. If we always want to mask the credit card number 3422-567157-24797 to 3488-904546-46471, then we need a place where this is stored. The question to ask ourselves here, though, is WHAT needs to be recorded. There is a huge difference between:

CreditCardBefore        CreditCardAfter
3422-567157-24797       3488-904546-46471

And

CustomerID        CreditCardAfter
1000000001        3488-904546-46471

The latter is obviously far preferable because it does not contain any sensitive PII – it is purely the masked value and a non-sensitive CustomerID, which only really makes sense within the company or as a system identifier.

So we should get a mapping location set up on WIN2019. I don’t want to make my tables too wide, so I’m going to keep this fairly small and atomic – we’ll create a Database on the server with a mapping table for the credit cards:

CreditCardMap table in SSMS - CustomerID as INT (PK) and CreditCard as nvarchar(60)
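Based on the table shown above, a sketch of the DDL for that mapping database and table might look like the following – the names match the screenshot, and the sizes are simply what I’d assume for this demo:

-- Separate database so the mapping survives restores of the masked DB
CREATE DATABASE MaskingMapper;
GO

USE MaskingMapper;
GO

CREATE TABLE dbo.CreditCardMap
(
    CustomerID  int          NOT NULL CONSTRAINT PK_CreditCardMap PRIMARY KEY,
    CreditCard  nvarchar(60) NOT NULL
);
GO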

This is going to be the basis for our repeated masking. The reason for having this as a separate DB/Table though?

1) The mapping should persist – if it exists in the same DB then we will just overwrite it every time, rendering the mapping useless.
2) Devs/Testers don’t need the mapping – just the end result.

Step 2 – Set up the masking to cover the tables, regardless of whether there is a mapped value

One of the most important phrases in the GDPR is “Data Protection by Design and Default”, and it’s one of my favorites. In this context I am going to interpret this in a very specific way, and that is: “we must mask everything, before trying to map it back to a value that exists, just in case the link to the MaskingMapper DB were to fail for any reason.”

I first restore a copy of the Database I’m going to mask (the DMDatabase) and then set up a Data Masker substitution rule to process the DM_Customer table, de-identifying the credit card numbers:

Data Masker substitution rule to mask Credit Card Numbers with invalid AMEX CC numbers, also (fake) customer credit card Nos. displayed in SSMS
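If you were scripting this step yourself rather than using a Data Masker substitution rule, a rough equivalent – generating invalid AMEX-style numbers per row, using the column names from the DMDatabase – might look like this:

-- Mask every credit card number, regardless of whether it has been mapped before
-- (per-row randomness via CHECKSUM(NEWID()); format mimics 34xx-xxxxxx-xxxxx)
UPDATE dbo.DM_CUSTOMER
SET customer_credit_card_number =
      '34' + RIGHT('00'     + CAST(ABS(CHECKSUM(NEWID())) % 100     AS varchar(10)), 2)
    + '-'  + RIGHT('000000' + CAST(ABS(CHECKSUM(NEWID())) % 1000000 AS varchar(10)), 6)
    + '-'  + RIGHT('00000'  + CAST(ABS(CHECKSUM(NEWID())) % 100000  AS varchar(10)), 5)
WHERE customer_credit_card_number <> ''
  AND customer_credit_card_number IS NOT NULL;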

Step 3 – Copy the distinct values across into the mapping table

This step is going to be as simple as writing a single tSQL statement to copy the values across – in Data Masker I will wrap this into a Command Rule and make it dependent on the previous substitution rule:

-- Copy any newly masked credit card numbers into the mapping table,
-- skipping blanks, NULLs and customers that have already been mapped
INSERT INTO MaskingMapper.dbo.CreditCardMap
(
    CustomerID
  , CreditCard
)
SELECT DISTINCT
       customer_id
     , customer_credit_card_number
FROM dbo.DM_CUSTOMER
WHERE (
          customer_credit_card_number <> ''
          AND customer_credit_card_number IS NOT NULL
          AND customer_id NOT IN (SELECT customer_id FROM MaskingMapper.dbo.CreditCardMap)
      );

Step 4 – Sync everything back together

Finally, we need to bring back any values that were written to the mapping table in previous runs. In tSQL we could write an UPDATE with an appropriate WHERE clause (a sketch of which is below the screenshot), but I’m going to use an additional controller and a Table-to-Table Sync rule in Data Masker to handle this:

Rules in Data Masker - Substitution to mask data, Command rule to update the mapping table and a Table to Table rule to sync back into the table
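For reference, the pure tSQL alternative mentioned above – bringing previously mapped values back on top of the freshly masked ones – would be something along these lines:

-- Restore any credit card numbers that were already mapped in a previous run
UPDATE c
SET    c.customer_credit_card_number = m.CreditCard
FROM   dbo.DM_CUSTOMER AS c
JOIN   MaskingMapper.dbo.CreditCardMap AS m
       ON m.CustomerID = c.customer_id
WHERE  c.customer_credit_card_number <> m.CreditCard;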

Result

If we now run this we will have achieved deterministic masking, because we have the following before and afters – first for the DM_Customer table:

DM_Customer Credit cards before
DM_Customer credit cards after masking

and for the CreditCardMap table:

Mapping table prior to masking
mapping table after masking

The mapping table now has 77 rows. If we repeat the masking step by step without changing anything, we can see that the credit card numbers change in the first instance but then synchronize back to the values that should persist. The images below represent running just the first two steps in isolation (i.e. masking everything regardless – left) and then the synchronization job restoring the predetermined values (right); the mapping table still has 77 rows.

Now if in the next run one of the NULL/blank fields has a real credit card number, or we add any additional customer IDs (i.e. with a more recent backup with fresh data) they can be masked, accounted for and persisted between each run.

Conclusion

Deterministic masking is hard, but it is possible. You can use a number of methods to achieve it, such as the above, but the first question you need to ask yourself (after “do I feel lucky”) is:

“Do I NEED deterministically masked data, or is it a nice to have?”

9 times out of 10, I’m pretty sure the answer will be that it is not essential, and therefore you should focus on making sure the masking of the data is random, static and fast. Adding compute to this process will only slow it down, and at the end of the day we just need to make sure our customers’ data is protected.

The 2020…ish blogging challenge

“Embrace each challenge in your life as an opportunity for self-transformation.”
– Bernie S. Siegel

I’ve always been a fan of a challenge, and my family and wife will attest to the fact that I can be highly competitive. So, needless to say, on that day I sat with Kendra (see previous post), when she mentioned any kind of challenge, I knew I would try to rise to it!

The idea behind this was that Kendra and I would blog every single week starting from… well, now actually! This is my (official) first blog in the 2020 series and it’s going out *hastily checks the date* on the 29th November 2019. “Why now?” I hear you ask – well, it’s a great question. Kendra and I agree that having this challenge as somewhat of a “New Year’s Resolution” just gives us an opportunity to fail, like many resolutions do. There is no time like the present and we very much intend to live by that! So from now until 29th November 2020 I will (hopefully) be blogging… Every. Single. Week.

Utter madness, I know!!! But exciting nonetheless, and I can’t wait.

The rules are very simple:
1) Blog every week
2) You can blog about whatever you like, and
3) If you miss a week £10 (or $10 for Kendra) goes into a pot that at the end of the year gets donated to a charity of your choice.

I’m going to go ahead and supplement the 3rd rule with the fact that I intend to meet this challenge, and as that really only helps me, I will be donating a MINIMUM of £120 (£10 for each month) so that whatever the outcome I am still able to contribute to a great cause.

The charity that I have chosen is Mind, a fantastic UK based mental health charity which provides advice, support and information to help you through whatever mental health issues you may be facing. It is a charity very close to my own heart, not least because it’s a subject that affects us all but because Mind themselves have helped many people; not just in general but those close to me, and myself directly. They helped me to do the very thing I found impossible at one time, talk about it, and I will be forever grateful for that.

With this all in mind I look forward to steaming ahead into 2020 and (hopefully) filling this page with exciting and informative posts – just a sneak peek at what I have lined up; you can expect posts on:

  • Living a plant based lifestyle
  • DevOps
  • Data Masking
  • Azure
  • PowerShell
  • Travel and Culture
  • My feeble attempts at learning Romanian
  • and much much more…

So. Let’s go.