Azure DevOps Masking a.k.a “point, no click”

“[My] kids haven’t responded to my GDPR requests so I don’t think I’m legally allowed to tell them when dinner’s on the table.”
@mrdaveturner

Ah, masking. You would have thought I’d be sick of it by now, no? Fortunately not – now, more than ever, I find myself answering question after question and tackling use-case after use-case. So when I was asked this week:

“Chris, is there a way for us to call Data Masker for SQL Server directly from Azure DevOps?”

I thought to myself, well that sounds easy enough… and it was! I know what you’re thinking: c’mon Chris, surely there is more to it? But no, it’s actually pretty straightforward!

I pointed them at the PowerShell module and cmdlets for SQL Provision and the Azure DevOps plugin to automate all of their Provisioning and Masking process, thinking all the while “pffft, they could have made this harder!” and then…

“No sorry Chris, is there a way for us to call JUST Data Masker for SQL Server directly from Azure DevOps?”

Ah! Now that’s an interesting one!

#1 Figure out where you want Data Masking to run in your process

This empty Azure deployment stage looks good enough for now! If you wanted to chain other processes either side of it, that’s cool too! Maybe you have your own provisioning process in place and you want to point Data Masker at it to sanitize it? Makes sense to me! For now I’m going to stick with a single agent job for simplicity.

#2 Figure out what is actually going to run Data Masker

Data Masker is a client install and as such will need to be installed on a *gasp* actual machine!

No but seriously, any server you have lying around, physical or VM, will do the trick as long as it meets these requirements. Now, this server/VM will need to have an Azure DevOps agent on it already, which of course makes it the ideal candidate for being the “thing” that calls Data Masker – this could also be the Staging/Non-Functional/Pre-Prod environment itself, so you could copy down PROD and then immediately invoke masking.

#3 Call the command line from Azure DevOps

In your pipeline steps you can specify the calling of an executable on the machine where the agent resides. Fortunately, Data Masker has a wonderful command line available that you can call; you can read all about it here: https://documentation.red-gate.com/dms/data-masker-help/general-topics/about-command-line-automation

You could of course dynamically replace the PARFILE path with pipeline variables so that each run picks up the relevant parameter file for that particular database as well – a nice benefit!
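To make that concrete, here’s a minimal sketch of what a PowerShell step on the agent might look like. The install path and executable name are assumptions from my environment (check the command line documentation linked above for your install), and DATABASENAME is a hypothetical pipeline variable surfaced to the script as an environment variable:

# Minimal sketch of an Azure DevOps PowerShell step on the agent machine.
# The path and exe name below are assumptions - confirm against the Data Masker command line docs.
$dataMasker = "C:\Program Files\Red Gate\Data Masker for SQL Server 7\DataMaskerCmdLine.exe"

# Pipeline variables are exposed to scripts as environment variables,
# so each database/environment can pick up its own parameter file
$parFile = "C:\MaskingSets\$($env:DATABASENAME)_parfile.txt"

& $dataMasker "PARFILE=$parFile"

# Fail the pipeline step if the masking run returns a non-zero exit code
if ($LASTEXITCODE -ne 0) { throw "Data Masker run failed with exit code $LASTEXITCODE" }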

My PARFILE simply looked like this:

It was calling a local Data Masker masking set “AzureFun” – the thing to bear in mind is that, unless you specify otherwise, Data Masker will run under the Windows credentials that the Azure DevOps agent itself runs as. In this case, because the Azure DevOps agent has the correct permissions to update the databases on this instance anyway, I’m fine to use Windows Authentication:

Conclusion

It’s very easy to simply call the command line of Data Masker for SQL Server directly from Azure DevOps. Does the same approach work from other CI/CD tools? If they can call executables on the target server, then absolutely! So it’s very easily included in the process – you just have to think about where Data Masker is installed and what credentials you’re using for it!

Bonus Point – what about if it’s all Azure SQL Database?

You had to do it didn’t you, you had to say it!

“But Chris, now we know we can call this all from Azure DevOps, what if we wanted to mask and copy Azure SQL Databases into Dev/Test etc.?”

Well, the good news is it’s also pretty similar! When you’re connecting Data Masker to an Azure SQL DB you only need to specify this in the connection settings in the controller. Authentication will likely have to be SQL Authentication at this point, you need to be in Cloud mode, and I’d recommend setting the connection timeout to 10s rather than the standard 5s – but it can still be called as normal from the PARFILE:

So the Data Masker element is reasonably straightforward – that’s the good news. But the thing you REALLY need to stop and think about is:

Where are our Dev and Test copies going to BE?

Option #1: If they’re going to be on VMs or local dev and test servers / developer machines then you could follow a similar approach to the one I laid out in this blog post for Redgate, in which you create a BACPAC file and split it out on premises before importing it and then provisioning from there. And you could use this code in my GitHub to achieve something very similar. Caveat: I am no PowerShell guru, who do you think I am? Rob Sewell? Chrissy LeMaire? No. Sadly not. So feel free to build your own logic around my code – have at it, I don’t mind! ^_^
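If you’d rather sketch it yourself, the bare bones of that export/import idea, assuming the Az.Sql module and a storage account to land the .bacpac in, might look something like this (the resource group, server, storage and file names are all placeholders – my GitHub code does considerably more than this):

# Rough sketch only: export PROD to a .bacpac in blob storage, then import it on premises
Import-Module Az.Sql

$sqlAdminUser     = "sqladmin"
$sqlAdminPassword = Read-Host "SQL admin password" -AsSecureString
$storageKey       = Read-Host "Storage account key"

$export = New-AzSqlDatabaseExport -ResourceGroupName "rg-prod" -ServerName "prod-sqlserver" `
    -DatabaseName "ProdDB" -StorageKeyType "StorageAccessKey" -StorageKey $storageKey `
    -StorageUri "https://mystorage.blob.core.windows.net/bacpacs/ProdDB.bacpac" `
    -AdministratorLogin $sqlAdminUser -AdministratorLoginPassword $sqlAdminPassword

# Wait for the export to finish before doing anything with the file
while ((Get-AzSqlDatabaseImportExportStatus -OperationStatusLink $export.OperationStatusLink).Status -eq "InProgress") {
    Start-Sleep -Seconds 15
}

# Download the .bacpac from blob storage (e.g. with azcopy), then import it locally with SqlPackage
& "SqlPackage.exe" /Action:Import /SourceFile:"C:\Temp\ProdDB.bacpac" `
    /TargetServerName:"WIN2019" /TargetDatabaseName:"ProdDB_ForMasking"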

Option #2: Keeping everything in Azure. You can copy databases around in Azure and it seems to work pretty well! So I wrote this PowerShell (also in my GitHub for y’all) to effectively copy a PROD DB into the same resource group, mask it and then copy it across to a Dev/Test resource group, dropping the temp copy so as not to incur lots of extra Azure costs (this is just one of the methods I’ve seen people use, again it’s up to you!) – again, see the caveat in option #1 above for my statement on PowerShell! The good thing is, you can simply use the ‘&’ call operator from PowerShell to call Data Masker’s command line.
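To give you a flavour of that flow, here’s a stripped-down sketch using the Az.Sql cmdlets – the resource groups, servers, database names and Data Masker install path are all placeholders, and my GitHub script is the fuller version of this:

# 1. Copy PROD into a temporary database we can safely mask
New-AzSqlDatabaseCopy -ResourceGroupName "rg-prod" -ServerName "prod-sqlserver" -DatabaseName "ProdDB" `
    -CopyResourceGroupName "rg-prod" -CopyServerName "prod-sqlserver" -CopyDatabaseName "ProdDB_MaskingCopy"

# 2. Mask the temporary copy by calling Data Masker's command line (the PARFILE points at the copy)
& "C:\Program Files\Red Gate\Data Masker for SQL Server 7\DataMaskerCmdLine.exe" "PARFILE=C:\MaskingSets\AzureCopy_parfile.txt"

# 3. Copy the now-masked database across to the Dev/Test resource group
New-AzSqlDatabaseCopy -ResourceGroupName "rg-prod" -ServerName "prod-sqlserver" -DatabaseName "ProdDB_MaskingCopy" `
    -CopyResourceGroupName "rg-dev" -CopyServerName "dev-sqlserver" -CopyDatabaseName "DevDB"

# 4. Drop the temporary copy so we don't keep paying for it
Remove-AzSqlDatabase -ResourceGroupName "rg-prod" -ServerName "prod-sqlserver" -DatabaseName "ProdDB_MaskingCopy"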

Either of these options can be run from Azure DevOps also as part of your provisioning or working processes, but instead of including a call to the command line, you can run a fun PowerShell script instead:

Second Conclusion *sigh*

There are lots of ways to get what you need into Dev and Test, but these copies should be masked if they contain personally identifying information. There are some methods above, but there are plenty of others out there on the internet, and if you’re not sure about getting started with data masking, try my post here – happy masking!

“Where do I even begin with data masking?” Getting started in 3 steps.

“I like to begin where winds shake the first branch.”
Odysseus Elytis

I’ve already covered data masking a little bit with a few of my first posts on this blog, and for many it will not be new. Performance tweaks and determinism are just par for the course in masking data – approaches to solving a well-defined problem. But for many, the challenge is not how they mask their data. You’re free to write scripts, use PowerShell or use a COTS software solution like Data Masker, and the end result is going to be a set of data which you can work on securely, knowing that if you do suffer a breach of that data, it will be useless to those who get their hands on it.

The biggest problem here though, is getting started.

That phrase in itself suggests what kind of problem data masking is – most things in life are at least easy to get started with, only becoming more and more complex the further you get into them. One can write a Hello World program quite simply in any major programming language, but to start creating rich, functional graphical software, you’re just going to have to learn, write and test (repeat infinitely).

But data masking is like playing ‘capture the flag’ without an opponent team. Capturing the flag is a straightforward enough objective, but if you don’t know where the flag is, how big it is, what it looks like, how many flags there are etc. then your job is going to be impossible if you just jump straight into it. Like all things in this life, data masking takes careful thought and planning to understand the scale you’re dealing with – and you need to make sure you don’t close down your scope too soon by focusing on one system or set of data.

So here are the 3 broad steps I would recommend when starting to protect data in non-production environments:

1 – Catalog your structured data

I cannot emphasize this point enough. Before you do anything, catalog the heck out of your estate. Use anything you like and whilst I’d recommend approaches such as this or this, even an Excel sheet is a start. You must know what you hold, everywhere, before you can begin to make decisions about your data. Perhaps the goal of this was to mask the data in pre-Production copies of your CRM system? Well, data doesn’t just exist there in isolation… or maybe it does! You don’t know until you have a record of everything. Every column, every table, every database, every server. There should be a record, tagged with a reasonable value to indicate at the very least the following 4 things:

  1. What system it is used in
  2. Who is in charge of the stewardship of it (i.e. who should be keeping it up to date / ensuring compliance)
  3. How sensitive it is
  4. What kind of data it is

Obviously other values can be specified, like your treatment intent (masking, encryption, both etc.) or your retention policy for specific fields, even something like identifying how the data moves in-out-and-around your estate (data lineage) but at the very least it needs to highlight where the hot-spots of sensitive data exist and what is most at risk. From this you can derive the insight needed to start masking data.
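As a purely illustrative example (every name and value here is made up), a single catalog record can be as simple as one row in a shared CSV, built however you like – the point is that every column in the estate ends up with an entry along these lines:

# One hypothetical catalog entry per column, appended to a shared CSV
[PSCustomObject]@{
    Server          = "WIN2019"
    Database        = "CRM"
    Table           = "dbo.Contacts"
    Column          = "email_address"
    System          = "CRM"
    Steward         = "jane.doe@example.com"
    Sensitivity     = "High"
    Classification  = "Contact details / PII"
    TreatmentIntent = "Static masking"
} | Export-Csv -Path ".\data-catalog.csv" -NoTypeInformation -Append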

Notice that this step comes before any specific masking steps – that’s because we should have a company-wide resource to reference our structured data regardless.

2 – Scope your attack, identify the ‘hit points’

Once you have a record of everything, you can begin to identify the systems that are going to be most at risk and which databases you should start with. As part of the new project you’re working on, you may need to mask multiple tables or even databases, but the size and scale of this process is what’s currently stopping you from proceeding.

Identify the key tables or even databases you will need to start with – it is so rare to come across databases where sensitive information is perfectly evenly spread across every table (I’m sure there are some, of course, but many will have concentrations of data) – these ‘usual suspects’ will be in contact, account, address, payment info tables etc., and where we have multiple tables with duplicated information, we want to select the biggest behemoths for this task. The reason for this is that instead of trying to mask all tables equally, you’ll want to mask the biggest sources of truth and then “fan out” your masking, linking them back to the masked tables. Not only is this process quicker but it also allows our masking to be consistent and believable. Draw out diagrams so we can keep the actual masking to a minimum (i.e. just these hit points) and then identify where you can fan this out to. Use these almost as you would stories in development.

This approach keeps the effort of writing scripts or configuring masking rules to a minimum but keeps the impact high – preventing us from wasting time trying to mask a table which, for instance, may only hold Personally Identifiable Information by correlation with other data. Ultimately, once you have this (now hopefully smaller, more focused) list, it’ll be easier to define how you want the masked data to appear.

3 – Write. Test. Validate. Record.

Once you have an understanding of your estate and an understanding of what you’re going to be masking, it’s time to start getting some indicative masked values mapped across (names for names, dates for date of birth etc.) but this is not a big bang approach.

Just like software development, this process is best done iteratively. If you try to write one all-encompassing masking script or rule set in a single pass, you will gain 2 things:

  1. A thing only you understand
  2. A thing only you can maintain

Start with a copy of the database. Restore it to somewhere locked down that only a very select few have access to. Build the first part of your script to reflect the first of the ‘stories’ you identified in step 2, i.e. “Mask the names to be something believable”. Does this work?

No, why not?
Yes, perfect.
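As a concrete illustration of that first ‘story’, a rough sketch might be as small as the below – the instance, table, column and replacement names are entirely hypothetical, and a real rule set would draw on much richer data sets:

# Replace every first name with one picked at random from a small, believable list
Import-Module SqlServer

Invoke-Sqlcmd -ServerInstance "WIN2019" -Database "LockedDownCopy" -Query @"
UPDATE c
SET    c.first_name = CHOOSE(ABS(CHECKSUM(NEWID())) % 5 + 1,
                             'Alice', 'Bilal', 'Carla', 'Dmitri', 'Erin')
FROM   dbo.Contacts AS c;
"@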

You should even ask for feedback on the masked data from developers – they will be working with it, so it makes sense to ensure you collaborate with them on what they would like to see.

So it’s time to record what you’ve done, just as anyone would do with any other software process. If only there was some way for us to do this… oh wait!

PUT IT IN SOURCE CONTROL.

It doesn’t matter if you’ve written a script or created part of a Data Masker masking set, it should now be put into source control, properly commented, and stored so that it is easily accessed, updated and implemented. Grant Fritchey actually wrote a great article about doing this here!

Now build up those rules, tackle your stories and gradually try to create enough to mask each of the ‘hit points’ and then fan them out to the secondary tables. Keep this effort minimal but high impact, and you’ll be able to try it in earnest. Once you have tackled your high-risk targets, you can then add stories to cover specific test cases or oddities required in Dev and Test.

The point is to start somewhere. Like I said, getting started is the hard part, but once you know what you have, where it is and how it’s structured, actually masking the data is a breeze.

“How long does data masking take?” Part 1

“Exploring the unknown requires tolerating uncertainty.”
Brian Greene

Another in the data masking series and I thought I’d go with an easy one to answer. It’s right up there alongside the other easy questions we all ponder like:

“What is the meaning of life?”
“How do I build a fusion reactor using common household objects?”

and the classic “Can Vegans eat Animal Crackers™?”

The answer to any of the above questions is, of course, it depends.

If you’re content to ‘live life to the fullest, travel and be happy’, you happen to have spare protium and boron-11 just around the house, and you’re happy to bypass the ethical conundrum that although the crackers look like animals, they do not actually contain any animal products, then the above are theoretically answerable. But data masking… whooo, that’s a doozy!

There’s no easy way to benchmark this process, honestly. It is something that will be subject to many things you may wish to think about prior to the deed:

  • How many columns are on the tables you want to mask?
  • How many rows are in the tables you want to mask? (Which are the biggest tables?)
  • How many indexes are on the tables? Will these get in the way of a static masking process?
  • Is data duplicated across multiple tables and does it need to be synchronized to maintain referential integrity?
  • How powerful is the machine you’re running the process on?
  • WHERE is the Database you’re running this process on? (Same machine, Network, PaaS…)
  • Will anyone be trying to connect to, and run queries against, the Database being masked?
  • Do you need to mask across Databases for consistency?

All of these things will affect the process, both in terms of how you configure it and how long it takes.

I will endeavor, though, to give SOME idea. At the very least it’s a good starting point, and hopefully if you decide to run any testing of your own you will see similar, or even better, performance. For this test I will use Data Masker for SQL Server, but in future blogs I’m hoping to test out a few other methods as well, so stay tuned!

For this test, I will be using the following setup – note this isn’t what I would expect a real setup to be; I would expect the staging server being used for masking (and potentially image creation in SQL Clone) to be a teensy bit more powerful!:

  • Microsoft SQL Server Developer 2016 (64-bit)
  • Microsoft Windows NT 6.3 Server hosted in AWS EC2 (c5.2xlarge)
  • 4 Processors (vCPU), 16 GB Memory
  • Redgate Data Masker version 6.3.20.5034 installed
  • DMDatabase (tables below in Data Masker) – this is my demonstration database in simple recovery mode with:
    – 1,000,000 rows on the DM_Customer table
    – 50,000 rows on the DM_Customer_Notes table
Data Masker Tables Tab showing row counts for large tables

For the purposes of this test I will mask: firstname, lastname, company name, street address, region, country, telephone number and zip code on the DM_Customer table and the notes field on DM_Customer_Notes (a free text field). Once I have completed this, I will then create an email address on DM_Customer using the masked first and last name values, and then finally I will synchronize the masked names across to the DM_Customer_Notes table, where they are duplicated.

The output masking set is as below. It uses a substitution rule for the columns on DM_Customer, a row-internal synchronization rule for the email address, a search-replace rule to randomly replace all of the characters in the free text field and then finally a table to table synchronization rule to copy the masks across to DM_Customer_Notes – for the rules I have enabled Bulk Substitution and set the commit frequency to 100,000 to speed up building and committing the statements (hence Simple Recovery mode):

When run, this process takes… well, actually I stopped it. It’s paaaaaaainfully slow. This is probably because I’ve not utilized Data Masker’s worker threads to their best and I’m reliant on a single connection to carry out the process – we’ll be here for a while!

So, I’m going to make some performance improvements! I’m going to split the substitution and row-internal rules into ‘Split Range Manager’ rules (video on how to do this here) so I can utilize concurrent connections to break this big table up – this is what it looks like now (rules expanded):

…and guess what?

Masking stats report

Total run time to mask and synchronize all of these rows was 1 minute 26 seconds. I would say that’s pretty good, and the results set looks great too!

Results set from masking: Contacts

So in conclusion – how long does data masking take? Well, it still depends.

I didn’t have lots of non-clustered indexes over these tables that I had to take care of first, I didn’t have very complex operations being performed as command rules or table to table rules matching on free-text fields, but I did have an integer field I was able to split the range across, so things went my way… this time.

But it can go fast and you can make improvements, as shown above.

If you have any questions on how to improve the performance of your Data Masker masking sets feel free to let me know! Tweet me or get in touch!

Happy masking!

Oh, and Happy Holidays / Merry Christmas / Happy Tuesday! 🙂

I got to watch a team work fast and it was awesome!

“Unless someone like you cares a whole awful lot, nothing is going to get better. It’s not.”
Dr Seuss

I care a great deal about anything I work on, as I mentioned in my previous post here (ironically about not working for once), so it goes without saying that I am super invested in any customer I help to get up and running with any of the Redgate solutions, and none more so than Data Masker.

Ever since it became part of the Redgate family, Data Masker has been an integral part of my workday – there aren’t many days where I don’t interact with the tool or its concepts in some way, and when things go wrong – the tool breaks, something doesn’t work like it should – well, not only is it less than ideal for me (showing me up) but it’s not delivering value to the customer.

Now that. I hate with a passion.

However, I’m lucky enough that once that happens, that is not the end of it – we don’t pack everything up and say “well… sorry all, that’s your lot.” No – I get to speak directly to our fantastic support team and the equally fabulous and helpful development team behind the tool, and guess what? They care too. Immensely.

I can think of 2 key examples of this team working in the most incredible way – you wouldn’t even believe it (well, maybe you would) – but it goes to show you what is possible, especially when you break down the silos in your organisation. I never became just “a ticket from a sales engineer”, and this is how they helped me fix 2 problems:

1 – UTF-8 encoding of strings for substitution rules

I was working closely with one of the Business Development folk (little side nod there to Kendra for saying folk so often I’ve started saying it) in Redgate’s sales team, who was working with a potential customer in a country where Arabic is the primary language. As such, you would expect them to want to use Data Masker to mask with Arabic names like اَمير‎ (“Amir” in English) in their data sets, instead of something like “Frank”, which just doesn’t have the same ring to it.

It turns out that in the port across of Data Masker from its older v5.5 to the swanky new v6.0 (yes, this was a little while back) the ability to change the encoding of strings from user defined data sets had been broken, which meant that the values from Data Masker weren’t being inserted correctly in the table, rendering all of their Arabic sets useless. This was a huge blocker to their trial, which was under time constraints anyway.

I reported this to the Data Masker team on the 7th February 2019 at 12:53pm, created a support ticket for reference at 2.03pm and had personally spoken to them by 3.00pm. The lead developer, support rep, product designer and I quickly met up to discuss it and agreed that, as this was a bug – broken functionality that should exist (and which could block not just this customer but any customer requiring it for other language sets) – they would down tools and work on a fix immediately.

By 9.00am the following day a fix went out the door. Built, tested, deployed. Who did they involve in the testing? Me, initially. Then they waited on feedback from the potential customer, who also confirmed it worked after upgrading.

Wow. That’s what I call fast!

2 – Time-out tennis

More recently (think November 2019) I was working with a customer of ours who I had built up a great relationship with – they were super friendly, super responsive and all round great to work with. Unfortunately, as we were getting their masking sets set up (pun definitely intended), we started encountering time-out issues when waiting for masking stats to return.

This was a little irksome: it put a slight dent in my relationship and credibility, and it was slowing them down, as it was causing sets not to complete at all.

The problem was, though, that it wasn’t such an easy fix as Number 1 above, as it wasn’t exactly clear what was causing the timeouts and I wasn’t really sure what the best places to check were! Fortunately the “Masketeers”, as they are more commonly known around Redgate towers, did have an idea of where to look. A nominated member of the development team (and to him I will be forever grateful) almost became a little subdivision of the team – it didn’t require the team’s full might, just someone who knew even more intimately than me what was happening!

Through a bit of back and forth with the customer, experimenting with timeouts and making some tweaks, we were able to establish what was going wrong and ultimately provide a fix. This work became a new branch, which was merged into the main codebase after testing once again and released the very next day. Finally, the customer let us know it was all working again and sets were completing as they should.

Conclusion?

Sometimes you have the pleasure of working with some unsung heroes where you work, I do it on a regular basis – from the facilities and cleaning team here in the building who do the most incredible work to look after us, to the Sales management team who are constantly looking at ways to make us the best possible company to deal with – Redgate is definitely a place where people can do the best work of their lives.

But on these occasions, I got to witness something special. Cross-functional collaboration. Communication. Empathy. Passion.

And just getting the work done. When it’s about delivering value to customers, feedback, development and testing are everybody’s job, and I’m lucky enough to work with people who put that theory into practice.

Deterministic data masking – the who, who and who? (and how?)

“Security is always excessive until it’s not enough.”
– Robbie Sinclair

You may not already know this about me, but I kinda like data masking.
Scratch that, I LOVE data masking.

Increasingly, both around Redgate and in general, I seem to be getting a bit of a reputation as “the data masking guy”, and for good reason. To include yet another quote, from Joe Kaeser this time: “Data is the oil, some say the gold, of the 21st century…” – and more and more I hear stories about people leaving their oil/gold out for everyone to see, opening up the widest possible attack surface by doing things like copying backups down into non-Production environments or exposing test systems to the internet – the list goes on.

This means that people turn to all of the protective methods they have available to them: encryption (TDE, row and column level etc.), static and dynamic data masking, access control… and many combinations of them.

One of the big points I always have to cover when it comes to static data masking though, is something called “deterministic masking”, so let’s start with 2 definitions of my own to make sure we’re on the same page:

Static data masking is the process of de-identifying sensitive data-at-rest within the tables of your Database. It is typically used to provide realistic, Production-like data into non-Production environments like Dev and Test, and even sets that are given to 3rd parties. This relies on retaining non-sensitive business specific fields within rows and taking anything considered PII (Personally Identifiable Information) or PHI (Protected Health Information) and either scrambling or replacing it with similar but ultimately false data.

Deterministic data masking is the process of masking data in a repeatable way, such that a given source value is always masked to the same output value in any and all future runs, and a new mapping is created for any value which has not been masked previously. For example, if you were to mask “Chris Unwin” to “Brad Pitt”, it should appear as “Brad Pitt” not only in our (for example) dbo.Contacts table but also in all associated tables (regardless of PK/FK relationships at the DB level), and every single run should produce the same output. This is useful for building up familiarity with the data and utilizing it for future test runs.

Now. I should caveat this blog post (you’ll find I’m always caveating my posts) with the fact that deterministic masking does have its benefits, but the very idea that one thing should always become another is, in my eyes, inherently less secure than something that always gets de-identified to a different value. As such, I will always recommend that, where possible, masking runs should produce differing values. Deterministic masking is also compute heavy: instead of simply substituting in random values, it has to check up front whether the value has been masked before and then replace it with the corresponding mapped value – another potential downside if speed is a key driver in the process you’re trying to put in place.

Most masking tools either support deterministic masking directly (*cough* dbatools.io *cough*) or require a little bit of configuration to get started, but it is a workflow that should be catered for, for those who need it. So here is a quick getting started guide for deterministic masking, if you’re writing your own scripts or (as I will in this example) you’re using Data Masker for SQL Server from Redgate.

Step 0 – Figure out where you’re going to do the masking

There are lots of different ways to move data around and get things masked before exposing it to development and testing teams (or handing it to 3rd parties). In this example, I’m going to assume that the only thing I have available to me is a backup and restore process for moving data around, nothing fancy like SQL Clone to help.

So for this I will assume I have a staging instance somewhere that I can use as part of this process – let’s call it WIN2019 – and the process I’m going to follow (sketched in PowerShell just after this list) is:

  • Restore a full .bak file to WIN2019
  • Carry out the data masking process on this instance
  • Backup the masked DB
  • Move the new .bak into lower environments (restore / make available to Devs)
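A minimal sketch of that flow, assuming the SqlServer PowerShell module on WIN2019 (the paths, file names and Data Masker install location are all placeholders):

Import-Module SqlServer

# 1. Restore the full PROD backup onto the staging instance
Restore-SqlDatabase -ServerInstance "WIN2019" -Database "DMDatabase" `
    -BackupFile "C:\Backups\DMDatabase_PROD.bak" -ReplaceDatabase

# 2. Run the masking set (including the deterministic rules below) via Data Masker's command line
& "C:\Program Files\Red Gate\Data Masker for SQL Server 7\DataMaskerCmdLine.exe" "PARFILE=C:\MaskingSets\DMDatabase_parfile.txt"

# 3. Back up the masked database, ready to restore into the lower environments
Backup-SqlDatabase -ServerInstance "WIN2019" -Database "DMDatabase" `
    -BackupFile "C:\Backups\DMDatabase_MASKED.bak"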

Step 1 – Map it for multiple future runs

The first problem you have to contend with is needing to maintain a record of how we want to mask things. If we always want to mask the credit card number 3422-567157-24797 to be 3488-904546-46471 then we need to have somewhere this mapping is stored. The question to ask ourselves here, though, is WHAT needs to be recorded. There is a huge difference between:

CreditCardBefore        CreditCardAfter
3422-567157-24797       3488-904546-46471

And

CustomerID              CreditCardAfter
1000000001              3488-904546-46471

The latter is obviously far preferable because it does not contain any sensitive PII – it is purely the masked value and a non-sensitive CustomerID which only really makes sense within the company or is a system identifier.

So we should get a mapping location set up on WIN2019. I don’t want to make my tables too wide, so I’m going to keep this fairly small and atomic – we’ll create a database on the server with a mapping table for the credit cards:

CreditCardMap table in SSMS - CustomerID as INT (PK) and CreditCard as nvarchar(60)
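If you prefer scripting it to clicking, a quick sketch of creating that database and table (matching the CustomerID INT primary key and CreditCard NVARCHAR(60) columns shown above) could be:

Import-Module SqlServer

Invoke-Sqlcmd -ServerInstance "WIN2019" -Query @"
CREATE DATABASE MaskingMapper;
GO
USE MaskingMapper;
GO
CREATE TABLE dbo.CreditCardMap
(
    CustomerID INT          NOT NULL PRIMARY KEY,
    CreditCard NVARCHAR(60) NULL
);
"@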

This is going to be the basis for our repeated masking. The reason for having this as a separate DB/Table though?

1) The mapping should persist – if it exists in the same DB then we will just overwrite it every time, rendering the mapping useless.
2) Devs/Testers don’t need the mapping – just the end result.

Step 2 – Set up the masking to cover the tables, regardless of whether there is a mapped value

One of the most important phrases in the GDPR is “Data Protection by Design and Default“, and it’s one of my favorites. In this context I am going to interpret this in a very specific way, and that is: “we must mask everything, before trying to map it back to a value that exists, just in case the link to the MaskingMapper DB were to fail for any reason”.

I first restore a copy of the database I’m going to mask (the DMDatabase) and then set up a Data Masker substitution rule to process the DM_Customer table, de-identifying the credit card numbers:

Data Masker substitution rule to mask Credit Card Numbers with invalid AMEX CC numbers, also (fake) customer credit card Nos. displayed in SSMS

Step 3 – Copy the distinct values across into the mapping table

This step is going to be as simple as writing a single tSQL statement to copy the values across – in Data Masker I will wrap this into a Command Rule and make it dependent on the previous substitution rule:

INSERT INTO MaskingMapper.dbo.CreditCardMap
(
    CustomerID
  , CreditCard
)
SELECT DISTINCT
       customer_id
     , customer_credit_card_number
FROM dbo.DM_CUSTOMER
WHERE (
          customer_credit_card_number <> ''
          AND customer_credit_card_number IS NOT NULL
          AND customer_id NOT IN (SELECT customer_id FROM MaskingMapper.dbo.CreditCardMap)
      );

Step 4 – Sync everything back together

Finally – we need to bring any information back from the table if it had values written to it in previous runs. In tSQL we could write an UPDATE with an appropriate WHERE clause but I’m going to use an additional controller and Table-to-Table Sync rule in Data Masker to handle this:

Rules in Data Masker - Substitution to mask data, Command rule to update the mapping table and a Table to Table rule to sync back into the table
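For reference, if you did want to do this bit in tSQL rather than with a Table-to-Table Sync rule, a rough sketch of the equivalent statement (using the same column names as the command rule above) would be:

Invoke-Sqlcmd -ServerInstance "WIN2019" -Database "DMDatabase" -Query @"
UPDATE c
SET    c.customer_credit_card_number = m.CreditCard
FROM   dbo.DM_CUSTOMER AS c
       INNER JOIN MaskingMapper.dbo.CreditCardMap AS m
               ON m.CustomerID = c.customer_id;
"@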

Result

If we now run this we will have achieved deterministic masking, because we have the following before and afters – first for the DM_Customer table:

DM_Customer Credit cards before
DM_Customer credit cards after masking

and for the CreditCardMap table:

Mapping table prior to masking
mapping table after masking

The mapping table now has 77 rows, and if we repeat the masking step by step without changing anything we can see that the credit card numbers change in the first instance, but then synchronize back to the values that should persist. The images below represent just running the first two steps in isolation (i.e. masking everything regardless – left) and then the synchronization job restoring the predetermined values (right); the mapping table still has 77 rows.

Now if in the next run one of the NULL/blank fields has a real credit card number, or we add any additional customer IDs (i.e. with a more recent backup with fresh data) they can be masked, accounted for and persisted between each run.

Conclusion

Deterministic masking is hard, but it is possible. You can use a number of methods to achieve it, such as the above, but the first question you need to ask yourself (after “do I feel lucky”) is:

“Do I NEED deterministically masked data, or is it a nice to have?”

9 times out of 10, I’m pretty sure the answer will be that it is not essential, and therefore you should focus on making sure the masking of the data is random, static and fast. Adding compute to this process will only slow it down and, at the end of the day, we just need to make sure our customers’ data is protected.