Provisioning local Dev copies from AWS RDS (SQL Server)

“It’s still magic even if you know how it’s done.”
Terry Pratchett

For a long time now I have worked heavily on Redgate’s SQL Provision software. I was involved from the very beginning, when SQL Clone was but a few months old and just finding its little sheepish (teehee) legs in the world, and before Redgate Data Masker was “a thing”.

The one thing I’ve never comprehensively been able to help a customer with, though, is PaaS. Platform as a Service has plagued my time with this wonderful software, because you simply cannot take an Image (one of SQL Clone’s VHDX files) directly from an Azure SQL Database or an Amazon AWS RDS instance.

But then in January 2019 I did some research and wrote this article on how you could achieve provisioning from Azure via the BACPAC file export method. This was great: several customers decided this method was good enough for them and adopted it, and some even completely PowerShell-ed out the process (there are links to something similar in my GitHub, which I used for my PASS Summit 2019 demo). However, it never solved my AWS problem.

I’ll be the first to admit, I didn’t even try. AWS for me was “here be dragons” and I was a complete n00b; I didn’t even know what the dashboard would look like! However, in early December 2019 I was on a call with a customer who mentioned that they would like to provision directly from RDS SQL Server, without any “additional hops” like the BACPAC Azure method. On the same day, Kendra Little (sorry Kendra, you seem to be the hero of most of my blogs!) shared some insight that it was possible, with AWS, to output .bak files directly to an S3 bucket. That got me thinking: if we can get access to a .bak file directly from S3, surely we could provision it all the way to dev with little-to-no manual involvement in the process?

My reaction to this news was that it was the universe telling me to get off my backside and do something about it, so with renewed determination, and looking a little bit like this:

(“Ready, Let’s Go” GIF by Leroy Patterson)

I set off into the world of AWS.

1 – Setup

Now naturally, I am not a company. Shock. So I don’t have any pre-existing infrastructure available in AWS to tinker with, and that was the first challenge. “Can I use anything in AWS for free?” The answer? Actually, yes! AWS has a free tier for people like myself who are reeeeeally stingy curious, which at the very least would let me better understand how to interact with the various moving parts for this.

First step: I’m going to need a database in RDS, so I went to my trusty DMDatabase (scripts here for y’all), which I use for EVERYTHING: on-premise, in Azure, EV-ERY-THING.

In AWS I went to RDS and set up a SQL Server Express instance called dmdatabaseprod (which fortunately kept it on the free tier). Luckily, AWS provides an easy getting-started tutorial for this, which you can find here – why re-invent the wheel? After creating the DB I had some major issues actually connecting to it in SQL Server Management Studio; I thought I had allowed all the correct ports for traffic, put it in the right security groups, blah blah blah… and guess what it was?

Public accessibility. Was set. To “No“. *cough* Well, that problem was sorted quickly, so it was on to the next challenge.

2 – Backing up to an S3 bucket

I can take no credit for this step whatsoever because it was the wonderful Josh Burns Tech who saved me. He created a video showing exactly what I wanted to do and you can see this, with full instructions and scripts here: https://joshburnstech.com/2019/06/aws-rds-sql-server-database-restore-and-backup-using-s3/

After following Josh’s advice and walking through his steps – getting a new S3 bucket set up and configured, and creating a new backup of my DMDatabase – I was a good step of the way there! As you can see, my .bak was sitting nicely in my S3 bucket – marvelous!
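For reference, the native backup in RDS boils down to a couple of stored procedure calls in msdb (the bucket ARN below is a placeholder – yours will point at your own bucket and file name, and you’ll need the SQLSERVER_BACKUP_RESTORE option group Josh walks through):

-- Kick off a native backup of the DMDatabase straight to S3
EXEC msdb.dbo.rds_backup_database
     @source_db_name = 'DMDatabase',
     @s3_arn_to_backup_to = 'arn:aws:s3:::my-dmdatabase-bucket/DMDatabase.bak',
     @overwrite_s3_backup_file = 1;

-- Check on the backup task until it reports SUCCESS
EXEC msdb.dbo.rds_task_status @db_name = 'DMDatabase';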

3 – Making the S3 bucket visible to SQL Server

This was the tricky bit. My approach to solving this problem was: “I need SQL Server to be able to see the .bak file to be able to create an image and clones from it. So, logically, I need it to be mapped as a network drive of some kind?” Simple, no? It turns out that this was the best approach from what I found online, but there were a number of ways of tackling it.

I started out using this article from Built With Cloud, which was super informative and helpful. I managed to get rClone running and the S3 bucket showing as a local drive, which was exactly what I wanted:

But I ran into a problem – SQL Server could not access the mapped drive.

So is there another way? I found a bunch of resources online for CloudBerry, TnT Drive and MountainDuck but, like I mentioned, I’m on a very limited budget ($0), so naturally… I put this on Twitter. I received a tonne of replies with examples and ideas, and the one idea that kept coming up time and time again was AWS Storage Gateway. I had never heard of it, nor did I have any idea of how it worked.

So. Back to Google (or in my case Ecosia – it’s a search engine that plants trees when you search with them, what’s not to love???)

To simplify it: Storage Gateway is a solution that is deployed “on-premise”, i.e. as a hardware gateway appliance or a virtual machine, and it allows you to effectively use your S3 (or other AWS cloud storage service) locally by acting as the middle-person between AWS and your on-premise systems; it also does fancy local caching, which means super low-latency network and disk performance. There are a few different types you can utilize, but for this exercise I went with “File Gateway”. From Amazon: “A File Gateway is a type of Storage Gateway used to integrate your existing on-premise application with the Amazon S3. It provides NFS (Network File System) and SMB (Server Message Block) access to data in S3 for any workloads that require working with objects.”

Sounds ideal. Time to set it up!

I have access to VMware Workstation Pro on my machine, so I downloaded the OVF template for VMware ESXi and loaded it up in VMware (the default username and password threw me a little, but it turns out they’re admin and password as standard, and you can change them as you configure):

Then it was a bit of a checkbox exercise from there:

Now I wasn’t 101% sure of exactly how best to set up my fancy new Gateway, so fortunately I found this super helpful video – funnily enough from Teach Me Cloud, as opposed to the aforementioned Built With Cloud – and although it was a little out of date, I also had one of Redgate’s finest engineers (the wonderful Nick) on hand to help out. Between the video and us (mostly Nick), we were able to get everything connected!

But I ran into the same problem. SQL Server couldn’t access the backup file.


Fortunately, after some frantic Googling we managed to find a very straightforward article that fixed all of our pain: we needed to map the drive in SQL Server itself – thanks, Ahmad! Now, yes, I did use xp_cmdshell (insert DBAReactions gif at the mere mention of it here), but this was for testing purposes anyway; I’m sure there are other ways around this problem!
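In case it’s useful, here’s roughly what that looks like – the share path, drive letter and credentials below are placeholders for your own Storage Gateway file share (and do remember to disable xp_cmdshell again afterwards):

-- Enable xp_cmdshell (testing only – turn it back off when you're done!)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

-- Map the Storage Gateway SMB share so the SQL Server service can see it
EXEC xp_cmdshell 'net use Z: \\my-storage-gateway\dmdatabase-share /user:sgw-user MyPassword';

-- Sanity check: can SQL Server see the .bak file now?
EXEC xp_cmdshell 'dir Z:\';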

…and guess what? It worked. Huzzah!

If you can substitute my poorly named image “blah” and assume instead it says “HeyLookImAnImageFromAnRDSBackupFileArentIClever“, this means I can now schedule my PowerShell process to create new images at the beginning of every week to refresh my DMDatabase Clone environment, no manual steps needed!

Conclusion

Whilst there are a number of steps involved, you can easily take advantage of some of the fantastic features offered by AWS, like Storage Gateway. Even if your database is hosted in RDS, you can fully provision copies back into IaaS (Infrastructure as a Service) or on-premise workflows to keep your pre-production copies up to date and useful in development!

Just remember to mask it too!

P.S. I’m sure you could probably find some clever way of using the free rClone method I mentioned and making it readable by SQL Server; I haven’t figured it out yet, but I’ll blog when I do!

DBAle hits 5000 listens

“We see our customers as invited guests to a party, and we are the hosts. It’s our job every day to make every important aspect of the customer experience a little bit better.”
Jeff Bezos

The date is June 6th 2018. Our producer Louise has just set up a Slack channel called #dbale with the topic: “Hi, thought i’d just set this up as it might be the easiest way to chat about the podcast idea at this early stage.” – purely off the back of a conversation we had after a practice run for this webinar, where I suggested to Chris and Lou that I loved working with the two of them and that we should do some kind of podcast together.

We produced our first DBAle episode and it went live around the 12th of July 2018. The production quality was very low – like, abysmal – and we didn’t really know what we were doing, but it was up. It had its own landing page, we were on Spotify and Google Play Music. It was “a thing”.

Little did we know, though, it started something. I will never try to quantify that something, because it was a weird something – something that could only be produced by two Chrises getting in front of a microphone, drinking a beer and talking about data.

There are a hundred different podcasts out there that are super good if you’re looking to learn more about data, like SQL Data Partners or Dear SQL DBA, and we wanted to be a bit different. So fortunately people are able to join us on our own journey of discovery, and have a laugh on the way!

Fortunately, as a podcast backed by Redgate we get the added benefit of having access to some superstars as well, like episode 17 with the incredible Steve Jones, or the live episode with the equally incredible Chris Yates. I have to admit I’m very excited about the next few episodes for this very reason! Look out for February’s episode as it’s sure to be wonderful, and hopefully we’ll even persuade none other than Kendra Little herself to join us soon! (Hint, Kendra, hint!!)

I guess this blog post wasn’t to give insight into how to sustain a podcast, keep motivated, or even what the formula for success is, as I’m not sure we know any of these (although one tip would be: invest in a good microphone – we found that out the hard way). But there is one thing I do know: we keep doing what we love, and at the time of writing we are at 5,300 unique downloads, so I wanted to take the opportunity to say thank you.

Thank you to everyone who has supported us, thank you to everyone who has been on or helped produce the show, and a massive thank you to everyone who has listened.

Here’s to the next 5000 listens, a year of fun, beer and friendship… oh, and data. Below is a picture of the crate of beer and scratch-off bucket list Redgate (thanks Louise!) got us for hitting the 5,000 mark:

P.S. Everyone asks me what my favorite episode of DBAle has been so far, so if you’re interested then check out these two (as I couldn’t pick!):

Servers that go bump in the night(time): https://www.red-gate.com/hub/events/online-events/dbale-05-servers-go-bump-nighttime

Monitor the state of the 8%: https://www.red-gate.com/hub/events/online-events/dbale-09-monitor-the-state-of-the-8

Using things weirdly – Part 1: The SQL Clone Config Database

“If the challenge exists, so must the solution.”
Rona Mlnarik

I like to use things in the way they weren’t intended. It leads to behaviors you weren’t expecting… and that’s pretty fun.

Often though, when you work for a software company, people want your solutions to do things they just were not designed to do, or have no capability to do at that exact point in time. The answer that will often come back is a resounding “I’m sorry, that’s not available at this time.” – that is, of course, if you’re lucky enough to get an answer at all.

It’s no secret though, none of us who answer those questions like saying that. There is nothing worse than not being able to help people with what they’re trying to do.

Now don’t get me wrong – some people’s questions should be challenged. You cannot immediately assume that the person you’re talking to has all of the available variables, or has thought of any alternative means of achieving their end goal – which is why it is always important to ask ‘the why behind the why’. Every question has at least one immediate answer:

“I am going to buy a car.”
“Why?”
“Because I want to drive to work.”

This seems like a perfectly reasonable response. But what if we haven’t thought about every available option here? I can take this at face value and sell you a car, but I haven’t asked where you live and work; perhaps you live within the congestion zone in London, where driving is impractical. Do you actually have off-street parking available? Will you need the car for other trips or just for work? With a little additional questioning and thinking we might establish that you live right near a London Underground station and it would be more cost-effective, and slightly faster, to commute by Tube instead.

This is not just a sales tactic, but one that should be used to better understand every requirement, personal or professional – there is no substitute for good understanding.

But there are times when a perfectly reasonable request is made with good justification and unfortunately you’re still unable to help. Sometimes, though, there are still ways to help, even if the request doesn’t natively fall within the remit of the solution you’re working with. I had one such question recently:

“I would like to receive a list of Clones per image that have been created, which are older than 2 weeks and who created them, so I know which developers to nudge to refresh their Cloned environments and so we can create new images.”

Again, this is a perfectly reasonable request, and the obvious answer was, well, “check the dashboard” – there’s a full list there with all of that information. When you have a good many developers all using SQL Clone, though, it can get a little difficult to scroll through everything and identify the culprits. SQL Clone has no native alerting or reporting (as of writing), so it was assumed that was the best we could do.

Like I said though, I like to use things as they weren’t intended. Tee-hee.

When you install SQL Clone it creates a back-end SQLClone_Config database on an instance of your choosing (normally where you’ve installed the solution) for recording operations carried out within the software. This database is structured so logically that anyone can understand it and leverage it to meet their requirements.

By thinking a little more about their requirement we realized that it would actually be possible to leverage the database by running a very small snippet of T-SQL which could show us:

  • The name of the Clone
  • Who created it
  • Which Image it belongs to
  • When it was created and,
  • How old it is

This was as straightforward as:

SELECT C.Name                                AS CloneName
     , C.CreatedBy                           AS CloneOwner
     , I.FriendlyName                        AS BelongsToImage
     , O.Timestamp                           AS CreatedOn
     , DATEDIFF(DAY, O.Timestamp, GETDATE()) AS AgeOfCloneInDays
FROM dbo.Clones              C
    LEFT JOIN dbo.Images     I
        ON I.Id = C.ParentImageId
    LEFT JOIN dbo.Operations O
        ON I.Id = O.ImageId
WHERE O.Type = 1
      AND O.Timestamp < DATEADD(DAY, -14, GETDATE())
      AND C.DeletedOn IS NULL;

This gave us exactly what we needed:

It was just that simple. But we didn’t want to leave it there! So we set up an email, sent from a SQL Server Agent job with the output of this query each Monday morning, to the heads of each development team so they could raise it with the relevant members just after stand-up. Now, I had never done this before, so of course who would come to my rescue but the wonderful, knowledgeable (and just all-round awesome-sauce-person) Jes Schultz, with this post from 2014!
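The emailing step is just Database Mail wrapped in an Agent job – something along these lines, where the profile name and recipients are placeholders for your own setup:

-- Email the stale-clone report (schedule this in a SQL Server Agent job
-- that runs every Monday morning)
EXEC msdb.dbo.sp_send_dbmail
     @profile_name = 'CloneAlerts',                 -- your Database Mail profile
     @recipients = 'dev-team-leads@example.com',
     @subject = 'SQL Clone: clones older than 14 days',
     @body = 'These clones are over two weeks old - time for a refresh:',
     @query = N'SELECT C.Name, C.CreatedBy, I.FriendlyName, O.Timestamp
                FROM SQLClone_Config.dbo.Clones C
                    LEFT JOIN SQLClone_Config.dbo.Images I ON I.Id = C.ParentImageId
                    LEFT JOIN SQLClone_Config.dbo.Operations O ON I.Id = O.ImageId
                WHERE O.Type = 1
                      AND O.Timestamp < DATEADD(DAY, -14, GETDATE())
                      AND C.DeletedOn IS NULL;';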

And with that – we used something that isn’t exactly supposed to be used in this way, but got something valuable up and running for their teams. It’s a simple answer, but it was quick, and it worked.

What’s a quick win you’ve had recently?

Not letting stress take over, give yourself a moment

“The time to relax is when you don’t have time for it.”
Sydney J. Harris

I work in a bustling, high-pressure environment. As a Sales Engineer I could be called on to do, well, pretty much anything at the last minute – and particularly at the end of quarters, when there are big pushes to try and hit various targets, you never know quite what you’ll be doing day to day. Will I be on site with a customer this week? Will I be doing 6 product demonstrations in a single day, or one long 3-hour remote troubleshooting session?

It could be anything.

It makes my job exciting and I love the prospect of having to be on my metaphorical toes, but for me and those around me it can be exhausting. Yes, it’s exciting, but never getting what I like to call “work down time” – i.e. time you can use to learn something new, or tinker with a problem you’ve been thinking about for a while – can become detrimental to your mental state as the pressure starts to build.

This problem isn’t restricted to a sales environment – it can occur anywhere there is a high pressure workload, deadlines or unpredictability.

Stress is a hormonal response from the body. Adrenaline and cortisol (and others) force your body into a “ready” state where you’re constantly prepared to fight or flee – a state that should be reserved for occasions when we require it. Being in a high-pressure, high-stress job where you constantly feel worn out, overworked and anxious about what the day holds in store is not only problematic for your workload, as you try in vain to keep up with everything (and potentially let standards slip), but can also have big ramifications for your health, including (but not limited to):

  • Less and/or worse quality sleep
  • High blood pressure
  • Heart problems
  • Skin irritation
  • Anxiety
  • Headaches / Migraines

It’s obvious when you’re giving in to stress, because you start making excuses. When we’re most stressed, that’s when we find ourselves identifying ways to put off tackling the thing that is causing us the issue(s), or normalizing and rationalizing the problem. We’ve all been students at one point, putting off working towards deadlines (“well, I can pretty much get it done next week, I’m sure”), and even now as adults we start figuring out how much time we can sacrifice around it (“well, if I come in at 5am and just crack on with it, because no one else will be in the office…“). This is just another way for stressful activities to play on your mind and eat into our personal time, and even our sleep. But that stuff is muy importante, and actually, you don’t have to put yourself through that; many people – prison guards, astronauts, doctors – consider stress to be a normal part of the job they lead. The key is to use stress to your advantage: be focused on the job at hand, but don’t let it overwhelm you, whatever it is you do.

Stress has always been something I’ve had difficulties overcoming and it wasn’t until 2019 when my wife and I ran an Action For Happiness “Exploring What Matters” course (check here for any courses running near you – they’re super cool!) that I realized I didn’t have to be a slave to stress.

There are so many coping techniques for stress, but I wanted to share just one with you today (and maybe others in the future) – a technique I discovered on that Exploring What Matters course that you can put into action right now.

It has long been shown that meditation can have incredible health benefits for those who practice it, but the common feedback I hear on it is “but I don’t have time to meditate in the middle of the day! I have a job to do!” – whilst this may be true, meditation doesn’t just have to be sitting in a quiet room, cross-legged, saying “hummmmmm” whilst sniffing incense for an hour until you find inner peace.

The video below will walk you through taking just a moment in the middle of your day to re-focus, to help you deal with stress. Sometimes we carry stress with us from call to call, or meeting to meeting, and all it can take is dealing with that build-up to prevent it from affecting us and our work. I loved the course because it made me look at stress for what it was – not a big ball of mess that I had to carry everywhere with me and could do nothing about, but rather something I could choose not to feel if I didn’t want to.

I hope this video helps you as much as it’s helped me.

One thing I will say in closing: if you find stress is a big part of your daily life and it makes you agitated, anxious and weary, meditation might not be enough to help you get through. Stress can be like a big heavy ball you constantly feel hanging from your neck, pulling you down and restricting your airways. However, things can change, and you can change them. Speak to your boss, your friends and family, even a therapist about what is stressing you out; they may hold the key to helping you unlock the root of the stress, and therein lies the way to releasing it.

You are not alone, ever.

"Where do I even begin with data masking?" Getting started in 3 steps.

“I like to begin where winds shake the first branch.”
Odysseus Elytis

I’ve already covered data masking a little in a few of my first posts on this blog, and for many it will not be new. Performance tweaks and determinism are just par for the course in masking data – approaches to solving a well-defined problem. But for many, the challenge is not how they mask their data. You’re free to write scripts, PowerShell, or use a COTS software solution like Data Masker, and the end result is going to be a set of data which you can securely work on, knowing that if you do suffer a breach of that data, it will be useless to those who get their hands on it.

The biggest problem here though, is getting started.

That phrase in itself suggests what kind of problem data masking is – most things in this life are easy to at least get started on, becoming more and more complex the deeper you get into them. One can write a Hello World program quite simply in any major programming language, but to start creating rich, functional graphical software you’re just going to have to learn, write and test (repeat infinitely).

But data masking is like playing ‘capture the flag’ without an opposing team. Capturing the flag is a straightforward enough objective, but if you don’t know where the flag is, how big it is, what it looks like, how many flags there are, etc., then your job is going to be impossible if you just jump straight into it. Like all things in this life, data masking takes careful thought and planning to understand the scale you’re dealing with – and you need to make sure you don’t narrow your scope too soon by focusing on one system or set of data.

So here are the 3 broad steps I would recommend when starting to protect data in non-production environments:

1 – Catalog your structured data

I cannot emphasize this point enough. Before you do anything, catalog the heck out of your estate. Use anything you like – and whilst I’d recommend approaches such as this or this, even an Excel sheet is a start. You must know what you hold, everywhere, before you can begin to make decisions about your data. Perhaps the goal was to mask the data in pre-production copies of your CRM system? Well, data doesn’t just exist there in isolation… or maybe it does! You don’t know until you have a record of everything: every column, every table, every database, every server. There should be a record, tagged with a reasonable value to indicate, at the very least, the following 4 things:

  1. What system it is used in
  2. Who is in charge of the stewardship of it (i.e. who should be keeping it up to date / ensuring compliance)
  3. How sensitive it is
  4. What kind of data it is

Obviously other values can be specified, like your treatment intent (masking, encryption, both, etc.) or your retention policy for specific fields – even something like identifying how the data moves in, out, and around your estate (data lineage) – but at the very least it needs to highlight where the hot-spots of sensitive data exist and what is most at risk. From this you can derive the insight needed to start masking data.
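If you want to keep some of that record close to the data itself, SQL Server 2019 (and Azure SQL Database) can tag columns natively. A minimal sketch, where dbo.Contacts.Email is a hypothetical column – a proper catalog tool still sits above this, but it’s a start:

-- Tag a column with its information type and sensitivity
ADD SENSITIVITY CLASSIFICATION TO dbo.Contacts.Email
WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Contact Info', RANK = MEDIUM);

-- Review everything tagged so far across the database
SELECT SCHEMA_NAME(O.schema_id) AS SchemaName
     , O.name                   AS TableName
     , C.name                   AS ColumnName
     , SC.label
     , SC.information_type
FROM sys.sensitivity_classifications SC
    JOIN sys.objects O
        ON O.object_id = SC.major_id
    JOIN sys.columns C
        ON C.object_id = SC.major_id
           AND C.column_id = SC.minor_id;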

Notice that this step comes before any specific masking steps – that is because we should have a company-wide resource to reference our structured data regardless.

2 – Scope your attack, identify the ‘hit points’

Once you have a record of everything, you can begin with the systems that are going to be the most at risk and identify which databases you should start with. As part of the new project you’re working on, you may need to mask multiple tables or even databases, and the size and scale of this process may currently be stopping you from proceeding.

Identify the key tables (or even databases) you will need to start with – it is so rare to come across databases where sensitive information is perfectly evenly spread across every table (I’m sure there are some, of course, but most will have concentrations of data). These ‘usual suspects’ will be the contact, account, address and payment-info tables, etc., and where we have multiple tables with duplicated information, we want to select the biggest behemoths for this task. The reason for this is that instead of trying to mask all tables equally, you’ll want to mask the biggest sources of truth and then “fan out” your masking, linking the other tables back to the masked ones. Not only is this process quicker, but it also makes our masking consistent and believable. Draw out diagrams so we can keep the actual masking to a minimum (i.e. just these hit points) and then identify where you can fan this out to. Use these almost as you would stories in development.

This approach keeps the effort of writing scripts or configuring masking rules to a minimum but keeps the impact high – preventing us from wasting time trying to mask a table which, for instance, may only hold Personally Identifiable Information by correlation with other data. Ultimately, once you have this (now hopefully smaller, more focused) list, it’ll be easier to define how you want the masked data to appear. A simple metadata query like the one below can give you a first pass at finding those hit points.
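This is only a rough sketch based on column names – extend the LIKE list to match your own naming conventions, and remember that your catalog from step 1 is the real source of truth:

-- A rough first pass at finding the 'usual suspects' by column name
SELECT TABLE_SCHEMA
     , TABLE_NAME
     , COLUMN_NAME
     , DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME LIKE '%name%'
      OR COLUMN_NAME LIKE '%email%'
      OR COLUMN_NAME LIKE '%phone%'
      OR COLUMN_NAME LIKE '%address%'
      OR COLUMN_NAME LIKE '%birth%'
ORDER BY TABLE_SCHEMA
       , TABLE_NAME
       , COLUMN_NAME;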

3 – Write. Test. Validate. Record.

Once you have an understanding of your estate and of what you’re going to be masking, it’s time to start getting some indicative masked values mapped across (names for names, dates for dates of birth, etc.) – but this is not a big-bang approach.

Just like software development, this process is best done iteratively. If you try to write one monolithic masking script or rule set, you will gain two things:

  1. A thing that only you understand
  2. A thing that only you can maintain

Start with a copy of the database. Restore it to somewhere locked down that only a very select few have access to. Build the first part of your script to reflect the first of the ‘stories’ you identified in step 2, e.g. “Mask the names to be something believable”. Does this work?

No, why not?
Yes, perfect.

You should even ask for feedback on the masked data from developers; they will be working with it, so it makes sense to collaborate with them on what they would like to see.
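As an illustration, that first ‘story’ might be as small as this – dbo.Contacts, ContactId and the four seed names are all hypothetical, and a real run would use a much larger seed list (or a tool like Data Masker):

-- Deterministic, believable first names: the same ContactId always
-- maps to the same replacement name, which keeps the data consistent
UPDATE C
SET C.FirstName = FN.Name
FROM dbo.Contacts C
    JOIN (VALUES (0, 'Alex')
               , (1, 'Sam')
               , (2, 'Jamie')
               , (3, 'Morgan')) FN (Idx, Name)
        ON FN.Idx = C.ContactId % 4;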

So it’s time to record what you’ve done, just as anyone would do with any other software process. If only there was some way for us to do this… oh wait!

PUT IT IN SOURCE CONTROL.

It doesn’t matter if you’ve written a script or created part of a Data Masker masking set, it should now be put into source control, properly commented, and stored so that it is easily accessed, updated and implemented. Grant Fritchey actually wrote a great article about doing this here!

Now build up those rules, tackle your stories, and gradually create enough to mask each of the ‘hit points’ and then fan them out to the secondary tables. Keep this effort minimal but high-impact, and you’ll be able to try it in earnest. Once you have tackled your high-risk targets, you can add stories for the specific test cases or oddities required in Dev and Test.

The point is to start somewhere. Like I said, getting started is the hard part, but once you know what you have, where it is and how it’s structured, actually masking the data is a breeze.

New Year’s Resolutions: Be more good.

“Work on your strengths, not your weaknesses. How many of your New Year’s resolutions have been about fixing a flaw?”
Jonathan Haidt

On New Year’s Eve, just after midnight, my wife asked our teenager and me what our New Year’s resolutions were, and that got me thinking. I’m famously very, very bad at this.

Like. Really bad.

My resolutions in the past have always been about fixing a flaw, which is why I like this quote and why this year I wanted to change things up a bit. My wife’s advice – naturally, because she is a superstar and very good at setting herself goals – was to set S.M.A.R.T. goals (more on those here, pretty good for meetings!) so that by the end of the year I could measure success.

But I’m going in a slightly different direction for 2020. It’s a big year and I would like to change for the better, but I should point out early that I’m happy. I’m happy with how I am, who I have in my life, and my work. Contentedness is not to be feared – it doesn’t mean that you have just accepted a fate and are going to sit back and let it happen – and it’s not to be underestimated, because it means you can more easily fight the stresses of everyday life. I realize I need to lose some weight, but that’s not really a NYR; it’s a lifestyle change that comes down purely to how I eat, drink and move.

So after some thought, here are my 3 New Year’s resolutions:

1 – Spend more quality time with those who mean the most to me

I have the most amazing family. My wife is incredible; people meet her thinking I’m nice and then they’re like “Ooh, Chris is an OGRE by comparison!”. Our teenager is wonderful; she’s smart and driven and just generally going to smash 2020, of that I’m sure. My sisters, mum & step-dad, dad & step-mum, best friend and my close group of friends enrich my life, and without them I would be nowhere near as happy and fulfilled as I am today.

I want to make sure I prioritize time with all of them when making decisions, but that’s not to say I’m going to be able to spend a lot MORE time with them necessarily – I will certainly try, but I want the time spent with them to be quality time. That means putting my phone down more often, listening better and offering help where it may be needed, making a conscious decision to work on my relationships and to think more about what I’m saying and doing to try and make their lives as wonderful as they make mine.

So this decision isn’t SMART in the sense it’s not really measurable. But it’s a conscious choice I’ve made and one I have already started to work on.

2 – Do more of what I love / be good to me

This may seem in direct conflict with (or indeed the same as) the previous resolution, but it is in fact very different. The goal of this resolution is to reclaim some of the time that was otherwise a “dead zone”. You get 168 hours in a week – take away at least 56 hours for sleep, 40 for work, 10 for commuting and 2 hours a day (14) for sundries like showering, getting ready, food prep, etc. – and what are you left with? 48 FULL hours! What do I even DO with that?

I don’t even know.

But it’s time that changed. I’m going to do more of what I love, like blogging, cooking, walking, playing video/board games, learning Romanian and spending time with friends / calling loved ones. From now on, I’m going to walk in the door from work, or wake up on weekends, and ask myself how long I have before bed and how I want to use it – and it’s going to be something that ultimately makes me smile.

Again, not super measurable, but a concerted effort to choose to be happier is going to do a world of good for me and everyone around me.

3 – Be a better human

Ok, this one is kind of an obvious one but I feel if you don’t specifically address it and call it out then it’s almost like you’re not taking accountability for it.

I’ve made concerted efforts over the past year to be vegan, primarily on the grounds of trying to lead a cruelty-free lifestyle, but the one animal I want to help this year more than ever (and we jolly well need it) is ‘people’.

As we enter 2020 we enter an era of unprecedented change. The climate is at crunch point, Brexit is causing a huge amount of uncertainty and people are struggling to be happy, healthy and loved.

So this point is a very simple one. I want to take every opportunity to:

a) Make a climate-conscious decision – greener choices, from removing single-use plastics to ethical, sustainable foods and beauty products.

b) Volunteer for and donate to more charities who are helping the planet or helping those in need and don’t necessarily have the ability to help themselves

c) Make someone smile. Simple as. Try to spread some of the happiness wherever possible.

This is perhaps the simplest resolution in terms of scope, as every action is easily definable, but it’s going to be the hardest to enact. So that’s why this isn’t a SMART goal: if I manage to do at least a little bit of the above – certainly more than I have done in previous years – then I’m moving in the right direction and adding a little positivity and sparkle to the world.

So those are my resolutions

I decided I don’t really want them to be measurable, because I don’t want to compare me as I am to my ideal version of myself. The moment I compare myself to how I wish I was, I feel regret or shame for not achieving that ideal – two things no person should feel, because we should focus on our triumphs. If you have struggled with shame in the past, I highly recommend this video from Brené Brown.

Happy New Year! So. What’re your New Year’s resolutions?