Content Security Policies

This post is part of a series on HTTPS and browser security; it is partly to spread knowledge, but mostly to allow me to learn more about the subject by putting it ‘down on paper’! Enjoy, and please comment, correct, and discuss.

In the last post of this series I wrote about HSTS; like HSTS, a Content Security Policy is delivered as an HTTP response header (or a meta tag, but more on that later) which can be used to improve a website’s security footing.

What is a Content Security Policy?

The easiest way to explain a Content Security Policy (CSP) is with the idea of a whitelist; a whitelist acts as an allowed set of values for a system. You may have heard of a blacklist before: a list of things which are not allowed. Your employer/school will almost certainly have a blacklist of websites you are not authorised to visit (naughty or dangerous ones). A whitelist is the opposite; to use the website-blocking analogy, a whitelist would contain only the websites you are allowed to access (a much more restrictive setting than a blacklist).

A CSP outlines the resources which a website may use; this whitelist prevents any unauthorised or unexpected resources from being used on a website. These resources may be something as simple as a CSS or JS file served from your server, but they could also be dangerous injected javascript or hijacked third-party resources. A CSP is the first step towards mitigating the risk of unauthorised or unexpected page resources.

Whitelisting content sources

The CSP header is a simple list of resource types and the locations that are authorised to serve them. The most basic of CSPs would be:

Content-Security-Policy: default-src 'self'

This policy states that the website should only load resources from its own origin; any resource served from a different origin (a third-party CDN, an external analytics script, etc.) would be blocked by the browser. Although effective, this policy is very restrictive; few sites use only their own resources, and serving everything yourself is no longer the most efficient mechanism (think of CDNs, Cloudflare, etc.).

The CSP definition contains a number of restriction types, or directives, which can be used to fine-tune your whitelist.


There are a number of directives which can be used to tailor your website’s whitelist; the three most obvious ones are:

  • default-src: The default directive defines the fallback list of sources; it is used in the event that you do not specify a more specific directive.
  • script-src: The script source is perhaps the most obvious directive; it defines the list of sources which can load script files (JavaScript), including the use of inline scripts and the ‘eval’ function. By default, inline scripts and ‘eval’ are disabled.
  • style-src: The style source defines which sources can load stylesheets (CSS files), including the use of inline style tags and style attributes. By default, inline styles are disabled.
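Putting those three directives together, a policy which whitelists an external CDN for scripts might look something like this (the CDN hostname is purely illustrative):

```
Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com; style-src 'self'
```

Each directive is separated by a semicolon, and each directive lists its own allowed sources.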

In addition to the above three directives there are also directives for images, fonts, connect sources, objects, frames, and several others.

CSPs do not just act as restrictions for resource sources; there are also a number of directives which can be used to upgrade or improve the security of a website such as:

  • require-sri-for: This directive causes the browser to only load scripts/styles which have Subresource Integrity attributes set (more about those in a later post).
  • upgrade-insecure-requests: As you would expect, this directive instructs the browser to upgrade any HTTP requests to HTTPS requests where possible.

For a full list of the directives, and a playground for creating a CSP header, I suggest taking a look at the CSP builder provided by report-uri. It is a fantastic resource for anyone using a CSP on their website; not only does it help you get to grips with the policy, but its services allow you to monitor how your policy is enforced.

Testing for and reporting on policy violations

Setting a policy without testing it would be a mistake; you may end up breaking your website without realising (you would have to test every page in every potential scenario to be 100% sure). Luckily, there are two easy ways to review your CSP:

  1. The browser console; all web browsers contain a console to which script errors and the like are logged. The console will list all the violations as they happen; a great way to test your policy on the fly. However, it is not a great way to ensure your website works (other users will see a broken site whilst you are debugging it).
  2. The report-uri directive allows you to specify an endpoint to which the browser will send violation reports. The report-uri directive can be used in tandem with the Report-Only header, which means that the browser will not actually enforce the policy; it will just report on the violations. It should not come as a surprise that Scott Helme’s report-uri service also hosts a reporting endpoint for you (he did well getting that domain name).
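As a concrete sketch, a policy can be trialled in report-only mode before being enforced (the reporting URL below is a placeholder for your own endpoint or reporting service):

```
Content-Security-Policy-Report-Only: default-src 'self'; report-uri https://example.com/csp-reports
```

The browser will load the page as normal, but will POST a JSON violation report to the given endpoint each time the policy would have blocked something.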

Why use a Content Security Policy?

If your website only serves static HTML and uses no external elements, then a CSP is unlikely to add much to your site. That being said, it will also be easy to implement using the default-src ‘self’ rule!

However, if your site uses external scripts over which you have no control, then you should be using a CSP; similarly, if your site allows users to enter information that is then displayed (comments, reviews, etc.), you should be utilising a CSP as an extra layer of defence against persistent XSS (cross-site scripting). If a user carefully crafts a comment to contain a piece of JavaScript, and that comment is rendered back into the page, a strongly controlled CSP will prevent the code from running (google ‘CSP nonce’ and ‘CSP hash’ for ways of dealing with inline JavaScript).
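As a sketch of the nonce approach mentioned above: the server generates a fresh random value for every response, whitelists it in the header, and stamps it on its own inline scripts; injected markup cannot know the nonce, so it is blocked. The nonce value here is obviously illustrative:

```html
<!-- Header: Content-Security-Policy: script-src 'self' 'nonce-R4nd0mV4lu3' -->
<script nonce="R4nd0mV4lu3">
  // allowed: the nonce matches the policy
</script>
<script>
  // blocked: no nonce, and 'unsafe-inline' is not whitelisted
</script>
```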

A well crafted and strict Content Security Policy, used in tandem with other best practices, will significantly reduce the risk of cross-site scripting (XSS) attacks.

Using a meta tag

I mentioned at the top of this post that a CSP can also be set via a tag; not everyone has the ability to edit their response headers. On shared hosting platforms you are rarely given the ability to directly control the web server; some platforms, such as Ghost(Pro), do not allow you any control over the server-side configuration. The use of an HTML meta tag can help you to implement a CSP without having to set the actual response header. The CSP “code” is the same as that for the header; the only limitation is that you cannot use the report-uri feature to send failure reports. You can, however, look at report-uri’s reporting JavaScript, which will perform the failure reporting for you!
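A minimal example of the meta tag form, equivalent to the basic header shown earlier:

```html
<meta http-equiv="Content-Security-Policy" content="default-src 'self'">
```

The tag should be placed as early as possible in the head of the page, as it only applies to content loaded after it has been parsed.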


In summary, if you run a website which presents dynamic content (be it a large corporate system, or a simple blogging/commenting platform) then you should also be using a Content Security Policy. It should be restrictive and ensure only expected and authorised hosts can be referenced by your site. You should also make use of the report-uri functions (either self-hosted or using Scott Helme’s report-uri service) to ensure that you do not cause errors on your website.


Passwords must be secure, you can take that to the bank

There is an old British saying “you can take that to the bank”; it means that the speaker believes something to be so truthful that the bank would accept it. It is believed to go back to when a cheque could be written on anything (it was simply a statement of intent) and it could be counterfeited with ease, but if it was definitely truthful then it could be “taken to the bank”.

Password security is critically important especially in the world of finance, and you can take that to the bank!

I went to my bank today (National Westminster Bank, or NatWest to us in the UK) to exchange some leftover Norwegian Krone. The most shocking thing I witnessed was not the exchange rate (I ended up with less than I started with after just a week); you guessed it, the most shocking thing I witnessed related to password security. At first glance, the security of my bank is rather impressive:

  • There are big bars on the doors, and the walls are about four foot thick!
  • The big thick glass between the public and the teller’s money drawer (a teller is a person who works at the counter).
  • You must verify your identity with your card and pin before you start a transaction/conversation with the teller.
  • My online account has two levels of password, a random username, and a third factor for creating new events (paying a new person, changing settings, etc.).

Unfortunately, this all falls apart in a way that the customer doesn’t normally get to see. If it had not been for a problem on the teller’s screen I would never have known about this failing and gone away happy with my “proper English money”.

Part way through my transaction the gentleman behind the counter informed me that he “had been logged out and will have to start again”; how odd, I thought. He then informed me that it “happens all the time” because they all use the same password.

I presume from the sentence “we all use the same password” that they also share the same username; else it is a hell of a coincidence. When someone else in the branch (I really hope it is one username per branch, and not one for the entire firm) logged into the system, the teller was kicked out and had to start again.

It seems that the system in question was a “separate application” and not part of the core banking applications. From the replies on Twitter to my shocked tweet (thanks to Troy Hunt’s retweet) I have been informed that it would be a major breach of banking regulations if they shared accounts for the main systems; that being said, encouraging password sharing for any system is just wrong, even more so in an industry such as the financial sector.

I would not be surprised if the account is shared simply because there is a licence fee for the third party system they are using; sadly this is something we see all too often (I have worked at firms which do this on a regular basis).

I flagged this problem up with NatWest and they quickly came back to me for details, so hopefully they will sort out the problem and password sharing will become a thing of the past. Either that or the teller in question will get told off for letting me find out (I have told them that had better not happen)!

The header image for this post was provided by @visuals_by_fred, and the screaming lady image by @gmat07. Thank you both, Gabriel and Freddie.


Azure SQL Connector for the Azure Key Vault - Error 2058

I spent today in a session with our external SQL Advisor; we have been working on provisioning a set of SQL Servers in Microsoft Azure. These servers will be using SQL Server TDE (Transparent Data Encryption), which encrypts the database at rest. I will not go into details of how this works, or what the setup is; however, I will explain a problem we had in the hope that someone else will read this article and not spend an entire day trying to work out the cause!

Key with name ‘SOME_KEY_NAME’ does not exist in the provider or access is denied. Provider error code: 2058. (Provider Error - No explanation is available, consult EKM Provider for details)

The above error message was presented to us when we tried to create the asymmetric key for the server. According to the official set of error codes, error 2058 does not exist! What really confused us is that we had three other servers connect without a problem; those servers were created last year. The fourth problem server was only created this month; can you see where I am going with this?

It turns out that there is a bug in the February 2018 release of the SQL Server Connector for Microsoft Azure Key Vault (version 15.0.300.96). We had used a previous release of the installer on the first three servers.

How to fix Error 2058

The February release introduced a requirement for a new registry key; unfortunately, nothing creates that key automatically (not the SQL Engine, the connector, or the DLLs). The workaround is to create the registry key yourself:

In the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft node, create a key named “SQL Server Cryptographic Provider”.

Once you have created the key grant full permissions on the key to the account which runs the SQL Engine Service. You should now be able to access the key vaults and create your keys.
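As a sketch, the two steps can be scripted from an elevated PowerShell prompt; the service account name below is a placeholder for whichever account runs your SQL Engine service:

```powershell
# Create the key the February 2018 connector expects
New-Item -Path "HKLM:\SOFTWARE\Microsoft" -Name "SQL Server Cryptographic Provider"

# Grant the SQL Engine service account full control of the new key
$keyPath = "HKLM:\SOFTWARE\Microsoft\SQL Server Cryptographic Provider"
$acl = Get-Acl $keyPath
$rule = New-Object System.Security.AccessControl.RegistryAccessRule(
    "DOMAIN\sqlsvc", "FullControl", "ContainerInherit", "None", "Allow")
$acl.SetAccessRule($rule)
Set-Acl $keyPath $acl
```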

Or of course, you could do what we did, and use the old version of the installer (patching is a problem for future me).

The header image for this post was provided by Thomas Kvistholt; thank you Thomas!


Why I hate Path.Combine

As most .NET developers will know there is a Path.Combine() method in System.IO which can be used to (you guessed it) combine two file paths. Unfortunately, it sucks; it sucks bad.

some examples of Path.Combine use

As you can see, it functions just as you would expect in the first three lines, but it sucks on the last three. Why would Microsoft not implement a path separator check, adding or removing the separator where applicable? A very good question in my opinion; so I wrote my own implementation.
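For anyone who cannot see the image, the behaviour looks something like this on Windows (the paths are illustrative); the documented gotcha is that if the second argument is rooted, the first argument is thrown away entirely:

```csharp
Path.Combine(@"C:\temp", "logs");    // "C:\temp\logs"  - as expected
Path.Combine(@"C:\temp\", "logs");   // "C:\temp\logs"  - trailing separator handled
Path.Combine(@"C:\temp", @"\logs");  // "\logs"         - base path silently discarded!
```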

using System;
using System.IO;
using System.Linq;

public static class Pathy {
	public static string Combine(string path1, params string[] paths) {
		return paths.Aggregate(path1, Combine);
	}

	private static string Combine(string path, string path2) {
		char splitter = Path.DirectorySeparatorChar;
		if (path == null) {
			throw new ArgumentException("Base path can not be null", nameof(path));
		}
		if (path2 == null) {
			throw new ArgumentException("Sub path can not be null", nameof(path2));
		}
		path = path.Trim().TrimEnd(splitter);
		path += splitter;
		path += path2.Trim().TrimStart(splitter);
		return path;
	}
}

Pathy.Combine() takes two or more paths in the same way that Path.Combine() does and correctly merges them based on the default Path.DirectorySeparatorChar used by the current environment.
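A quick usage sketch on Windows (again with illustrative paths):

```csharp
Pathy.Combine(@"C:\data\", @"\logs", "app.log");  // "C:\data\logs\app.log" - base path preserved
```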

Feel free to use and abuse this bit of code; it is provided with no warranty or guarantees. You can also find it on GitHub.

The header image used on this page was provided for free by Mike Enerio via thanks Mike!
