Web Security

Phishing attacks


Phishing attacks are an example of social engineering, a technique which utilizes psychological manipulation to lure people into performing unwanted actions or disclosing confidential information.

Most of the time, phishing attacks trick users into entering their private information on fake websites that impersonate legitimate ones, copying their look and feel. Usually, the attack starts with a link to the phishing website embedded in a phishing email, i.e. an email crafted to lure the potential victim into visiting that site.

A link to a phishing website can also appear in the results of a search engine, especially in the ads / promoted results section. Scammers have repeatedly used Google Ads (formerly AdWords) to promote their phishing sites, paying Google to display them above the organic search results. You can read about such a case in [5].

Phishing website properties:

Phishing websites usually try to impersonate legitimate pages that host some form of data input: for example webmail, social networks, and internet banking.

We need to be extra cautious when accessing those kinds of sites, and make sure we are visiting a legitimate page and not a phishing one. A phishing page often has telltale properties that give away its malicious intent.

  • Non-standard look and feel.
    • If we are familiar with a website, for example the login page of our internet banking or our webmail, and we notice something different, e.g. pop-ups, images, text or site functionality, that should alarm us.
  • Domain mismatch.
    • Usually, phishing sites are hosted on a domain name somewhat similar to that of the corresponding legitimate website. The fake domain may differ only slightly from the original, deceiving users who only glance at it, or may use even sneakier tricks like those described in [3] and [4].
    • Provided that we know the correct domain name of the legitimate website, we should check that the domain name in the browser’s address bar matches it exactly. For example, a lookalike address that differs by even one character is not Alpha Bank’s domain; only the exact official domain is.
    • To clarify, when checking for a valid domain in a URL we care about the host part that can be seen in the image below.
      URL breakdown
    • A URL consists of multiple components. If we are unsure about the actual host, we can use a tool like [1] to identify the real target domain; deceptive URLs often resolve to a different domain than the one we initially assume.
    • Source: [1]
    • The host part of a URL can also be an IP address instead of a domain name, which should almost always raise a red flag.
  • Non-secure connection.
    • If the page we are accessing is not served over https, we cannot trust that we are actually connected to the website we think we are. Even if the domain name in the address bar is correct, the website cannot be trusted if we access it over plain http.
    • Modern browsers provide a visual sign for secure (https) connections. Usually it is a (sometimes green) padlock in the address bar right next to the URL. However, Google Chrome will eventually drop that visual sign for secure connections and mark all plain http connections as not secure [2].
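The host-checking advice above can be sketched in Python: the standard library’s `urllib.parse` reveals the host a deceptive URL actually targets (the domains below are hypothetical examples):

```python
from urllib.parse import urlsplit

def real_host(url: str) -> str:
    """Return the host a URL actually points to, ignoring deceptive parts."""
    # urlsplit treats everything before '@' as userinfo, so a URL like
    # "https://bank.example@evil.example/login" really targets evil.example.
    return urlsplit(url).hostname or ""

print(real_host("https://bank.example@evil.example/login"))  # evil.example
print(real_host("https://evil.example/bank.example/login"))  # evil.example
print(real_host("http://203.0.113.7/secure"))                # an IP host: red flag
```

Note how the legitimate-looking name can appear in the userinfo or path components while the host, the only part that matters, is the attacker’s.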

It cannot be stressed enough that domain checking and a secure connection are equally important. We cannot trust a website whose domain we don’t recognize regardless of the existence of a secure connection. Similarly, we cannot trust a website with a “correct” domain name in the browser address bar if it does not provide a secure connection (https, padlock, etc).

Phishing email properties:

  • Sense of urgency,
    • the email may have an ‘angry’ or alarming tone to instill a sense of urgency in the recipients and get them to act in haste.
  • Recipients are offered something they did not ask for, an invoice, anything free, …
  • Popular trends/ local news originating from unknown senders

Phishing emails often use these techniques to get people to click links or download attachments.

What to look for:

  • Links point to suspicious sites,
    • to check where a link points to, hover over it.
  • The sender and/or other recipients are unknown,
  • The sender’s email looks suspicious,
  • Grammatical/ Spelling/ Syntax errors,
  • Attachment file type is unexpected,
    • jar, exe, …
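As a toy illustration (not a real mail filter), the checks above can be sketched as simple heuristics; the risky-extension list and the trusted-domain allow-list below are assumptions for the example:

```python
# Illustrative heuristics only: flag emails with risky attachment types
# or senders from domains we do not recognize.
RISKY_EXTENSIONS = {".jar", ".exe", ".scr", ".js", ".vbs"}
TRUSTED_DOMAINS = {"example.com"}  # hypothetical allow-list

def suspicious(sender: str, attachments: list[str]) -> list[str]:
    """Return a list of human-readable reasons the email looks suspicious."""
    reasons = []
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        reasons.append(f"unknown sender domain: {domain}")
    for name in attachments:
        ext = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
        if ext in RISKY_EXTENSIONS:
            reasons.append(f"risky attachment type: {name}")
    return reasons

print(suspicious("billing@unknown-host.test", ["invoice.exe"]))
print(suspicious("alice@example.com", ["report.pdf"]))  # []
```

Real filters are far more sophisticated, but the principle is the same: multiple weak signals combine into a strong one.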

Best practices:

  • Use bookmarks for frequently used important websites like email, internet banking, social networks, etc. Do not search for them on a search engine and click on the results. Do not click links from suspicious emails either.
  • Once you visit a website, always check its domain name in the address bar and be absolutely certain that it matches the one you know is correct.
  • Always check that you are securely connected to the website. That is, through https instead of plain http. Look for the padlock sign in browsers that support it.
  • Do not share information with a site that does not normally ask for it.



Content Security Policy


Content Security Policy (CSP) is a security layer that all modern browsers support, with minor differences in how it is handled [1]. CSP was designed to mitigate application attacks such as Cross-Site Scripting (XSS) [2] and clickjacking, attacks that have been popular for more than ten years.

A developer can employ CSP to restrict, in a whitelist / blacklist manner, the sources from which the application can load content. Content types are controlled through directives, for example:

  • frame-src: Valid sources for nested browsing contexts loaded via elements such as <frame> and <iframe>.
  • img-src: Valid sources for images and favicons.
  • script-src: Valid sources for Javascript.
  • style-src: Valid sources for stylesheets.

For a complete list of all available directives, refer to [3].

If the source of an element (e.g. a script or image) is not trusted, the policy will reject it on the browser’s side. Alongside the whitelisted domains, a developer can also define the protocols (or schemes, as they are called) over which content may be pulled from those domains.

Using CSP

Browsers that implement CSP expect to find an HTTP header in the response, called Content-Security-Policy. Policies provided with this header are semicolon-separated, as we’ll see later on. Most reverse proxies can add arbitrary headers to responses, which is how CSP is typically deployed on such web servers.

For example, for NGINX, one should add the following directive to the server configuration:

 add_header Content-Security-Policy "script-src 'self'";
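Since the header value is just a semicolon-separated list of directives, it can also be composed programmatically; a minimal Python sketch (the CDN address is a hypothetical placeholder):

```python
def build_csp(directives: dict[str, list[str]]) -> str:
    """Join CSP directives into a single semicolon-separated header value."""
    return "; ".join(
        f"{name} {' '.join(sources)}" for name, sources in directives.items()
    )

policy = build_csp({
    "default-src": ["'self'"],
    "script-src": ["'self'", "https://cdn.example.com"],  # placeholder CDN
})
print(policy)  # default-src 'self'; script-src 'self' https://cdn.example.com
```

Building the value in one place makes it easier to keep the policy in sync as dependencies are added or removed.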

Some Examples

Example 1

Content-Security-Policy: default-src 'self'

The above indicates to the browser that content coming from the current domain is allowed, for all types of content. NOTE: this excludes subdomains.

Example 2

Content-Security-Policy: default-src 'self' *.example.com

Expanding on the previous example, here we also allow all subdomains of example.com (a placeholder domain) via the wildcard.

Example 3

Content-Security-Policy: script-src 'self' https://cdn.example.com

A more real-world example: javascript is allowed from the current domain and from a known CDN (here a placeholder address) that serves your angular / react bundles.

Example 4

script-src 'report-sample' 'unsafe-eval' 'nonce-gTWEiUr6jJYUSdaZ17xO8Q' 'unsafe-inline' 'strict-dynamic';
object-src 'none';
base-uri 'self';

The above is a real-world CSP from a large production deployment. The policy is quite loose for older browsers that only implement CSP2, and stricter for modern browsers that support CSP3.
For more information, see [3].

Before Production

CSP is meant to break things if not implemented correctly; for example, by default, inline javascript in traditional <script> tags is not allowed. To allow it, one has to provide:

script-src 'unsafe-inline'

Additionally, inline event handlers and javascript: URLs are also banned by default. In another scenario, suppose a developer adds a javascript dependency from a CDN that is not whitelisted, but forgets to update the CSP. The script will load in a local / development environment where CSP might be disabled, but fail to load on the production deployment.

Since CSP is hard to get right and we do not want to deploy untested policies to production, CSP provides another header for exactly this kind of testing. The header is called Content-Security-Policy-Report-Only; it loads all content without blocking anything, and instead reports CSP violations. This is harmless in any environment and allows us to debug on production.

Using the above header you can either see the errors in your browser’s console, or enable reporting.


On both Content-Security-Policy and Content-Security-Policy-Report-Only, the report-uri directive (superseded by report-to in CSP3) tells the browser what to do on a CSP violation. The directive specifies a URL to which the browser will send a POST request containing all information about the violation that occurred [4].
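For illustration, the JSON body of such a POST follows the CSP Level 2 csp-report shape; a minimal sketch of an endpoint-side parser (all field values below are made up):

```python
import json

# A sample violation report body, shaped as a CSP Level 2 browser would
# POST it to the reporting endpoint (values are illustrative).
report_body = json.dumps({
    "csp-report": {
        "document-uri": "https://example.com/page",
        "violated-directive": "script-src 'self'",
        "blocked-uri": "https://evil.example/x.js",
    }
})

def summarize(body: str) -> str:
    """Extract the essentials of a CSP violation report for logging."""
    r = json.loads(body)["csp-report"]
    return f"{r['violated-directive']} blocked {r['blocked-uri']} on {r['document-uri']}"

print(summarize(report_body))
```

Collecting and summarizing these reports is what turns Report-Only mode into an actionable feedback loop before enforcement is switched on.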

Caveats and hurdles

Nowadays development is rapid, and many teams deploy several times a day. Leaving aside whether this is the right way of doing things, developers sometimes need to allow inline scripts. If that is the case, we recommend going the nonce- or hash-based route rather than 'unsafe-inline'. All that is needed is to tag the inline script with an unguessable nonce, or to whitelist its exact content by hash, as shown below:

Content-Security-Policy: script-src 'nonce-gTWEiUr6jJYUSdaZ17xO8Q'
Content-Security-Policy: script-src 'sha256-B2yPHKaXnvFWtRChIbabYmUBFZdVfKKXHbWtWidDVF8='

This way, even if an adversary injects a script tag, it will carry neither a valid nonce nor a whitelisted hash, so the browser will refuse to execute it. Note that the nonce must be generated per request; otherwise it would be trivial for an adversary to make a request, learn the nonce, and reuse it later to attack the application.
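Generating these values server-side is straightforward with Python’s standard library; the sketch below shows both routes (the inline script content is an illustrative assumption):

```python
import base64
import hashlib
import secrets

# Nonce route: generate a fresh, unguessable value on every request and
# echo it both in the header and on the <script nonce="..."> tag.
nonce = secrets.token_urlsafe(16)
nonce_header = f"Content-Security-Policy: script-src 'nonce-{nonce}'"

# Hash route: whitelist one exact inline script by its base64 SHA-256 digest.
inline_script = "doSomething();"  # hypothetical inline script body
digest = base64.b64encode(hashlib.sha256(inline_script.encode()).digest()).decode()
hash_header = f"Content-Security-Policy: script-src 'sha256-{digest}'"

print(nonce_header)
print(hash_header)
```

The hash route suits static inline scripts (any change to the script invalidates the digest), while the nonce route suits templated pages that can stamp a new value into each response.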

As mentioned above, deploying CSP is hard [6]; it needs maintenance and can be a headache for both developers and operators. Hence a question arises: who is responsible for maintaining the CSP? Developers? Operators? Security engineers? There is no clear answer, as it concerns all three. Developers have to decide what policy the application needs, since they add and remove dependencies. Security engineers have to give developers the right information to make an educated decision, and also review the CSP. Finally, operators can assist by monitoring CSP violation reports in production.

Relatively recent research has shown that CSP is nearly impossible to get right [5], demonstrating that 94.72% of real-world CSP deployments could be bypassed. CSP3, currently a working draft, contains new mechanisms based on cryptographic nonces and hashes rather than whitelisted domains, and most browsers already implement some of the new directives. One of them is 'strict-dynamic', which tells the browser to trust any script loaded by an already-trusted (nonced or hashed) script, while ignoring host whitelists.

One thing is certain: CSP is another layer of security, and should be treated only as that. We should not rely on CSP alone; we should also deploy measures to detect and fix the underlying vulnerabilities, even though CSP can in some cases limit exploitation. So audit your codebase, fix issues, and use CSP with caution.