Mitigation techniques for additional infrastructure

15 06 2015

The concept of securing ‘data-in-transit’ will be familiar to anyone working in network security today. All sensitive data travelling across the ‘wire’ should be appropriately protected, and this has traditionally been done according to the protocol in use, such as SSL/TLS for web communication, or through whole-network encapsulation such as PPTP and IPSec. The issue with this encryption is that it incurs a considerable performance overhead compared to the original method of transmission. VMware has elected to leave the communication unencrypted and instead recommends the use of secure networks, which in the author’s opinion is so that all information is transferred in the shortest possible time, resulting in less disruption during migrations. This is another example of the balance of the confidentiality, integrity and availability triangle discussed throughout this thesis.
VMware has evidently considered the option to encrypt vMotion traffic in the past, as it was an option found under the advanced settings of vCenter Server Settings, as can be seen in Figure 1. Although the setting appears to allow encryption to be required for all vMotion traffic, there are examples of vCenter not honouring the setting and continuing to send the information in clear text (Van Dirmarsch, 2009).

Figure 1: Screen capture from (Van Dirmarsch, 2009) of a setting in vCenter to enable encryption on vMotion traffic

In version 5.0 of vSphere the option is no longer present, as can be seen in Figure 2.

Figure 2: Screen capture from a vCenter 5.0 server showing that the ‘VirtualCenter.VmotionEncryption’ option no longer appears

VMware leaves no realistic option for securing this information other than isolating the vMotion network, restricting access to it and ensuring that promiscuous mode is not enabled when communicating through a vSwitch (Wu, 2008). While this might be suitable for networks containing less sensitive data, in the author’s view it offers no option for a defence-in-depth strategy, other than disabling all automated vMotion features (DRS/DPM) and disabling vMotion at the switch (VMware, 2009).
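As one way of verifying the vSwitch hardening described above, the check for promiscuous mode can be automated. The sketch below is a minimal example using the pyVmomi Python bindings; the host name, credentials and the decision to only warn (rather than remediate) are assumptions for illustration, not part of VMware’s guidance.

```python
# Minimal sketch, assuming pyVmomi is installed and that the vCenter host, user
# and password below are placeholders for your own environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host='vcenter.example.local', user='audit', pwd='secret', sslContext=context)

try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for vswitch in host.config.network.vswitch:
            policy = vswitch.spec.policy.security
            if policy and policy.allowPromiscuous:
                # Flag any standard vSwitch (e.g. the vMotion switch) left in promiscuous mode.
                print('WARNING: %s / %s allows promiscuous mode' % (host.name, vswitch.name))
finally:
    Disconnect(si)
```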

Due to the risks that surround unencrypted vMotion traffic and VMware’s reluctance to offer any further protection, there are numerous best practice guides available online demonstrating how to configure networks for multiple hardware scenarios. One of the more comprehensive guides is by Kelly (2012), who demonstrates a number of examples that comply with VMware’s isolation requirements. One of Kelly’s (2012) diagrams is shown in Figure 3 and shows the best practice vSwitch configuration for a six-NIC host with isolated iSCSI storage requirements.

Figure 3: Best practice vSwitch configuration for a six-NIC host with isolated iSCSI storage requirements (Kelly, 2012)

The example given by Kelly (2012) does provide the isolation that is suggested by VMware. However, as can be seen in Figure 3, the management network and the vMotion network are only separate logical networks, separated by nothing more than a VLAN ID. It is not uncommon for VLANs to be used in this way in virtual environments, as using multiple physical networks to isolate these services would become both increasingly hard to manage and impractical. It is also impractical to map physical connections in and out of blade environments without dramatically affecting the redundancy of the chassis, due to slot limitations. VLANs, while commonly used by numerous organisations to segment traffic, still carry the information over the same physical ‘cable’ and do not offer the same level of security as physically separated connections. There are tools available that exploit flaws in VLAN implementations and allow VLAN ‘hopping’ through frame tagging attacks, as sketched below (Compton, 2012). Although these nested attacks may seem unlikely, they are still a valid threat that should at least be considered.
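To illustrate the frame tagging (‘double tagging’) technique referred to above, the fragment below builds a frame with two stacked 802.1Q headers using the Scapy library. The interface name, VLAN IDs and target address are hypothetical, not taken from the sources cited here; on a correctly configured switch (native VLAN not shared with access ports) the inner tag should simply be dropped.

```python
# Minimal sketch, assuming Scapy is installed and the script is run with raw-socket privileges.
# VLAN IDs, interface and destination IP are illustrative placeholders.
from scapy.all import Ether, Dot1Q, IP, ICMP, sendp

frame = (Ether(dst='ff:ff:ff:ff:ff:ff') /
         Dot1Q(vlan=1) /      # outer tag: the attacker's (native) VLAN
         Dot1Q(vlan=20) /     # inner tag: the target VLAN, e.g. a vMotion VLAN
         IP(dst='10.0.20.10') /
         ICMP())

# A vulnerable switch strips the outer tag and forwards the frame onto VLAN 20.
sendp(frame, iface='eth0', verbose=False)
```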

Van Dirmarsch, K., 2009. The Quest of Encrypted VMotion. [Online] Available at: http://virtualkenneth.com/2009/08/11/v/#more-7

Kelly, P., 2012. VMware vSphere 5 Host Network Designs. [Online] Available at: http://vrif.blogspot.co.uk/2011/10/vmware-vsphere-5-host-network-designs.html

Wu, W., 2008. VMware Security & Compliance Blog. [Online] Available at: http://blogs.vmware.com/security/2008/02/keeping-your-vm.html





Mitigation techniques for shared hardware

4 06 2014

The mitigation techniques used to reduce the likelihood of shared hardware attacks are similar to the hypervisor techniques, in terms of determining which machines have access to which host. To ensure isolation of the hosts, the same measures can be used as described in an earlier post regarding hypervisors, such as using DRS groups and, in larger cloud environments, specific hardware-conscious options such as the ‘Dedicated VDC’ in VMware’s vCloud Datacenter. While host isolation can be achieved using these methods (a sketch of a DRS host affinity rule is given below), they only address physical component allocation at the host level, such as RAM, CPU, mezzanine cards/NICs, etc. What they do not address, however, is the issue of shared storage and blade infrastructures.
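As an illustration of the DRS grouping approach mentioned above, the sketch below uses the pyVmomi Python bindings to pin a group of sensitive VMs to a dedicated set of hosts with a mandatory VM–host affinity rule. The cluster, group and rule names, and the assumption that `cluster`, `secure_vms` and `secure_hosts` have already been looked up, are illustrative only.

```python
# Minimal sketch, assuming pyVmomi is installed and that `cluster` (a
# vim.ClusterComputeResource), `secure_vms` (a list of vim.VirtualMachine) and
# `secure_hosts` (a list of vim.HostSystem) have already been retrieved.
from pyVmomi import vim

vm_group = vim.cluster.GroupSpec(
    info=vim.cluster.VmGroup(name='high-security-vms', vm=secure_vms),
    operation='add')
host_group = vim.cluster.GroupSpec(
    info=vim.cluster.HostGroup(name='high-security-hosts', host=secure_hosts),
    operation='add')

# Mandatory ("must run on") rule keeping the sensitive VMs on their own hosts.
rule = vim.cluster.RuleSpec(
    info=vim.cluster.VmHostRuleInfo(
        name='isolate-high-security-vms',
        enabled=True,
        mandatory=True,
        vmGroupName='high-security-vms',
        affineHostGroupName='high-security-hosts'),
    operation='add')

spec = vim.cluster.ConfigSpecEx(groupSpec=[vm_group, host_group], rulesSpec=[rule])
task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
```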

Where ‘in-house’ storage is concerned, separate groups of arrays could be used to reduce the impact an attack could have, by limiting which systems it could affect. In the same way that DRS groups were created in the hypervisor demonstrations, machines could be grouped by security rating so that less secure machines are not placed on the same array as higher-value machines. This mitigates the risk of a less secure system threatening the performance of a group of high-value machines due to vulnerabilities found in the VM. With this measure, however, there is also the consideration that in doing so you are creating one high-value target area that, if attacked, would affect all the core services. The increase in security that segmentation of disk arrays creates unfortunately has an adverse effect on resource efficiency, as the smaller the array, the higher the disk overhead (Shangle, 2012), (International Computer Concepts, 2012).

There are options within some VMMMs (certainly within vCenter) to evenly distribute and limit the number of IOPS a single machine is able to request. This can be set at the machine level using the resource allocation section, on a per-machine basis. Within the vCenter suite, ‘share values’ can also be assigned to individual machines to automatically limit disk allocation should disk latency reach a certain threshold. The latter option is not included in the core functionality of the vSphere suite and therefore comes at an additional licence cost. In the figure below you will see that the limit has been set to 1000 IOPS for the ‘Public-Web’ virtual machine. While this option does remove the ability of one machine to overwhelm the entire storage, it can also unnecessarily restrict genuine requests from VMs, should they experience a higher than normal workload.

Using the free IOPS limiting ability in vCenter
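The same per-VM limit shown in the screenshot can also be applied programmatically. The sketch below is a minimal example using the pyVmomi Python bindings; the 1000 IOPS figure matches the ‘Public-Web’ example above, while the function name and the assumption that `vm` has already been looked up are purely illustrative.

```python
# Minimal sketch, assuming pyVmomi is installed and `vm` is an already-retrieved
# vim.VirtualMachine object (e.g. the 'Public-Web' VM from the example above).
from pyVmomi import vim

def limit_disk_iops(vm, limit_iops=1000):
    """Cap every virtual disk on the VM at `limit_iops` I/O operations per second."""
    changes = []
    for device in vm.config.hardware.device:
        if isinstance(device, vim.vm.device.VirtualDisk):
            device.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
                limit=limit_iops)
            changes.append(vim.vm.device.VirtualDeviceSpec(operation='edit', device=device))
    # Returns a task; the caller can wait on it to confirm the reconfiguration.
    return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=changes))
```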

 

VMware’s additionally licensed ‘Storage I/O Control’ allows one to associate a share value with machines rather than a fixed resource threshold. A latency figure can be set on a LUN and, should that threshold be reached, vCenter will ensure that the machine with the highest share value gets the disk allocation it requires. To protect internally hosted environments, core servers would be given the highest share values while less important, more vulnerable machines would be allocated lower figures, ensuring that key functions of the business continue to operate should this type of attack take place.
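For completeness, a share value (rather than a hard limit) can be attached to a disk in much the same way as the IOPS cap above. The share level and figure below are hypothetical, and Storage I/O Control must be enabled on the backing datastore for the shares to be enforced under contention.

```python
# Minimal sketch, assuming pyVmomi is installed, `vm` is a vim.VirtualMachine and
# Storage I/O Control is enabled on the backing datastore. Share figures are examples.
from pyVmomi import vim

def set_disk_shares(vm, share_value=2000):
    """Give every virtual disk on the VM a custom share value used under contention."""
    changes = []
    for device in vm.config.hardware.device:
        if isinstance(device, vim.vm.device.VirtualDisk):
            device.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
                shares=vim.SharesInfo(level='custom', shares=share_value))
            changes.append(vim.vm.device.VirtualDeviceSpec(operation='edit', device=device))
    return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=changes))
```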

In circumstances where these options are not available in the VMMM, as is the case with the standard Hyper-V manager, which “does not have any built in mechanism to dynamically or even statically control storage I/O” (Berg, 2011), alternative solutions will be required.

Avoiding sharing is undoubtedly the simplest option when securing high-risk, mission-critical systems. The challenge becomes more complicated when considering public cloud environments. Avoiding storage contention caused by noisy or malicious neighbours on a public cloud service is an issue that should be seriously considered before any cloud adoption takes place. Many large companies now use public cloud infrastructure to host their services. Amazon has had impressive adoption rates, with online services including Netflix (Business Wire, 2010), Reddit (Berg, 2010), MySpace (High Scalability, 2010) and many others opting to migrate their entire business onto Amazon’s EC2/S3 infrastructure.

One example of how resource contention can be avoided within a public cloud comes from Netflix (Cockcroft, 2011), who extensively researched the inner workings of Amazon’s EBS (Elastic Block Store) so that they could best utilise the service and not be affected by neighbours’ disk requirements. Cockcroft (2011) found that Amazon’s EBS volumes are allocated in sizes between 1 GB and 1 TB, and it was deemed that allocating volumes in 1 TB blocks to the Netflix servers, regardless of their actual storage requirements, avoids the likelihood of co-tenancy and, in turn, storage contention. Amazon makes this kind of sizing decision more feasible because the EC2 service has a large amount of information publicly available, especially when compared to other providers. Having access to this level of information should be a key consideration when planning any cloud migration.
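As a rough illustration of the Netflix approach described above, the snippet below requests full-size 1 TiB EBS volumes regardless of the space actually needed, using the boto3 AWS SDK for Python (a modern equivalent of the tooling available at the time of writing). The region, availability zone and volume count are placeholder values.

```python
# Minimal sketch, assuming boto3 is installed and AWS credentials are configured.
# Region, availability zone and volume count are illustrative placeholders.
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

def allocate_full_size_volumes(count, availability_zone='us-east-1a'):
    """Request 1 TiB volumes even if less space is needed, to reduce the chance of
    sharing backing spindles with a noisy co-tenant (per the Cockcroft approach)."""
    volume_ids = []
    for _ in range(count):
        volume = ec2.create_volume(Size=1024,  # GiB; the maximum EBS volume size discussed above
                                   AvailabilityZone=availability_zone,
                                   VolumeType='standard')
        volume_ids.append(volume['VolumeId'])
    return volume_ids
```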

The threat of an exploit at the blade hardware layer is extremely difficult to mitigate against, and one that cannot be addressed without taking unrealistic precautions that undermine the reasoning and benefits of a blade system altogether. While there may be scope within the larger blade systems to use separate physical interconnect modules, ensuring that secure and insecure machines use different routes in and out of the enclosure, there is still the backplane of the chassis, which is completely shared among all hosts. As mentioned previously, hardware attacks at this layer of the system would most likely be DoS attacks rather than disclosure, and have yet to be demonstrated.

Shangle, R., 2012. Level 0,1,2,3,4,5,0/1. [Online] Available at: http://it.toolbox.com/wiki/index.php/Level_0,1,2,3,4,5,0/1

Business Wire, 2010. Netflix Selects Amazon Web Services To Power Mission-Critical Technology Infrastructure. [Online] Available at: http://www.thestreet.com/story/10749647/netflix-selects-amazon-web-services-to-power-mission-critical-technology-infrastructure.html

Cockcroft, A., 2011. Understanding and using Amazon EBS – Elastic Block Store. [Online] Available at: http://perfcap.blogspot.co.uk/2011/03/understanding-and-using-amazon-ebs.html

Berg, M. v. d., 2011. Storage I/O control for Hyper-V. [Online] Available at: http://up2v.nl/2011/06/20/storage-io-control-for-hyper-v/





Question on Security Now

3 05 2013

I totally forgot to post about this, but here is a question of mine that was read out on the Security Now podcast in July last year:

 

[Jump to 1:19:35 as WordPress is ignoring the #t extension for YouTube]





Does cyberterrorism pose a threat to societies: Introduction

24 08 2011

Since the events that took place at the World Trade Centre on 11th September 2001, the threat of terrorism has been elevated to a national priority, not only in America but all around the world (Holt & Andrews, 2007, p. 1). The media coverage that surrounded the event raised public awareness of the physical terrorist threats we may be susceptible to, while also highlighting the possibility of a new era of ‘cyber’ attacks.

Prior to exploring this topic, I would like to define the terminology upon which the next few posts are centered.

The term ‘cyber’, according to the Oxford English Dictionary, is defined as ‘relating to or characteristic of the culture of computers, information technology, and virtual reality’ and is abbreviated from the word ‘cybernetics’.

The Oxford English Dictionary also defines the term ‘terrorism’ as ‘the unofficial or unauthorized use of violence and intimidation in the pursuit of political aims’.

While these definitions represent common understandings of these terms, there appear to be numerous differing interpretations of the term ‘cyberterrorism’. Due to this and the nature of cyberspace, it can often be difficult to determine whether an illegal act taking place is actually cyberterrorism or cybercrime (Garabosky & Smith, 1998).
The definition given by Dorothy E. Denning in 2000 is often considered to accurately encapsulate most interpretations of the meaning:

‘Cyberterrorism is the convergence of terrorism and cyberspace. It is generally understood to mean unlawful attacks and threats of attack against computers, networks, and the information stored therein when done to intimidate or coerce a government or its people in furtherance of political or social objectives. Further, to qualify as cyberterrorism, an attack should result in violence against persons or property, or at least cause enough harm to generate fear. Attacks that lead to death or bodily injury, explosions, plane crashes, water contamination, or severe economic loss would be examples…’ (Denning, 2000, p. 1)

When comparing this terminology with the traditional definition of terrorism, it becomes apparent that cyberterrorism is fundamentally a new way to achieve the same outcome. Terrorism of any kind can result in panic and loss of life, whether it is carried out through physical or virtual means, and while the methods of attack are very different, the outcomes are typically the same.

Refs:

Holt, M., & Andrews, A. (2007). Nuclear Power Plants: Vulnerability to Terrorist Attack. Retrieved 03 31, 2011, from http://www.library.uow.edu.au/content/groups/public/@web/@health/documents/doc/uow025425.pdf

Garabosky, P. N., & Smith, R. G. (1998). Crime in the digital age: controlling telecommunications and cyberspace. New Jersey: Transaction Publishers.

Denning, D. E. (2000). Statement of Dorothy E. Denning. Retrieved March 1, 2011, from The Information Warfare Site: http://www.iwar.org.uk/cyberterror/resources/house/00-05-23denning.htm





Third party software patching, the pink elephant in the room. Part 2

19 05 2011

Now over to the solution side of this argument.

An obvious and cost-effective option on the Microsoft Windows platform is our old friend ‘Group Policy’. The software installation section of the Group Policy Management tool in all Windows Server suites has been largely unchanged since it was first introduced. It offers a no-frills method of distributing software to machines based on the location of a user or computer within their Active Directory containers. A well planned, designed and maintained Active Directory infrastructure gives administrators the ability to distribute software and patches across multiple sites centrally. Sounds like a no-brainer solution, ay? Well, unfortunately not.

The software installation tool will only distribute software that is packaged with the .MSI extension, and while the availability of .MSI-packaged software is slowly increasing, it is far from complete. There are a number of sites that offer .MSI versions of popular software:

Adobe Reader

Firefox

Compared to the amount of additional software that is undoubtedly installed on your computer, this will only account for a small proportion. There are tools that enable administrators to create custom .MSI packages for distribution; however, the majority work using differencing engines that record changes in the registry before and after installation. This can cause installation errors when the package is distributed to machines with dissimilar configurations, as software often shares common libraries and may also be operating at different patch levels.

There is still the issue of how to patch bespoke, in-house developed software, which is almost certainly not going to be available for download or easily packaged, possibly due to size or dependencies. The solution to this could lie in the depressingly named ‘cloud computing’. In an essay I wrote late last year I discussed this issue and how cloud computing could be the answer; here is an extract from that paper:

“Companies can also be reluctant to update third-party business-critical software, as it can lead to instability for users due to conflicts and dependencies on other parts of the system. A stability update to a system driver or a security patch to the operating system could leave a company unable to function until the issue has been recognised and addressed.

SaaS (Software as a Service) could be one way to alleviate this patching issue, by moving applications off users’ machines and into the cloud. Users are mostly unaware that the software is not stored locally, as they interact with the application through a web browser. The application then has only one dependency on the user’s machine: the web browser. Application patches can then be tested at length by administrators in a development environment before being applied centrally.

SaaS could be used in all environments to improve the security and stability of software, whether in SMBs or large enterprises, by leveraging the different placement of public and private clouds. An example of this would be a large multi-site enterprise using a privately hosted communal cloud to house their SaaS, as is demonstrated in Figure 1.”





Third party software patching, the pink elephant in the room. Part 1

18 05 2011

I have always had issues with the deployment and maintainability of third-party software within today’s computer networks. In addition to a vanilla installation of Windows 7, a number of third-party applications are needed to ensure not only security, but also a fully featured web experience. While these products are normally equipped with silent installation switches to allow for unattended installation during setup, they often do not allow for the upgrade of already installed software. This results in old versions of the software lying unpatched across large networks, leaving users and networks vulnerable to known attacks.

As reported in the latest annual report by Panda Labs:

“Adobe has also suffered a lot throughout the year, particularly with Adobe Acrobat and Adobe Reader. Attackers have concentrated their fire on these two products, as a simple way of infecting users.”

Microsoft and Apple have now taken responsibility for patching their host operating systems, offering ‘free’ products like WSUS and Software Update Service as part of their server suites. These products manage OS patching by downloading, installing and reporting, all from a central location. However, they do not update any additional software developed outside of their respective companies.

While the single-application suites are becoming better at patching themselves, this functionality tends to be designed for home users who have local administration rights on the machine and open access to the internet. An example of this is Adobe Reader: Adobe now installs an update utility onto the machine as part of its Reader installation. This utility checks in with Adobe every (X hours) to ensure that the software is up to date. If it is not, the utility will download the latest version and prompt the user to install it. While this may seem like a reasonable solution to some, administrators of networks know that most domain users are forced through a web proxy at browser level, meaning that direct access to the internet is (and should be) forbidden. The user is also not (and should not be) able to make software changes to the machine.

There are a handful of products on the market today which will happily alleviate this administrative headache by updating the 95% of commonly unpatched applications, for a large fee. While this fee might be an acceptable expense for large companies, for most SMBs, schools and colleges the cost could represent a large percentage of their yearly budget. Even with a comprehensive software patching service in place, there is still the (albeit less vulnerable) issue of bespoke and uncommon software, which is not covered by the 95% of applications patched by these products. These can range from in-house developed web solutions to obscure paint and design packages. While the argument can be made that exploits for these products are exponentially harder to come by, the potential payoff of an attack could justify attackers spending weeks or months reverse engineering the software. There is also the realisation that most software is not written from scratch and often uses common programming engines and object libraries, which could themselves be vulnerable.

In Part 2 I will cover more realistic solutions and future options.





Bank Security vs Web 2.0

11 05 2011

It was a long time coming, but a few months ago Facebook ‘finally’ got their act together and gave users the option of fully securing their sessions using HTTPS (Hypertext Transfer Protocol Secure), which runs HTTP over SSL/TLS. This may or may not have been pushed forward by the release of Firesheep late last year.

This means that Facebook users who are stupid enough to connect to unencrypted WiFi networks can breathe a sigh of relief while publishing witty status updates in their coffee house of choice. Facebook also appears to be in the process of offering two-factor authentication in the near future, to further protect the integrity of people’s FarmVille data.

While this increased care in authenticating users’ logon sessions pleases me somewhat, I feel that this focus on security is not being taken as seriously by other websites.

I’m not stupid enough to give away too much key information on the web, but my bank offers me no method of logon authentication other than a username and two passwords. If this wasn’t enough of a crime in 2011, the passwords they do let you create cannot be more than 15 characters long! It seems strange that I live in a world where my social persona is better secured than my financial banking.

This limit on password length also implies to me that my passwords and information are being stored in the bank’s database in clear text, since if the password were being hashed, its length would not matter: a hash is a fixed size however long the input. I would hope that all passwords relating to internet banking would be hashed and appropriately salted, especially by larger banks.
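To illustrate why a length cap suggests something other than hashing, the short sketch below uses Python’s standard-library PBKDF2 routine: whatever the length of the password, the stored salted hash is the same fixed size. The iteration count and salt size are illustrative choices, not a recommendation from any particular bank.

```python
# Minimal sketch using only the Python standard library; parameters are illustrative.
import hashlib
import os

def hash_password(password: str) -> bytes:
    """Return salt + PBKDF2-SHA256 digest; output length is fixed regardless of input."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'), salt, 100000)
    return salt + digest

short_hash = hash_password('pw')
long_hash = hash_password('a' * 200)  # far longer than any 15-character cap
print(len(short_hash), len(long_hash))  # both 48 bytes: 16-byte salt + 32-byte digest
```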

With the ongoing saga that is the PlayStation breach, I ask myself: what will it take for these companies to start protecting users’ personal information with the appropriate care that is needed?