My new e-book “Deep VMware™ Guest Tools and Guest-Hypervisor communication” at Amazon

Just published my new e-book “Deep VMware™ Guest Tools and Guest-Hypervisor communication” at Amazon.

Check it out.

Most virtualization platforms provide some mechanism of communication between the hypervisor and its guest virtual machines. “Open VM Tools” is a set of tools that implements such communication mechanisms for VMware™ virtual machines and hypervisors. In this book we analyze each of these tools and APIs, from high-level usage down to the low-level communication details between guest and host. This information can be used for a better understanding of what actually happens when using a guest machine with these tools. It can also serve as inspiration for using and extending guest-hypervisor communication, and for penetration testing.
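
As a small taste of what this channel looks like from inside a guest, here is a minimal Ruby sketch that exchanges data over the guest-host channel using the vmware-rpctool binary shipped with the VMware Tools / Open VM Tools (it assumes the tools are installed in the guest; the key name “guestinfo.demo” is an arbitrary example):

# Minimal sketch: talking to the hypervisor via vmware-rpctool, which
# wraps the low-level guest-host RPC channel analyzed in the book.
def rpc(command)
  out = `vmware-rpctool "#{command}" 2>&1`.strip
  raise "RPC failed: #{out}" unless $?.success?
  out
end

# Publish a value on the channel under an arbitrary guestinfo key...
rpc('info-set guestinfo.demo hello-from-the-guest')

# ...and read it back through the same channel.
puts rpc('info-get guestinfo.demo')   # => "hello-from-the-guest"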


Traveling Ruby (multi-platform portable Ruby binaries)

Traveling Ruby is a set of portable, multi-platform Ruby binaries that can be used to distribute Ruby-based products and run them even on machines where Ruby is not installed. It is very useful, as you can also use it to package multi-platform applications.

From the project’s home page:

Traveling Ruby is a project which supplies self-contained, “portable” Ruby binaries: Ruby binaries that can run on any Linux distribution and any OS X machine. It also has Windows support (with some caveats). This allows Ruby app developers to bundle these binaries with their Ruby app, so that they can distribute a single package to end users, without needing end users to first install Ruby or gems.

It can run on:

  • Linux x86
  • Linux x86_64
  • OS X
  • Windows
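
To give an idea of how the bundling works in practice, here is a hedged sketch, loosely following the project’s own packaging tutorial (the hello.rb file and the pkg directory names are placeholders, and it assumes a Traveling Ruby tarball for the target platform has already been extracted into pkg/lib/ruby):

# Sketch: assembling a self-contained package around a bundled
# Traveling Ruby runtime living in pkg/lib/ruby.
require 'fileutils'

FileUtils.mkdir_p('pkg/lib/app')
FileUtils.cp('hello.rb', 'pkg/lib/app/hello.rb')  # your application code

# Wrapper script: end users run this single entry point; it invokes the
# bundled interpreter, so no system-wide Ruby installation is needed.
File.write('pkg/hello', <<~'WRAPPER')
  #!/bin/bash
  set -e
  SELFDIR="$(cd "$(dirname "$0")" && pwd)"
  exec "$SELFDIR/lib/ruby/bin/ruby" "$SELFDIR/lib/app/hello.rb"
WRAPPER
File.chmod(0755, 'pkg/hello')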


Video:

https://vimeo.com/phusionnl/review/113827942/ceca7e70da


Nested virtualization using VMware hypervisors

Nested virtualization is the act of running a hypervisor inside another hypervisor.

For example, it is possible to run a nested VMware ESXi 6.0 hypervisor on top of a VMware Player 7 hypervisor.


We may need to make some changes to the nested hypervisor’s virtual machine configuration file, as described in “VMware: Running Nested VMs”.
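
As a rough illustration, the relevant .vmx additions look like the following (treat these as assumptions to verify against VMware’s documentation, since the exact keys and values vary by product and version):

# Expose hardware virtualization (VT-x/EPT) to the VM so that it can run
# its own hypervisor, and declare an ESXi guest OS type:
vhv.enable = "TRUE"
guestOS = "vmkernel6"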

It would be interesting for a guest machine to be able to detect that it is running on top of a nested hypervisor.

I haven’t found a direct (virtual hardware-based) method yet, but when networking is available, correlating MAC addresses with detected ESXi services can do the trick.

For example, consider the following NMAP scan:

# nmap -vv -sV --version-all 192.168.189.134 -p 443
Starting Nmap 6.47 ( http://nmap.org ) at 2016-01-22 10:45 EST
Scanning 192.168.189.134 [1 port]
(...)
PORT STATE SERVICE VERSION
443/tcp open ssl/http VMware ESXi Server httpd
MAC Address: 00:0C:29:BD:16:1F (VMware)

NMAP detects an ESXi Server at IP 192.168.189.134, and its MAC address (00:0C:29:BD:16:1F) falls inside the VMware virtual MAC address range. This indicates that the machine may well be a nested ESXi.
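
This correlation is easy to automate. Here is a minimal Ruby sketch (the OUI list below covers the VMware prefixes I am aware of, so treat it as an assumption rather than an exhaustive registry lookup):

# Heuristic: an ESXi service answering from a MAC address inside a
# VMware OUI suggests the "physical" ESXi host is itself a VM.
VMWARE_OUIS = %w[00:05:69 00:0c:29 00:1c:14 00:50:56]

def vmware_mac?(mac)
  VMWARE_OUIS.include?(mac.downcase[0, 8])
end

# MAC address reported by the NMAP scan above:
puts vmware_mac?('00:0C:29:BD:16:1F')   # => true: likely a nested ESXi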

More details in the full paper; see “VMware hypervisor fingerprinting Tool (& Paper)” below.


VMware hypervisor fingerprinting Tool (& Paper)

Just published a new tool, vmhost_report.rb (and a paper about it), for VMware hypervisor fingerprinting. The tool is released under an open-source license (GPL), so you can use it freely.

In the paper, I show how to determine hypervisor properties (such as the hypervisor version or virtual CPU limits) by running commands in the guest operating system, without any special privileges on the host machine running the hypervisor.

This can be useful for penetration testing, for information gathering, and for determining the best configuration for virtualization-sensitive and virtualization-aware software.

I have developed a reporting tool, vmhost_report.rb, that unifies all the presented methods: it runs them in sequence and gathers the results into a single, useful report. It can be run from any guest system; currently, Linux and nested ESXi guests are supported.

You can run it as “ruby vmhost_report.rb”. It will write a lot of useful information to the vmhost_report.log file.

These reports can be used to learn a lot about VMware internals, or about a particular guest system or network. You can find report examples in the paper’s “Annex A”.

Some of the described methods work even if the VMware Tools are disabled or not installed, or if some mechanisms are disabled by the host configuration. Some of the methods require “root” privileges, while others do not.
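
As an illustration of that privilege split, consider the DMI strings that Linux guests expose (a generic example of the kind of check involved, not necessarily one of the paper’s exact methods; on VMware guests the vendor string is “VMware, Inc.”):

# Unprivileged vs privileged fingerprinting via Linux DMI data.
def dmi(field)
  path = "/sys/class/dmi/id/#{field}"
  File.read(path).strip if File.readable?(path)
end

vendor  = dmi('sys_vendor')      # world-readable: no root required
product = dmi('product_name')    # e.g. "VMware Virtual Platform"
serial  = dmi('product_serial')  # typically readable by root only

puts "Running under VMware (#{product})" if vendor.to_s.include?('VMware')
puts "Serial available: #{!serial.nil?}"  # false unless run as root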


Anonymity tools: “whonix” (first impressions)

Just started trying “whonix”, an anonymity-hardened Linux distribution.

“whonix” is based on Debian Linux and uses TOR for all external connections.

Interestingly, it consists of two separate virtual machines, the “whonix workstation” and the “whonix gateway”. The machine where the user performs anonymous tasks (the “whonix workstation”) is isolated from the external physical network and can only communicate with the internet via the “whonix gateway”, which relays its traffic to the TOR network and on to the internet.

The main design decision behind “whonix” is that the user’s physical IP address should not be disclosed even if the “whonix workstation” is compromised by some types of malware. Indeed, the “whonix workstation” has no direct access to the external network, which is a very interesting concept.

I’ll be writing more about “whonix”, TOR and Anonymity soon.

Why Anonymity Matters (quoted from https://www.torproject.org/)

Tor protects you by bouncing your communications around a distributed network of relays run by volunteers all around the world: it prevents somebody watching your Internet connection from learning what sites you visit, and it prevents the sites you visit from learning your physical location.



Containers vs Hypervisors (Overview)

Containers seem like an interesting technology for many virtualization and cloud scenarios, especially for hosting providers looking to support more virtual machines or service instances on a physical host.

These are the main differences and advantages of containers compared to hypervisors:

  • Physical machine resource usage and scheduling work better than with a hypervisor, because there is only one kernel copy.
    • The same kernel serves all containers (see the sketch after this list)
      • slightly different kernel versions can be used in the containers
    • OS virtualization instead of physical hardware virtualization
    • Only Linux is supported
  • Better for overcommitting than hypervisors
    • More containers on a single physical host
      • fewer OS copies running
      • more profitable for companies serving hosted applications (SaaS or PaaS)
    • Faster memory and CPU rescheduling away from machines consuming resources guaranteed for other machines
      • faster and more transparent than “ballooning” in VMware
      • the physical address space is shared by all the containers and handled by a single kernel
        • on a hypervisor you get one more layer
  • https://coreos.com/
    • Minimal Linux distribution designed for containers
    • “CoreOS is designed to give you compute capacity that is dynamically scaled and managed, similar to how infrastructure might be managed at large web companies like Google.”
  • Very large companies like Google use containers rather than hypervisors.
  • Solaris and AIX also have their own (mutually incompatible) container technologies.
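
The shared-kernel point is easy to verify yourself. A minimal sketch, assuming a Docker CLI and a stock image such as “debian” (both are assumptions; any container runtime would do):

# Both commands report the same kernel version: the container runs on
# the host's kernel, whereas a full VM would boot and report its own.
host_kernel      = `uname -r`.strip
container_kernel = `docker run --rm debian uname -r`.strip

puts host_kernel                        # e.g. "4.4.0-21-generic"
puts host_kernel == container_kernel    # => true for containers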

I think it has some problems, though:

  • “Overcommitting” is bad for real-time applications, as events can be delayed and accumulate.
    • James Bottomley was remarkably honest about this issue: “This is the way to cheat your customers”
  • The lack of Windows support can also be a problem for some people; it seems that only Linux distributions are supported.

If you can port to Linux and don’t have too many real-time restrictions, containers look promising.