Windows 7 Optimal number of processors for a virtual machine in VMware Workstation

nlovric · New Member · Joined Apr 16, 2011
I have VMware Workstation 7.1.5 build 491717 running on an HP Pavilion dv7-4110em (XE287EA#BED) with an Intel Core i7-720QM CPU (4 cores, 8 logical CPUs) and Windows 7 Ultimate 64-bit Service Pack 1. With all 4 cores active, the CPU runs at up to 1.6 GHz. In automatic Intel Turbo Boost mode, it runs 2 cores at up to 2.4 GHz, or one core at up to 2.8 GHz. I have never seen it go above 2.4 GHz.

What is the optimal number of processors and cores to configure VMware virtual machines with when running the following operating systems:
  • Windows XP Professional 64-bit Service Pack 2
  • Windows XP Professional 32-bit Service Pack 3
  • Windows 7 Ultimate 64-bit Service Pack 1
  • OpenSUSE 11.3 64-bit
My current configuration of 1 processor with 4 cores is slow relative to the same operating system running on the physical machine.
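As a starting point for any vCPU decision, it helps to separate physical cores from logical processors. A minimal Python sketch (the standard library only reports *logical* CPUs portably, so the physical-core count is passed in by hand; the function name is my own illustration):

```python
# Minimal sketch: report the CPU counts a VM-sizing decision starts from.
# os.cpu_count() returns *logical* processors (8 on this i7-720QM, which
# has 4 physical cores with Hyper-Threading); Python's standard library
# has no portable physical-core query, so that number is passed in here.
import os

def describe_cpu(physical_cores: int) -> str:
    logical = os.cpu_count() or 1
    threads_per_core = max(logical // physical_cores, 1)
    return (f"{physical_cores} physical cores, {logical} logical CPUs "
            f"({threads_per_core} threads/core)")

print(describe_cpu(4))
```

On the machine in question this would print "4 physical cores, 8 logical CPUs (2 threads/core)".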
 
You'll probably get a better answer over on the VMware Communities forum for VMware Workstation. They have some really knowledgeable posters there who can get into extremely complicated topics; I've seen posts they couldn't answer where they got a software engineer to post a solution. I think one core is the usual recommendation.
Joe
 
Normally, with Windows OSes, I go with 2, except versions like XP Home that can only use 1 CPU.

Linux in general doesn't require as many resources to run, so I give those OSes 1 core. I tried 2 cores with Mint 12 but could see no difference. This was using the free VMware Player. VMware Workstation has all of the features of VMware Player and more, so I see it as a fair comparison.

Cat
 
It seems to me that it's best to have more than 1 core, since Intel i7 processors present each physical core as 2 logical processors (Hyper-Threading) for better throughput. I have a 4-core Intel i7 processor with 8 logical CPUs. What I'd like is to optimize the virtual machines for running the physical OS plus 1 virtual machine at a time.

I have noticed a significant difference between 1 virtual core and 4 virtual cores with Windows XP Professional 32-bit, Windows XP Professional 64-bit, and Windows 7 Ultimate 64-bit. I am, however, unable to say whether there's a difference with Linux; perhaps there is one somewhere on the order of 10-20%, but I haven't really paid much attention.
 
If you have one physical processor on board, you really want to configure the VM with 1 processor and no more. A conservative approach to running one virtual machine on one host would be 1 processor and 3 cores. This leaves the host system one dedicated core, with the potential to use the Hyper-Threading of the other cores for an extra advantage.

Another option would be 1 processor and 4 cores. If that works for you, I would recommend it. Workstation is fairly adept at not consuming all of the processing power on the host machine, and giving it the real numbers, especially on an i7, shouldn't hurt you. It also depends on the type of processor usage you expect and the number of virtual machines running concurrently. If you feel like maxing out everything, even at the expense of the host machine, you would go with 1 processor and 8 cores. On my i7 975 Bloomfield, I simply set 1 processor and 4 cores, but I usually have only one VM running at a time.
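For reference, the processor topology lives in the VM's .vmx file. A sketch of the conservative 1-processor, 3-core layout might look like the fragment below (numvcpus is the total vCPU count and cpuid.coresPerSocket the cores per virtual socket; treat the exact keys and values as assumptions to verify against your own .vmx rather than as official settings):

```ini
# Hypothetical .vmx fragment: 1 virtual socket with 3 cores,
# leaving one physical core free for the host.
numvcpus = "3"
cpuid.coresPerSocket = "3"
```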

If I knew I was running, for instance, 4 VMs at the same time, as you apparently are, I might consider limiting each VM to 1 processor and 1 core, dividing the physical cores evenly. However, even if you set all 4 VMs to use 1 processor and 4 cores, the resource allocation in Workstation would be capable of sorting it out for the most part.
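The core-splitting heuristic above can be sketched in a few lines. The function name and the "reserve one core for the host" policy are my own illustration of the reasoning, not a VMware recommendation:

```python
# Hedged sketch: reserve one physical core for the host, divide the
# rest evenly among the concurrently running VMs, and never drop
# below 1 vCPU per VM.
def vcpus_per_vm(host_cores: int, concurrent_vms: int) -> int:
    usable = max(host_cores - 1, 1)          # leave the host a core
    return max(usable // concurrent_vms, 1)  # at least 1 vCPU each

# One VM on a quad-core host -> 3 vCPUs; four VMs -> 1 vCPU each.
print(vcpus_per_vm(4, 1), vcpus_per_vm(4, 4))  # -> 3 1
```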

Remember that multiple virtual processors were designed for server motherboards that actually support more than one physical processor. You can get away with configuring your VM for multiple processors, for example 2 processors with 2 cores each, but it may not be as efficient as sticking to the basics. It is important that the hypervisor has an accurate reflection of the conditions on the host computer so it can allocate resources to the virtual machines.

Are you running each VM on a dedicated hard drive? When these VMs are on, go into Resource Monitor on the host machine and see what is going on - it is very possible that you are maxing out disk I/O or memory. Use this as a baseline to find the real bottleneck. Running 4 operating systems from one hard drive will throw you a curve ball, so to speak, unless you have allocated memory and disk capacity with that contention in mind, just as it would if you could somehow do it without virtualization. I would not recommend running all 4 VMs on one hard drive at all. You want to store each VM, or at most two VMs if you are short on space, on a different physical drive. The reason is quite simply that disk I/O is still the major bottleneck even on non-virtualized systems.
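To put a number next to what Resource Monitor shows, a crude sequential-write baseline can be run on both the host and a guest and compared. This is only an illustration of the idea (the function name and 32 MiB size are my own choices), not a proper disk benchmark:

```python
# Crude sequential-write baseline: time how long it takes to write
# ~32 MiB to a temp file and report MB/s, so host and guest numbers
# can be compared side by side.
import os
import tempfile
import time

def sequential_write_mbps(size_mib: int = 32) -> float:
    block = b"\0" * (1024 * 1024)      # 1 MiB of zeros
    with tempfile.NamedTemporaryFile(delete=False) as f:
        start = time.perf_counter()
        for _ in range(size_mib):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())           # force the data out of the cache
        elapsed = time.perf_counter() - start
        path = f.name
    os.unlink(path)                    # clean up the temp file
    return size_mib / elapsed

print(f"{sequential_write_mbps():.1f} MB/s")
```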

It is safe to say that you can achieve VM performance close to that of the host system, but by its very nature, a virtual machine will never be as fast as the host; the virtualization overhead makes that mathematically impossible. The better hardware gets, of course, the less you will notice the difference, and the less carefully you need to conserve your system resources. Test your configuration by launching all 4 operating systems at the same time with Resource Monitor open on the host. This will give you some idea of what is spiking; I am willing to wager it is disk I/O or RAM.

Remember that if you are using each virtual machine in a limited role, you usually do not need to allocate as much RAM as a physical system would have. Your Windows XP and Linux guests can get by on 128-256 MB of RAM when they boot up with nothing installed. With Windows 7, you will want to allocate a bit more, perhaps 512 MB-1 GB as a baseline.
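The RAM-budgeting arithmetic can be sketched as below. The per-guest baselines and the 2 GiB host reserve are assumptions chosen for illustration from the ranges above, not measured values:

```python
# Hedged sketch of the RAM budget: per-guest baselines (XP/Linux 256 MB,
# Windows 7 1024 MB) plus a 2 GiB host reserve must fit in physical RAM.
GUEST_RAM_MB = {"xp_32": 256, "xp_64": 256, "win7_64": 1024, "opensuse": 256}
HOST_RESERVE_MB = 2048   # assumed host-side reserve, not a VMware figure

def fits_in_ram(host_ram_mb: int, running: list) -> bool:
    needed = HOST_RESERVE_MB + sum(GUEST_RAM_MB[g] for g in running)
    return needed <= host_ram_mb

# All four guests at once on an 8 GiB host still fits this budget.
print(fits_in_ram(8192, list(GUEST_RAM_MB)))  # -> True
```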

By far, lack of RAM and of dedicated drive storage is what creates the big slowdown when running multiple VMs; this has been my experience and that of many others. Your processor has hardware virtualization support built in and can handle plenty of floating-point work, so I would be surprised if the CPU were really the problem, unless you are running CPU-intensive applications on all four systems.
 
Well, I'm usually running 1 VM at a time. I've set up all VMs with 1 GiB of RAM, since the host has 8 GiB. Sometimes I'm installing or modifying another VM or two while keeping the first VM idle, with the other VM or two using a lot of resources. I've set up all VMs with 1 CPU and 4 cores, because that's the real number of cores and the real maximum number of parallel cycles (4 × 1.6 GHz, 2 × 2.4 GHz, or 1 × 2.8 GHz, although I've never seen my CPU go over 2.4 GHz, regardless of what I do) the guest system can use - logical processors are just an illusion.
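Spelling out that Turbo Boost arithmetic: the frequencies are from the post, and the point is that more active cores means a lower clock per core but more aggregate cycles, which is why a 4-vCPU guest can still win on parallel workloads.

```python
# Aggregate cycles per second for each active-core configuration of
# the i7-720QM, using the per-core turbo frequencies quoted above.
turbo_states = {4: 1.6, 2: 2.4, 1: 2.8}   # active cores -> GHz per core

aggregate = {cores: cores * ghz for cores, ghz in turbo_states.items()}
print(aggregate)   # -> {4: 6.4, 2: 4.8, 1: 2.8} aggregate GHz
```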

As for the I/O: Yes, that is the main problem with the whole machine, because my CPU doesn't have Intel AES-NI (Intel's optimized AES instruction set), so even my physical machine loses ¼ to ⅓ of its performance on disk I/O, because I have my entire disk encrypted with 256-bit AES in XTS mode (XEX-based tweaked-codebook mode with ciphertext stealing). It has just occurred to me that, perhaps, VMware writes disk I/O in smaller blocks at various locations of the virtual disk, thus lowering performance in contrast to file I/O on the physical machine. All my virtual disks are pre-allocated. For Windows virtual machines, I use at least 1 snapshot (of the basic, clean operating system, with Microsoft Update patches and basic 3rd-party programs and their patches). However, this still does not explain the slowness of the virtual machines, because they're noticeably slower than the physical machine even when there is almost no disk I/O, which is the situation most of the time.
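That ¼-⅓ overhead is easy to put into concrete numbers. A quick sketch, where the 80 MB/s raw figure is a hypothetical laptop-drive throughput chosen purely for illustration:

```python
# Effective throughput of an encrypted disk given the AES-without-AES-NI
# penalty range (1/4 to 1/3) described above. The 80 MB/s raw figure is
# a hypothetical example, not a measured value.
def encrypted_throughput(raw_mbps: float, penalty: float) -> float:
    return raw_mbps * (1 - penalty)

raw = 80.0  # hypothetical raw drive throughput, MB/s
low, high = encrypted_throughput(raw, 1/3), encrypted_throughput(raw, 1/4)
print(f"{low:.1f}-{high:.1f} MB/s effective")  # -> 53.3-60.0 MB/s effective
```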
 