If you have one physical processor on board, you really want to configure the VM with 1 processor and no more. A conservative approach to running one virtual machine under one host would be 1 processor and 3 cores. That leaves the host system one dedicated core, plus the potential to use the Hyper-Threading of the other cores for an extra advantage.
Another option would be 1 processor and 4 cores. If that works for you, I would recommend it. Workstation is fairly adept at not starving the host machine of processing power, so giving the VM the real numbers, especially on an i7, shouldn't hurt you. It also depends on the type of processor usage you expect and the number of virtual machines running concurrently. If you feel like maxing out everything, even at the expense of the host machine, go with 1 processor and 8 cores (counting the Hyper-Threaded logical cores). On my i7 975 Bloomfield, I simply set it to 1 processor and 4 cores, but I usually have only one VM running at a time.
If I knew I was running, say, 4 VMs at the same time, as you apparently are, I might consider limiting each VM to 1 processor and 1 core, dividing the physical cores evenly. However, even if you set all 4 VMs to 1 processor and 4 cores, the resource scheduling in Workstation would be capable of sorting it out for the most part.
Remember that configuring multiple virtual processors is designed for server motherboards that actually support more than one physical processor. You can get away with configuring your VM for multiple processors, for example 2 processors and 2 cores, but it may not be as efficient as sticking to the basics. It is important that the hypervisor has an accurate reflection of the conditions on the host computer when it allocates resources to the virtual machines.
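If you prefer to set this outside the GUI, the processor topology lives in the VM's .vmx file. As a sketch (the values shown are just the 1-processor, 4-core setup described above):

```
numvcpus = "4"
cpuid.coresPerSocket = "4"
```

Workstation derives the socket count as numvcpus divided by cpuid.coresPerSocket, so this presents 1 processor with 4 cores to the guest.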
Are you running each VM on a dedicated hard drive? When the VMs are on, open Resource Monitor on the host machine and see what is going on - it is very possible that you are maxing out disk I/O or memory. Use this as a baseline to see what the real bottleneck is. Bear in mind that running 4 operating systems from one hard drive will punish you just as it would without virtualization: unless you have accounted for memory and disk throughput, something is going to throw you a curve ball, so to speak. I would not recommend running all 4 VMs on one hard drive at all. Store each VM, or at most two VMs if you are short on space, on a different physical drive. The reason is quite simply that disk I/O is still the major bottleneck even on non-virtualized systems.
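The simplest way to spread them out is to create or move each VM's directory onto its own drive; where the virtual disk lives is also visible in the .vmx file. As an illustration only (the drive letter and file names here are made up):

```
scsi0:0.present = "TRUE"
scsi0:0.fileName = "D:\VMs\WinXP\WinXP.vmdk"
```

With one VM's .vmdk on D:, another on E:, and so on, the guests stop fighting over a single spindle.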
It is safe to say that you can achieve VM performance that is close to the host system, but by its very nature a virtual machine will never be as fast as the host; the virtualization overhead makes that mathematically impossible. The better the hardware gets, of course, the less you will notice the difference, and the less carefully you need to conserve your system resources. Test your configuration by launching all 4 operating systems at the same time with Resource Monitor open on the host. This will give you some idea of what is spiking. I am willing to wager that it is disk I/O or RAM.
Remember that if you are using each virtual machine in a limited role, you usually do not need to allocate as much RAM as a physical system would have. Your Windows XP and Linux systems can get by on 128MB-256MB of RAM when they boot with nothing installed. With Windows 7, you will want to allocate a bit more, perhaps 512MB-1GB as a baseline.
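In the .vmx file, guest RAM is the memsize key, in megabytes. A sketch for a lightweight Windows XP guest, using the low end of the range above:

```
memsize = "256"
```

Adjust per VM; four guests at these sizes still leave a Bloomfield-era host with plenty of its own RAM.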
By far, in my experience and that of many others, lack of RAM and of dedicated hard drive storage is what creates the big slowdown when you are running multiple VMs at once. Your processor has hardware virtualization support built in and can handle plenty of floating-point work, so I would be surprised if the CPU is really the problem, unless you are running CPU-intensive applications on all four systems.