96MB Low End VPS Review Part XXXVI – BuyVM KVM Series (Part 1 of 2)

BuyVM is perhaps one of the most frequently featured providers here, for a good reason: they sell really great servers at a really cheap price. So far, almost all $15/year providers have been unable to deliver a reasonably performing VPS for a prolonged period of time, except BuyVM. As a result, almost every time BuyVM adds new stock, it sells out in a relatively short time frame. So although it has been a few months since BuyVM announced their new KVM product line, I did not manage to get hold of a few KVMs until recently.

This review consists of two parts, both dedicated to BuyVM’s KVM series: the first covers the most budget-friendly option, the 128MB KVM VPS, and the second covers the mid-range 512MB KVM product. Why split it? Because many hosts use lower-range hardware for their budget nodes and almost all hosts pack more clients onto them, so the performance of the 128MB VPS could differ drastically from that of the 512MB VPS.

Basic Information and Set Up

Again, as per LowEndBox, here is what you are going to get for $25/year:

[Screenshot: the plan specifications as listed on LowEndBox]

The VPS is located in San Jose, CA. Unlike their OpenVZ product line, KVM setup is not instant, and for the 128MB KVM VPS there is an explicit warning on the sign-up page that this package cannot be upgraded any further.

The new VPS welcome email arrived at 6:02pm, 34 minutes after I received the payment confirmation email.

[Screenshot: the KVM welcome email]

As you can see, unlike the OpenVZ product line, the KVM welcome email has a few interesting pieces of information:

1. The user name used to access the SolusVM control panel is no longer your email address. Instead, it is an assigned user name starting with PonyVM, and I was told that because of restrictions with the KVM technology, they cannot change my SolusVM user name. As a result, after my accounts were merged I even lost the ability to log into SolusVM with my email address to control the OpenVZ VPS I have with BuyVM. It is a minor nuisance for most users, though.

2. There is a link to the wiki page, which contains instructions on how to install many of the OS distributions as well as some common troubleshooting tips, which could be handy when needed. Note that the 128MB KVM VPS is not sufficient to run any version of Windows, and Windows 2003 is the only Windows distribution that will work with 256MB of RAM. Furthermore, the CentOS installer probably won’t work with 128MB of memory either. BuyVM seems to have plans to develop some sort of burst RAM for the KVM VPS, but there is no ETA at the moment.

3. Unlike the OpenVZ setup, where the IP addresses and networking are configured when the OS template is loaded, you will need to set up the network yourself. With the graphical installer that I ran for Debian it turned out to be a pretty easy task: the network was configured automatically without any work on my end, so I did not get a chance to test the “hit or miss” part mentioned in the email.
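In case the installer does not take care of it, a minimal static configuration in /etc/network/interfaces would look something like the sketch below. The addresses here are placeholders rather than the ones actually assigned to my VPS; the real values are listed in the welcome email and the SolusVM panel:

# /etc/network/interfaces -- replace the placeholder addresses with your own
auto eth0
iface eth0 inet static
    address 198.51.100.10
    netmask 255.255.255.0
    gateway 198.51.100.1

# apply the change (Debian 6)
/etc/init.d/networking restart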

Logging into the WHMCS system over an HTTPS connection, you will be able to see the control panel for the KVM VPS under My Services:

[Screenshot: the KVM control panel under My Services]

The options on the control panel seem to work, and 17 IP addresses (16 IPv6 + 1 IPv4) make for quite a long list!

Once logged into the SolusVM control panel over an HTTPS connection with the assigned user name, a fairly standard older version of the SolusVM interface is shown:

[Screenshots: the SolusVM control panel]

Note that the button which is normally the emergency recovery console has been replaced by a button to access the VNC viewer, which is really handy since that is where you will need to install the OS. Also note that by default no OS is installed, even though the ISO of the distribution you chose during sign-up has been mounted (unless the Debian ISO is simply mounted by default).

You can change a few things from the main page, such as the boot order:

[Screenshot: boot order settings]

There are also three options for the NIC:

[Screenshot: NIC options]

As well as two options for the IDE driver:

[Screenshot: IDE driver options]

According to BuyVM’s wiki article, the Virtio driver apparently delivers better disk I/O performance.
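A rough way to check which driver the guest actually ended up with (generic Linux commands, not something from the wiki) is to look at the device names, since a Virtio disk shows up as /dev/vda rather than /dev/sda:

lspci | grep -i virtio              # lists any virtio block/network devices (needs pciutils)
ls /dev/sd? /dev/vd? 2>/dev/null    # sda = emulated IDE, vda = virtio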

There are also a few controls for mounting an ISO image on the CD-ROM drive, so you are free to install any OS you want, which is a significant advantage of KVM. Even if the required OS is not listed, I am pretty sure BuyVM would be able to mount the ISO if you gave them a legitimate download location:

[Screenshots: ISO mounting controls]

Interestingly, and this is the first time I have seen this among KVM providers, there is also a virtual floppy drive on which you can mount a disk image, although at the moment there is no option other than the VIRTIO drivers for Windows:

[Screenshot: virtual floppy drive options]

The VPS comes with 16 IPv6 addresses and 1 IPv4 address, and reverse DNS setup is instant on the IP addresses page:

[Screenshot: the IP addresses page]
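To confirm that a PTR record really did take effect, a quick lookup from the VPS itself will do. Something along these lines works (dig comes from the dnsutils package, and the address below is a placeholder):

apt-get install dnsutils
dig -x 198.51.100.10 +short     # should print the hostname that was just set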

Test on the VPS – 128MB KVM

As mentioned above, the VPS I signed up for has 128MB of RAM and 15GB of storage space and is located in San Jose, California. I installed 32-bit Debian 6 (netinstall) for testing.

With a fresh install of Debian, about 20MB of RAM is used:

free -m
             total       used       free     shared    buffers     cached
Mem:           121        102         19          0         19         63
-/+ buffers/cache:         20        101
Swap:          239          0        239

Note that the total RAM reported is 121MB instead of the 128MB allocated. I was told by another provider before that this is because the kernel itself reserves some memory. Furthermore, a swap partition was built automatically during the installation process.
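A quick way to confirm the installer-created swap (standard tools, nothing specific to this VPS):

swapon -s           # lists active swap devices and their sizes
cat /proc/swaps     # the same information straight from the kernel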

The top output showing the processes running:

top - 10:51:53 up 4 days, 12:26,  1 user,  load average: 0.00, 0.00, 0.00
Tasks:  60 total,   1 running,  59 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:    124888k total,   113608k used,    11280k free,    19684k buffers
Swap:   245752k total,        0k used,   245752k free,    72500k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 3136 root      20   0  8476 2948 2320 S  0.0  2.4   0:00.25 sshd
 3140 root      20   0  5648 2940 1452 S  0.0  2.4   0:00.20 bash
 1014 root      20   0 27448 1540 1024 S  0.0  1.2   0:00.16 rsyslogd
 3154 root      20   0  2332 1116  900 R  0.0  0.9   0:00.01 top
 1536 root      20   0  5496  984  592 S  0.0  0.8   0:00.01 sshd
 1330 Debian-e  20   0  6512  936  624 S  0.0  0.7   0:00.21 exim4
  376 root      16  -4  2392  836  420 S  0.0  0.7   0:00.06 udevd
  860 statd     20   0  1936  768  640 S  0.0  0.6   0:00.01 rpc.statd
 1115 root      20   0  3784  768  604 S  0.0  0.6   0:00.50 cron
  571 root      18  -2  2388  724  304 S  0.0  0.6   0:00.00 udevd
    1 root      20   0  2032  704  612 S  0.0  0.6   0:10.74 init
  570 root      18  -2  2388  628  212 S  0.0  0.5   0:00.00 udevd
  954 root      20   0  2332  612  344 S  0.0  0.5   0:00.02 dhclient
 1063 root      20   0  1704  588  480 S  0.0  0.5   0:00.01 acpid
 1566 root      20   0  1708  552  472 S  0.0  0.4   0:00.00 getty
 1349 root      20   0  1708  548  472 S  0.0  0.4   0:00.00 getty
 1350 root      20   0  1708  548  472 S  0.0  0.4   0:00.00 getty

And the htop output for the htop fans:

[Screenshot: htop output]

Less than 700MB of hard drive space was used after the fresh OS installation. I did not select any of the task packages (file server, FTP server and so on) during the installation, although I did have to install the SSH daemon so that I could access the system using PuTTY:

df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/
                       15G  654M   13G   5% /
tmpfs                  61M     0   61M   0% /lib/init/rw
udev                   57M  112K   57M   1% /dev
tmpfs                  61M     0   61M   0% /dev/shm
/dev/sda1             228M   15M  202M   7% /boot

As you can see, there are a lot more partitions compared to an OpenVZ setup, which typically has only a single partition, or at most two. Thanks to LVM in the Debian installer, I did not have to do any of the partitioning myself.
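For the record, getting the SSH daemon onto a fresh Debian netinstall is a one-liner; this is just the stock Debian package, nothing BuyVM-specific:

apt-get update && apt-get install -y openssh-server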

The inode limits are set pretty high:

 df -i
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/mapper/
                      952848   25270  927578    3% /
tmpfs                  15611       5   15606    1% /lib/init/rw
udev                   14394     561   13833    4% /dev
tmpfs                  15611       1   15610    1% /dev/shm
/dev/sda1             124496     222  124274    1% /boot

With the entire LNMP stack installed, approximately 30MB of RAM was in use:

 free -m
             total       used       free     shared    buffers     cached
Mem:           121        113          8          0          6         76
-/+ buffers/cache:         30         91
Swap:          239          4        235

Top output showing the stack running:

top - 01:19:11 up  1:14,  1 user,  load average: 0.00, 0.00, 0.14
Tasks:  70 total,   1 running,  69 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:    124888k total,   116816k used,     8072k free,     6452k buffers
Swap:   245752k total,     4868k used,   240884k free,    78508k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
31078 www       20   0 14504  10m  412 S  0.0  8.5   0:00.02 nginx
31068 root      20   0 24200 4564 1400 S  0.0  3.7   0:00.36 php-cgi
31069 www       20   0 24200 4172 1008 S  0.0  3.3   0:00.00 php-cgi
31070 www       20   0 24200 4172 1008 S  0.0  3.3   0:00.00 php-cgi
31071 www       20   0 24200 4172 1008 S  0.0  3.3   0:00.01 php-cgi
31072 www       20   0 24200 4172 1008 S  0.0  3.3   0:00.00 php-cgi
31073 www       20   0 24200 4172 1008 S  0.0  3.3   0:00.00 php-cgi
31104 root      20   0  6092 3388 1456 S  0.0  2.7   0:00.46 bash
31101 root      20   0  8468 2948 2324 S  0.0  2.4   0:00.35 sshd
11159 mysql     20   0 34808 1772  528 S  0.0  1.4   0:00.02 mysqld
31122 root      20   0  2332 1128  900 R  0.8  0.9   0:00.01 top
 1040 root      20   0 27448  960  744 S  0.0  0.8   0:00.07 rsyslogd
31076 root      20   0  4692  708  264 S  0.0  0.6   0:00.00 nginx
 1190 root      20   0  5496  540  436 S  0.0  0.4   0:00.00 sshd
    1 root      20   0  2032  520  492 S  0.0  0.4   0:01.07 init
 1366 Debian-e  20   0  6516  456  400 S  0.0  0.4   0:00.01 exim4
 1148 root      20   0  3784  448  400 S  0.0  0.4   0:00.05 cron

And the corresponding htop output:

[Screenshot: htop output]

Slightly more than 2GB of disk space is used:

 df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/root
                       15G  2.0G   12G  15% /
tmpfs                  61M     0   61M   0% /lib/init/rw
udev                   57M  112K   57M   1% /dev
tmpfs                  61M     0   61M   0% /dev/shm
/dev/sda1             228M   15M  202M   7% /boot

And about 8% of the inodes are used:

 df -i
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/mapper/root
                      952848   74327  878521    8% /
tmpfs                  15611       5   15606    1% /lib/init/rw
udev                   14394     561   13833    4% /dev
tmpfs                  15611       1   15610    1% /dev/shm
/dev/sda1             124496     222  124274    1% /boot

Interestingly, the output of vmstat seems to indicate the VPS has been fairly busy:

vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0   4868   8172   6468  78592    2    3   215   369  229  306 36 14 45  5

However the uptime output, which was run right after the vmstat output was obtained, seems pretty normal:

 uptime
 01:41:35 up  1:36,  1 user,  load average: 0.00, 0.00, 0.00

Out of curiosity, I decided to run the test again later, and this time the numbers look much better. (This makes sense: the first report that vmstat prints is an average since the last boot, so the earlier figures mostly reflect the OS and LNMP installation activity rather than any ongoing load.)

vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0   6204   3584  24136  52540    0    0     1     6    6   37  0  0 99  0

And the corresponding uptime output:

uptime
 10:38:17 up 8 days, 10:33,  1 user,  load average: 0.00, 0.00, 0.00
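A more representative way to watch current activity, rather than relying on vmstat's since-boot averages, is to ask for a few timed samples, for example:

vmstat 1 5      # five one-second samples; the first line is still the since-boot average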

The cpuinfo output shows that only one CPU core is available; fortunately it is not throttled, and the clock speed is 2.5GHz:

cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 13
model name      : QEMU Virtual CPU version (cpu64-rhel6)
stepping        : 3
cpu MHz         : 2499.862
cache size      : 4096 KB
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 4
wp              : yes
flags           : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm up pni cx16 hypervisor lahf_lm
bogomips        : 4999.72
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

And the meminfo output:

cat /proc/meminfo
MemTotal:         124888 kB
MemFree:           16892 kB
Buffers:           15112 kB
Cached:            53436 kB
SwapCached:         1348 kB
Active:            42700 kB
Inactive:          48944 kB
Active(anon):       9492 kB
Inactive(anon):    13616 kB
Active(file):      33208 kB
Inactive(file):    35328 kB
Unevictable:           0 kB
Mlocked:               0 kB
HighTotal:             0 kB
HighFree:              0 kB
LowTotal:         124888 kB
LowFree:           16892 kB
SwapTotal:        245752 kB
SwapFree:         240832 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:         22004 kB
Mapped:             5008 kB
Shmem:                16 kB
Slab:              13212 kB
SReclaimable:       9880 kB
SUnreclaim:         3332 kB
KernelStack:         592 kB
PageTables:          756 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:      308196 kB
Committed_AS:     111380 kB
VmallocTotal:     897028 kB
VmallocUsed:        5620 kB
VmallocChunk:     882292 kB
HardwareCorrupted:     0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       4096 kB
DirectMap4k:       12276 kB
DirectMap4M:      118784 kB

As well as the time sync output:

time sync

real    0m0.045s
user    0m0.000s
sys     0m0.008s

The disk I/O does not reach three digits, something that is common to see with BuyVM’s OpenVZ offerings; however it is nonetheless pretty high and better than that of many providers I have reviewed before:

dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 11.3061 s, 95.0 MB/s

Testing again showed a slightly worse but still decent result:

dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 11.9677 s, 89.7 MB/s

The ioping results are pretty stable as well, which is always a good sign:

ioping -c 10 .
4096 bytes from . (ext3 /dev/mapper/coco-root): request=1 time=0.4 ms
4096 bytes from . (ext3 /dev/mapper/coco-root): request=2 time=0.6 ms
4096 bytes from . (ext3 /dev/mapper/coco-root): request=3 time=0.5 ms
4096 bytes from . (ext3 /dev/mapper/coco-root): request=4 time=0.6 ms
4096 bytes from . (ext3 /dev/mapper/coco-root): request=5 time=0.5 ms
4096 bytes from . (ext3 /dev/mapper/coco-root): request=6 time=0.6 ms
4096 bytes from . (ext3 /dev/mapper/coco-root): request=7 time=0.6 ms
4096 bytes from . (ext3 /dev/mapper/coco-root): request=8 time=0.3 ms
4096 bytes from . (ext3 /dev/mapper/coco-root): request=9 time=0.5 ms
4096 bytes from . (ext3 /dev/mapper/coco-root): request=10 time=0.6 ms

--- . (ext3 /dev/mapper/coco-root) ioping statistics ---
10 requests completed in 9009.2 ms, 1954 iops, 7.6 mb/s
min/avg/max/mdev = 0.3/0.5/0.6/0.1 ms

And test again:

 ioping -c 10 .
4096 bytes from . (ext3 /dev/mapper/coco-root): request=1 time=0.4 ms
4096 bytes from . (ext3 /dev/mapper/coco-root): request=2 time=0.6 ms
4096 bytes from . (ext3 /dev/mapper/coco-root): request=3 time=0.7 ms
4096 bytes from . (ext3 /dev/mapper/coco-root): request=4 time=0.5 ms
4096 bytes from . (ext3 /dev/mapper/coco-root): request=5 time=0.6 ms
4096 bytes from . (ext3 /dev/mapper/coco-root): request=6 time=0.5 ms
4096 bytes from . (ext3 /dev/mapper/coco-root): request=7 time=0.5 ms
4096 bytes from . (ext3 /dev/mapper/coco-root): request=8 time=1.6 ms
4096 bytes from . (ext3 /dev/mapper/coco-root): request=9 time=1.1 ms
4096 bytes from . (ext3 /dev/mapper/coco-root): request=10 time=0.5 ms

--- . (ext3 /dev/mapper/coco-root) ioping statistics ---
10 requests completed in 9009.5 ms, 1437 iops, 5.6 mb/s
min/avg/max/mdev = 0.4/0.7/1.6/0.3 ms

The download speed, compared to many other aspects of this VPS, seems less impressive; I was only able to clock about 6.6MB/s in the Cachefly download test:

wget cachefly.cachefly.net/100mb.test -O /dev/null
--2011-11-13 01:43:00--  http://cachefly.cachefly.net/100mb.test
Resolving cachefly.cachefly.net... 205.234.175.175
Connecting to cachefly.cachefly.net|205.234.175.175|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `/dev/null'

100%[======================================>] 104,857,600 6.02M/s   in 15s

2011-11-13 01:43:15 (6.64 MB/s) - `/dev/null'

Upload tests appear to be pretty sluggish as well, probably because the upload speed for this VPS is just 5Mbps. These tests pull a 100MB file served from this VPS onto other machines.

The first one was run from my BuffaloVPS in Chicago, IL:

wget 209.141.53.21/100mb.test -O /dev/null
--2011-11-20 22:12:00--  http://209.141.53.21/100mb.test
Connecting to 209.141.53.21:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `/dev/null'

100%[======================================>] 104,857,600 1.35M/s   in 68s

2011-11-20 22:13:08 (1.47 MB/s) - `/dev/null' saved [104857600/104857600]

Unfortunately, other than BuyVM I do not have any VPS on the west coast at the moment, and I thought it would not be fair to test the network speed of one BuyVM VPS against another, so the only other upload test I have done was from my Quickweb VPS in London, UK:

wget 209.141.53.21/100mb.test -O /dev/null
--2011-11-20 22:48:23--  http://209.141.53.21/100mb.test
Connecting to 209.141.53.21:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `/dev/null'

100%[======================================>] 104,857,600  690K/s   in 2m 50s

2011-11-20 22:51:14 (601 KB/s) - `/dev/null' saved [104857600/104857600]
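For anyone wanting to reproduce this kind of upload test, all that is needed is a file served from the VPS under test and a wget from the remote machine. A throwaway setup (assuming Python 2 is available, as it is on a stock Debian 6 install) would be:

dd if=/dev/zero of=100mb.test bs=1M count=100     # create a 100MB dummy file
python -m SimpleHTTPServer 80                     # serve the current directory over HTTP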

Finally, benchmark time. I was a little disappointed by the UnixBench score, which is not bad by any means but far from impressive; I think this may be because only one CPU core is available:

   #    #  #    #  #  #    #          #####   ######  #    #   ####   #    #
   #    #  ##   #  #   #  #           #    #  #       ##   #  #    #  #    #
   #    #  # #  #  #    ##            #####   #####   # #  #  #       ######
   #    #  #  # #  #    ##            #    #  #       #  # #  #       #    #
   #    #  #   ##  #   #  #           #    #  #       #   ##  #    #  #    #
    ####   #    #  #  #    #          #####   ######  #    #   ####   #    #

   Version 5.1.3                      Based on the Byte Magazine Unix Benchmark

   Multi-CPU version                  Version 5 revisions by Ian Smith,
                                      Sunnyvale, CA, USA
   January 13, 2011                   johantheghost at yahoo period com


1 x Dhrystone 2 using register variables  1 2 3 4 5 6 7 8 9 10

1 x Double-Precision Whetstone  1 2 3 4 5 6 7 8 9 10

1 x Execl Throughput  1 2 3

1 x File Copy 1024 bufsize 2000 maxblocks  1 2 3

1 x File Copy 256 bufsize 500 maxblocks  1 2 3

1 x File Copy 4096 bufsize 8000 maxblocks  1 2 3

1 x Pipe Throughput  1 2 3 4 5 6 7 8 9 10

1 x Pipe-based Context Switching  1 2 3 4 5 6 7 8 9 10

1 x Process Creation  1 2 3

1 x System Call Overhead  1 2 3 4 5 6 7 8 9 10

1 x Shell Scripts (1 concurrent)  1 2 3

1 x Shell Scripts (8 concurrent)  1 2 3

========================================================================
   BYTE UNIX Benchmarks (Version 5.1.3)

   System: ******: GNU/Linux
   OS: GNU/Linux -- 2.6.32-5-686 -- #1 SMP Mon Oct 3 04:15:24 UTC 2011
   Machine: i686 (unknown)
   Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
   CPU 0: QEMU Virtual CPU version (cpu64-rhel6) (4999.7 bogomips)
          x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET
   01:36:35 up 1 day,  1:31,  1 user,  load average: 0.09, 0.04, 0.01; runlevel 2

------------------------------------------------------------------------
Benchmark Run: Mon Nov 14 2011 01:36:35 - 02:04:41
1 CPU in system; running 1 parallel copy of tests

Dhrystone 2 using register variables       10999519.0 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                     2192.0 MWIPS (10.0 s, 7 samples)
Execl Throughput                                853.1 lps   (29.8 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks        509030.5 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks          177497.6 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks        663822.2 KBps  (30.0 s, 2 samples)
Pipe Throughput                             1247695.4 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                  93366.3 lps   (10.0 s, 7 samples)
Process Creation                               1701.5 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                   1595.9 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                    205.1 lpm   (60.3 s, 2 samples)
System Call Overhead                        2006338.0 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0   10999519.0    942.5
Double-Precision Whetstone                       55.0       2192.0    398.6
Execl Throughput                                 43.0        853.1    198.4
File Copy 1024 bufsize 2000 maxblocks          3960.0     509030.5   1285.4
File Copy 256 bufsize 500 maxblocks            1655.0     177497.6   1072.5
File Copy 4096 bufsize 8000 maxblocks          5800.0     663822.2   1144.5
Pipe Throughput                               12440.0    1247695.4   1003.0
Pipe-based Context Switching                   4000.0      93366.3    233.4
Process Creation                                126.0       1701.5    135.0
Shell Scripts (1 concurrent)                     42.4       1595.9    376.4
Shell Scripts (8 concurrent)                      6.0        205.1    341.8
System Call Overhead                          15000.0    2006338.0   1337.6
                                                                   ========
System Benchmarks Index Score                                         541.8

Second run showed slightly worse results:

   #    #  #    #  #  #    #          #####   ######  #    #   ####   #    #
   #    #  ##   #  #   #  #           #    #  #       ##   #  #    #  #    #
   #    #  # #  #  #    ##            #####   #####   # #  #  #       ######
   #    #  #  # #  #    ##            #    #  #       #  # #  #       #    #
   #    #  #   ##  #   #  #           #    #  #       #   ##  #    #  #    #
    ####   #    #  #  #    #          #####   ######  #    #   ####   #    #

   Version 5.1.3                      Based on the Byte Magazine Unix Benchmark

   Multi-CPU version                  Version 5 revisions by Ian Smith,
                                      Sunnyvale, CA, USA
   January 13, 2011                   johantheghost at yahoo period com


1 x Dhrystone 2 using register variables  1 2 3 4 5 6 7 8 9 10

1 x Double-Precision Whetstone  1 2 3 4 5 6 7 8 9 10

1 x Execl Throughput  1 2 3

1 x File Copy 1024 bufsize 2000 maxblocks  1 2 3

1 x File Copy 256 bufsize 500 maxblocks  1 2 3

1 x File Copy 4096 bufsize 8000 maxblocks  1 2 3

1 x Pipe Throughput  1 2 3 4 5 6 7 8 9 10

1 x Pipe-based Context Switching  1 2 3 4 5 6 7 8 9 10

1 x Process Creation  1 2 3

1 x System Call Overhead  1 2 3 4 5 6 7 8 9 10

1 x Shell Scripts (1 concurrent)  1 2 3

1 x Shell Scripts (8 concurrent)  1 2 3

========================================================================
   BYTE UNIX Benchmarks (Version 5.1.3)

   System: *****: GNU/Linux
   OS: GNU/Linux -- 2.6.32-5-686 -- #1 SMP Mon Oct 3 04:15:24 UTC 2011
   Machine: i686 (unknown)
   Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
   CPU 0: QEMU Virtual CPU version (cpu64-rhel6) (4999.7 bogomips)
          x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET
   03:14:06 up 1 day,  3:09,  2 users,  load average: 0.00, 0.00, 0.00; runlevel 2

------------------------------------------------------------------------
Benchmark Run: Mon Nov 14 2011 03:14:06 - 03:42:10
1 CPU in system; running 1 parallel copy of tests

Dhrystone 2 using register variables       10705898.6 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                     2035.6 MWIPS (10.3 s, 7 samples)
Execl Throughput                                799.6 lps   (29.5 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks        465743.5 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks          173604.4 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks        757246.2 KBps  (30.0 s, 2 samples)
Pipe Throughput                             1202203.6 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                  91617.3 lps   (10.0 s, 7 samples)
Process Creation                               1522.5 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                   1622.1 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                    200.7 lpm   (60.2 s, 2 samples)
System Call Overhead                        1984013.3 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0   10705898.6    917.4
Double-Precision Whetstone                       55.0       2035.6    370.1
Execl Throughput                                 43.0        799.6    186.0
File Copy 1024 bufsize 2000 maxblocks          3960.0     465743.5   1176.1
File Copy 256 bufsize 500 maxblocks            1655.0     173604.4   1049.0
File Copy 4096 bufsize 8000 maxblocks          5800.0     757246.2   1305.6
Pipe Throughput                               12440.0    1202203.6    966.4
Pipe-based Context Switching                   4000.0      91617.3    229.0
Process Creation                                126.0       1522.5    120.8
Shell Scripts (1 concurrent)                     42.4       1622.1    382.6
Shell Scripts (8 concurrent)                      6.0        200.7    334.5
System Call Overhead                          15000.0    1984013.3   1322.7
                                                                   ========
System Benchmarks Index Score                                         527.1

I decided to give it yet another chance, and the result turned out to be even worse:

   #    #  #    #  #  #    #          #####   ######  #    #   ####   #    #
   #    #  ##   #  #   #  #           #    #  #       ##   #  #    #  #    #
   #    #  # #  #  #    ##            #####   #####   # #  #  #       ######
   #    #  #  # #  #    ##            #    #  #       #  # #  #       #    #
   #    #  #   ##  #   #  #           #    #  #       #   ##  #    #  #    #
    ####   #    #  #  #    #          #####   ######  #    #   ####   #    #

   Version 5.1.3                      Based on the Byte Magazine Unix Benchmark

   Multi-CPU version                  Version 5 revisions by Ian Smith,
                                      Sunnyvale, CA, USA
   January 13, 2011                   johantheghost at yahoo period com


1 x Dhrystone 2 using register variables  1 2 3 4 5 6 7 8 9 10

1 x Double-Precision Whetstone  1 2 3 4 5 6 7 8 9 10

1 x Execl Throughput  1 2 3

1 x File Copy 1024 bufsize 2000 maxblocks  1 2 3

1 x File Copy 256 bufsize 500 maxblocks  1 2 3

1 x File Copy 4096 bufsize 8000 maxblocks  1 2 3

1 x Pipe Throughput  1 2 3 4 5 6 7 8 9 10

1 x Pipe-based Context Switching  1 2 3 4 5 6 7 8 9 10

1 x Process Creation  1 2 3

1 x System Call Overhead  1 2 3 4 5 6 7 8 9 10

1 x Shell Scripts (1 concurrent)  1 2 3

1 x Shell Scripts (8 concurrent)  1 2 3

========================================================================
   BYTE UNIX Benchmarks (Version 5.1.3)

   System: *****: GNU/Linux
   OS: GNU/Linux -- 2.6.32-5-686 -- #1 SMP Mon Oct 3 04:15:24 UTC 2011
   Machine: i686 (unknown)
   Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
   CPU 0: QEMU Virtual CPU version (cpu64-rhel6) (4999.7 bogomips)
          x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET
   04:28:06 up 1 day,  4:23,  2 users,  load average: 0.00, 0.00, 0.06; runlevel 2

------------------------------------------------------------------------
Benchmark Run: Mon Nov 14 2011 04:28:06 - 04:56:10
1 CPU in system; running 1 parallel copy of tests

Dhrystone 2 using register variables       10381211.9 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                     1973.2 MWIPS (10.3 s, 7 samples)
Execl Throughput                                821.1 lps   (29.9 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks        471626.8 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks          168266.0 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks        653099.1 KBps  (30.0 s, 2 samples)
Pipe Throughput                             1170591.1 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                  87087.7 lps   (10.0 s, 7 samples)
Process Creation                               1749.4 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                   1574.4 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                    200.6 lpm   (60.3 s, 2 samples)
System Call Overhead                        1964213.9 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0   10381211.9    889.6
Double-Precision Whetstone                       55.0       1973.2    358.8
Execl Throughput                                 43.0        821.1    190.9
File Copy 1024 bufsize 2000 maxblocks          3960.0     471626.8   1191.0
File Copy 256 bufsize 500 maxblocks            1655.0     168266.0   1016.7
File Copy 4096 bufsize 8000 maxblocks          5800.0     653099.1   1126.0
Pipe Throughput                               12440.0    1170591.1    941.0
Pipe-based Context Switching                   4000.0      87087.7    217.7
Process Creation                                126.0       1749.4    138.8
Shell Scripts (1 concurrent)                     42.4       1574.4    371.3
Shell Scripts (8 concurrent)                      6.0        200.6    334.4
System Call Overhead                          15000.0    1964213.9   1309.5
                                                                   ========
System Benchmarks Index Score                                         519.3

The GeekBench results are pretty low as well:

System Information
  Platform:                  Linux x86 (32-bit)
  Compiler:                  GCC 4.1.2 20070925 (Red Hat 4.1.2-33)
  Operating System:          Linux 2.6.32-5-686 i686
  Model:                     Linux PC (QEMU Virtual CPU version (cpu64-rhel6))
  Motherboard:               Unknown Motherboard
  Processor:                 QEMU Virtual CPU version (cpu64-rhel6)
  Processor ID:              GenuineIntel Family 6 Model 13 Stepping 3
  Logical Processors:        1
  Physical Processors:       1
  Processor Frequency:       2.50 GHz
  L1 Instruction Cache:      64.0 KB
  L1 Data Cache:             64.0 KB
  L2 Cache:                  512 KB
  L3 Cache:                  0.00 B
  Bus Frequency:             0.00 Hz
  Memory:                    122 MB
  Memory Type:               N/A
  SIMD:                      1
  BIOS:                      N/A
  Processor Model:           QEMU Virtual CPU version (cpu64-rhel6)
  Processor Cores:           1

Integer
  Blowfish
    single-threaded scalar    1543 ||||||
    multi-threaded scalar     1878 |||||||
  Text Compress
    single-threaded scalar    1922 |||||||
    multi-threaded scalar     1831 |||||||
  Text Decompress
    single-threaded scalar    1698 ||||||
    multi-threaded scalar     1710 ||||||
  Image Compress
    single-threaded scalar    1663 ||||||
    multi-threaded scalar     1621 ||||||
  Image Decompress
    single-threaded scalar    1335 |||||
    multi-threaded scalar     1344 |||||
  Lua
    single-threaded scalar    3162 ||||||||||||
    multi-threaded scalar     3194 ||||||||||||

Floating Point
  Mandelbrot
    single-threaded scalar    1831 |||||||
    multi-threaded scalar     1855 |||||||
  Dot Product
    single-threaded scalar    3304 |||||||||||||
    multi-threaded scalar     3575 ||||||||||||||
    single-threaded vector    2488 |||||||||
    multi-threaded vector     2860 |||||||||||
  LU Decomposition
    single-threaded scalar    2117 ||||||||
    multi-threaded scalar     2119 ||||||||
  Primality Test
    single-threaded scalar    2647 ||||||||||
    multi-threaded scalar     2091 ||||||||
  Sharpen Image
    single-threaded scalar    5684 ||||||||||||||||||||||
    multi-threaded scalar     5648 ||||||||||||||||||||||
  Blur Image
    single-threaded scalar    4221 ||||||||||||||||
    multi-threaded scalar     4173 ||||||||||||||||

Memory
  Read Sequential
    single-threaded scalar    1620 ||||||
  Write Sequential
    single-threaded scalar    2426 |||||||||
  Stdlib Allocate
    single-threaded scalar    1881 |||||||
  Stdlib Write
    single-threaded scalar    1023 ||||
  Stdlib Copy
    single-threaded scalar     584 ||

Stream
  Stream Copy
    single-threaded scalar    1420 |||||
    single-threaded vector    1358 |||||
  Stream Scale
    single-threaded scalar     984 |||
    single-threaded vector    1321 |||||
  Stream Add
    single-threaded scalar    1377 |||||
    single-threaded vector    1247 ||||
  Stream Triad
    single-threaded scalar    1255 |||||
    single-threaded vector    1165 ||||

Integer Score:                1908 |||||||
Floating Point Score:         3186 ||||||||||||
Memory Score:                 1506 ||||||
Stream Score:                 1265 |||||

Overall Geekbench Score:      2210 ||||||||

Testing again produced exactly the same overall score, although the section subscores differ:

System Information
  Platform:                  Linux x86 (32-bit)
  Compiler:                  GCC 4.1.2 20070925 (Red Hat 4.1.2-33)
  Operating System:          Linux 2.6.32-5-686 i686
  Model:                     Linux PC (QEMU Virtual CPU version (cpu64-rhel6))
  Motherboard:               Unknown Motherboard
  Processor:                 QEMU Virtual CPU version (cpu64-rhel6)
  Processor ID:              GenuineIntel Family 6 Model 13 Stepping 3
  Logical Processors:        1
  Physical Processors:       1
  Processor Frequency:       2.50 GHz
  L1 Instruction Cache:      64.0 KB
  L1 Data Cache:             64.0 KB
  L2 Cache:                  512 KB
  L3 Cache:                  0.00 B
  Bus Frequency:             0.00 Hz
  Memory:                    122 MB
  Memory Type:               N/A
  SIMD:                      1
  BIOS:                      N/A
  Processor Model:           QEMU Virtual CPU version (cpu64-rhel6)
  Processor Cores:           1

Integer
  Blowfish
    single-threaded scalar    1690 ||||||
    multi-threaded scalar     1832 |||||||
  Text Compress
    single-threaded scalar    1918 |||||||
    multi-threaded scalar     1744 ||||||
  Text Decompress
    single-threaded scalar    1730 ||||||
    multi-threaded scalar     1737 ||||||
  Image Compress
    single-threaded scalar    1448 |||||
    multi-threaded scalar     1616 ||||||
  Image Decompress
    single-threaded scalar    1326 |||||
    multi-threaded scalar     1366 |||||
  Lua
    single-threaded scalar    3213 ||||||||||||
    multi-threaded scalar     3183 ||||||||||||

Floating Point
  Mandelbrot
    single-threaded scalar    1803 |||||||
    multi-threaded scalar     1825 |||||||
  Dot Product
    single-threaded scalar    3317 |||||||||||||
    multi-threaded scalar     3538 ||||||||||||||
    single-threaded vector    2445 |||||||||
    multi-threaded vector     2883 |||||||||||
  LU Decomposition
    single-threaded scalar    1991 |||||||
    multi-threaded scalar     1570 ||||||
  Primality Test
    single-threaded scalar    2670 ||||||||||
    multi-threaded scalar     2161 ||||||||
  Sharpen Image
    single-threaded scalar    5483 |||||||||||||||||||||
    multi-threaded scalar     4881 |||||||||||||||||||
  Blur Image
    single-threaded scalar    4177 ||||||||||||||||
    multi-threaded scalar     4096 ||||||||||||||||

Memory
  Read Sequential
    single-threaded scalar    1815 |||||||
  Write Sequential
    single-threaded scalar    2795 |||||||||||
  Stdlib Allocate
    single-threaded scalar    1930 |||||||
  Stdlib Write
    single-threaded scalar     905 |||
  Stdlib Copy
    single-threaded scalar     527 ||

Stream
  Stream Copy
    single-threaded scalar    1365 |||||
    single-threaded vector    1752 |||||||
  Stream Scale
    single-threaded scalar    1753 |||||||
    single-threaded vector    1795 |||||||
  Stream Add
    single-threaded scalar    1402 |||||
    single-threaded vector    1655 ||||||
  Stream Triad
    single-threaded scalar    1518 ||||||
    single-threaded vector    1259 |||||

Integer Score:                1900 |||||||
Floating Point Score:         3060 ||||||||||||
Memory Score:                 1594 ||||||
Stream Score:                 1562 ||||||

Overall Geekbench Score:      2210 ||||||||

 

Conclusion

As you can see, although the disk I/O seems pretty good, the fact that only one CPU core is available appears to hold the benchmark scores down, and the network speed could be better as well. However, for 25 USD per year there is really little to complain about, and this VPS is perfectly able to run as a basic web server or a VPN server, neither of which requires a lot of CPU-intensive work.

In the next post, we shall take a look at the performance of the 512MB VPS and see whether paying more gets better performance, along with a discussion of BuyVM’s customer service.

10 thoughts on “96MB Low End VPS Review Part XXXVI – BuyVM KVM Series (Part 1 of 2)”

  1. Just paid my third monthly invoice at BuyVM and loving the experience there! Usually I am a bit of a host hopper and move on from month to month however something they’re doing is keeping me there.

    Thanks for the review too 🙂

    • @ixape: you are most welcome! Let me guess…you chose them because of their control panel? Instant rDNS for both IPv4 and IPv6? Free internal traffic via LAN? Or something else? They do have a lot of goodies in their services, and I have to say that is something really great about them! 🙂

      • Don’t actually use IPV6 but it is an advantage – spare IPs in-case of a DoS attack to access the VM on.

        The free internal traffic via LAN is a very very nice plus combined with the 5GB free storage space. I upload files to the FTP space which isn’t restricted on bandwidth and then transfer it across to my VM over the local LAN.

  2. Your network speeds are because you’re using the RTL8139 NIC 🙂 If you swapped to either E1000 or Virtio it would work much better.

    Our panel, while looking like solus, is actually ‘stallion’ our own inhouse panel. We’re the only KVM provider that has full controls to swap ethernet & IDE drivers from the client side (everyone else has to use it on the admin end).

    Thanks for the review! 🙂

    Francisco

    • @Francisco: You are always welcome. Right, I forgot the part about Stallion, sorry about that 🙁
      I have just changed the NIC to VIRTIO, shut down and then booted the VPS from the control panel, and here is what I got:
      With BuffaloVPS in Chicago, IL:

      wget 209.141.53.21/100mb.test -O /dev/null
      --2011-11-22 08:55:23--  http://209.141.53.21/100mb.test
      Connecting to 209.141.53.21:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: 104857600 (100M) [application/octet-stream]
      Saving to: `/dev/null'

      100%[======================================>] 104,857,600 1.38M/s   in 77s

      2011-11-22 08:56:40 (1.30 MB/s) - `/dev/null' saved [104857600/104857600]

      Quickweb VPS in London, UK:

      wget 209.141.53.21/100mb.test -O /dev/null
      --2011-11-22 08:58:07--  http://209.141.53.21/100mb.test
      Connecting to 209.141.53.21:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: 104857600 (100M) [application/octet-stream]
      Saving to: `/dev/null'

      100%[======================================>] 104,857,600  793K/s   in 2m 43s

      2011-11-22 09:00:51 (627 KB/s) - `/dev/null' saved [104857600/104857600]

      I am not sure whether that is a significant improvement compared to the results I obtained using the RTL card (1.47MB/s from BuffaloVPS and 601KB/s from Quickweb) 🙁

      • download is better though:

        wget cachefly.cachefly.net/100mb.test -O /dev/null
        --2011-11-22 21:53:59--  http://cachefly.cachefly.net/100mb.test
        Resolving cachefly.cachefly.net... 205.234.175.175
        Connecting to cachefly.cachefly.net|205.234.175.175|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 104857600 (100M) [application/octet-stream]
        Saving to: `/dev/null'

        100%[======================================>] 104,857,600 8.87M/s   in 11s

        2011-11-22 21:54:10 (9.17 MB/s) - `/dev/null'

  3. You should still be more like 30 – 40MB/sec to cachefly on E1000 and like….80MB/sec on virtio_net :S Mind ticketing over it? It’s possible there’s a bigger issue at hand or you’re getting hit with some speed caps somewhere.

    I don’t remember asking for 128MB’s to be capped but it’s possible it’s happening!

    Francisco

    • @Francisco: I did another test just now and it looks a lot better:

      wget cachefly.cachefly.net/100mb.test -O /dev/null
      --2011-11-26 23:57:23--  http://cachefly.cachefly.net/100mb.test
      Resolving cachefly.cachefly.net... 205.234.175.175
      Connecting to cachefly.cachefly.net|205.234.175.175|:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: 104857600 (100M) [application/octet-stream]
      Saving to: `/dev/null'

      100%[======================================>] 104,857,600 32.8M/s   in 3.1s

      2011-11-26 23:57:27 (32.8 MB/s) - `/dev/null'

      I am using VIRTIO for the NIC at the moment, so although it is less than 80MB/sec, it is still pretty good….

