TY - GEN
T1 - Achieving high throughput by transparent network interface virtualization on multi-core systems
AU - Zhang, Huiyong
AU - Bai, Yuebin
AU - Li, Zhi
AU - Du, Niandong
AU - Yang, Wentao
PY - 2010
Y1 - 2010
N2 - Despite rapid development, achieving high performance in I/O virtualization remains a challenge. The paravirtualized driver domain I/O model used in Xen provides several advantages, including fault isolation, live migration, and hardware independence. However, the high CPU overhead of the driver domain leads to low throughput on high-bandwidth links. Direct I/O can achieve high performance, but at the cost of losing the benefits of the driver domain model. This paper presents software techniques and optimizations for achieving high-throughput network I/O virtualization under the driver domain model on multi-core systems. In our experiments on a multi-core system with a quad-port 1GbE NIC, we observe that the aggregate throughput of multiple guest VMs reaches only 2.2Gb/s, while the total link bandwidth is 4Gb/s. The low performance results from the inability of the driver domain to concurrently serve multiple guest VMs running bandwidth-intensive applications. Consequently, two approaches are proposed. First, a multi-tasklet netback is implemented to serve multiple netfronts concurrently. Second, a new event channel dispatch mechanism is implemented to balance events associated with network I/O across the VCPUs of the driver domain. To reduce the CPU overhead of the driver domain model, we also propose two optimizations: lowering the event frequency in netback and implementing LRO in netfront. Our experiments show that applying all of the above techniques improves the overall throughput from the original 2.2Gb/s to 3.7Gb/s while utilizing multi-core CPU resources efficiently. We believe the approaches in our study can be valuable for high-throughput I/O virtualization in the coming multi-core era.
KW - Component
KW - Experiment
KW - I/O
KW - Network
KW - Performance
KW - Xen
UR - https://www.scopus.com/pages/publications/79952414799
U2 - 10.1109/CloudCom.2010.62
DO - 10.1109/CloudCom.2010.62
M3 - Conference contribution
AN - SCOPUS:79952414799
SN - 9780769543024
T3 - Proceedings - 2nd IEEE International Conference on Cloud Computing Technology and Science, CloudCom 2010
SP - 250
EP - 257
BT - Proceedings - 2nd IEEE International Conference on Cloud Computing Technology and Science, CloudCom 2010
PB - IEEE Computer Society
T2 - 2nd IEEE International Conference on Cloud Computing Technology and Science, CloudCom 2010
Y2 - 30 November 2010 through 3 December 2010
ER -