
Thursday, May 27, 2010

KVM in SUSE Linux Enterprise Server 11 SP 1

At Novell BrainShare Amsterdam last week, the Geekos were handing out SUSE Linux Enterprise Server 11 SP1 (RC4) at IT Central. So out came my DVD drive, and I installed it on a separate partition on my Thinkpad.

Since this is a Release Candidate (RC4) and not the final GA version due on 2nd June, I wanted a quick look at how KVM works, as it is now an officially supported hypervisor in SP1 (alongside the well-established Xen, shipped and supported since SLES 10 back in 2006).

I've been using Xen and helping partners & customers implement it for a few years, but had no hands-on knowledge of KVM, so I wanted to dive right in and see how far I'd get. I am very impressed with the engineering and thought put into how KVM is packaged with SLES 11 SP1. This is truly a fine example of an "Enterprise" grade product. I was able to virtualize an instance of SLES 11 SP1 and MS Windows 7 with no assistance. The KEY is that the tools (libvirt & virt-manager) for Xen, which I've been familiar with since SLES 10, can now manage both Xen and KVM in SLES 11 SP1. Whether you boot into the Xen kernel or the default kernel with the KVM module loaded, the same tools can be used, providing a consistent user interface. This is just brilliant!
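
For the curious, a minimal sketch of what this looks like from the command line; the connection URIs are the standard libvirt ones, and either can also be added as a connection in virt-manager:

virsh -c xen:/// list --all          # when booted into the Xen kernel
virsh -c qemu:///system list --all   # when booted into the default kernel with the KVM modules loaded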

For existing users of Xen, should they decide that KVM is a better alternative as it matures further down the road, they could easily switch without much hassle. More importantly, it's conceivable that any custom scripts (automation, failover or whatever) developed for Xen can be easily applied to KVM. This is much better than RIP-n-REPLACE in so many other "Enterprise" grade products.

Here are a few screenshots:



Cheers!

Monday, July 20, 2009

Converting VMWare image to SLES Xen

UPDATE (22 Jul '09): Ron Terry has uploaded his virt-tools RPM package online at http://download.opensuse.org/repositories/home:/roncterry/ . Hence, with regard to point 4 below in this entry, you could simply download and install the virt-tools-0.1.0-4.1.noarch.rpm and use the sparsify-disk script. Thanks Ron!

I had to conduct a demo of a product that only works on Windows. I could spend time setting up a Windows 2003 environment in Xen and then installing the product/s and performing the necessary demo configurations etc... but my colleague had it all set up and ready to rock in a VMWare image... heheheh... let the fun begin!

PS: Yes, I could simply reboot SLES 11 and use VMWare Server but I had all my other demos in Xen... besides, I really dig the fact that I can have virtualization capabilities using open source software.

1) Using the qemu-img-xen tool in SLES 11, I was able to convert the VMWare VMDK file (single file) into a raw sparse file. If your VMDK is split into multiple files (2GB each - gotta love FAT), you can use vdiskmanager (a VMWare tool) to combine all these little files into a single file before using qemu-img-xen (see the sketch after the command below).

qemu-img-xen convert demo.vmdk -O raw disk0
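
As an aside, if you first need to merge a split VMDK, something along these lines should do it with VMWare's vdiskmanager (the file names here are just examples):

vmware-vdiskmanager -r demo-split.vmdk -t 0 demo.vmdk   # -r source descriptor, -t 0 = single growable virtual disk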

That's all there is to it... EASY!... oh... unless of course, you have my luck and the original Windows VM is virtualized on VMWare using SCSI disk drivers. You can take a look at the VMX file to verify. Or you could attempt to boot up the Windows VM in Xen and have it hang with the oh-so-familiar BSOD (Blue Screen of Death).

2) With many thanks to Mr Ian Blenke and his blog entry, I managed to overcome the "Windows VM uses SCSI disk drivers and will not boot up in Xen because the disk is now IDE" challenge. The trick is to simply copy the appropriate IDE drivers and merge additional registry entries into the Windows VM. Next, perform step 1 and that's it!

For Windows 2003, the IDE drivers can be found in C:\WINDOWS\Driver Cache\i386\driver.cab. I extracted pciide.sys and copied it to the C:\WINDOWS\system32\drivers\ directory. Additionally, check and ensure that Atapi.sys, Intelide.sys and Pciidex.sys are also in C:\WINDOWS\system32\drivers\

The additional registry entries and instructions on merging with existing registry can be found at the latter half of the very long page at http://support.microsoft.com/kb/314082/

3) Optionally, and probably a good idea, I downloaded the latest Virtual Machine Driver Pack for Windows from http://download.novell.com (search term "virtual machine driver pack"). The direct link at the time of this writing is http://download.novell.com/Download?buildid=vscGA_iLH5k~

Download the Windows driver directly into the Windows Xen VM and double-click to install. Done!

I did notice an improvement in overall speed during boot (I/O bound) and when copying files over the network (Network I/O).

4) Finally, I did some spring cleaning within the Windows Xen VM... managed to reclaim 6GB of hard disk space. Now comes the next laborious bit: I need to re-sparsify the disk so as to translate these disk usage savings onto the physical hard disk drive. With much thanks to Ron Terry, I managed to do just that with a wonderful script he provided.

Since I did not get his kind permission to put that script up in the public domain, I will describe the process (and commands) in sparsifying the disk.

a) Mount the Xen disk on dom0:

xm block-attach 0 file:/directory/disk0 xvde w
mkdir -p /tmp/disk1
mount /dev/xvde1 /tmp/disk1

b) Use the dd command to fill the remaining free space with zeros, then remove the zero file:

dd if=/dev/zero of=/tmp/disk1/zerofile bs=1M
rm /tmp/disk1/zerofile

c) Unmount and detach the Xen disk from dom0 (51776 is the device number for xvde; xm block-list 0 shows it):

umount /tmp/disk1
xm block-detach 0 51776 -f

d) Make a copy of the Xen disk with the cp command and --sparse=always flag:

mv disk0 disk0.tmp
cp -a --sparse=always disk0.tmp disk0

PS: Please ensure you have free hard disk space equal to 1.5 to 2.0 times the size of the disk image file (disk0) on your system.
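
To sanity-check the result, compare the apparent size of the image against the blocks actually used on disk:

ls -lh disk0   # apparent (logical) size - unchanged
du -h disk0    # actual space used on the physical disk - should now be much smaller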

Life is good again.

Friday, May 8, 2009

Windows 7 RC virtualized on Xen

Yep, downloaded a copy of Windows 7 RC and a corresponding product key. Installed and now fully virtualized under SLES 11 Xen... no hacks, no tricks, it just worked... the way I like it. Yeah!



The installation experience for Windows 7 RC is smooth and painless because it made all my choices for me... how sweet?! All I had to do was tell it to do a fresh install and choose the target disk! Of course, post-installation requires me to create a user account and password, followed by activating the product with the product key. No fuss, no stress installation for an end user desktop.

The installation experience for SUSE Linux Enterprise Desktop 11 is pretty close too in terms of ease, and yet gives me more options at install time to configure the system the way I want it... I welcome the arrival of Windows 7; as iron sharpens iron, things just get better (progress), as opposed to a monopoly where innovation suffers.

Tuesday, May 5, 2009

Xen Bridged networking with NIC Bonding on SLES

We have a customer that uses the SLES Xen hypervisor (SLES 10 SP2 and now SLES 11) on their test and development server. The idea is simple: virtualize all their test and development servers on a single box with two quad-core CPUs, 32 GB RAM and a 1TB RAID-5 SATA array. This way, they save money on the number of physical machines in their test/development environment and improve utilization (not all projects run concurrently). Their production systems are still running on bare metal without virtualization... for now.

They've been having some networking issues (i.e. packet drops in ping tests) and suspected either the NIC hardware (a Broadcom 4-port Gb NIC with NIC bonding) or that the networking setup needs tweaking. Here's a journal of what I discovered and resolved (to a certain extent - hardware not in my scope):

1. Boot up SLES 11 (x86_64) with the default kernel (non-Xen) and configure the basic networking. Created a bond0 device that bonds 2 of the 4 physical ports for NIC bonding (static IP, DNS and routing info provided by the customer). Tested the configuration and it works: we can ping other servers, and development desktops can ping the server. Opened the port in SuSEFirewall and SSH sessions work. Used yast2 lan for configuring and verified (always a good thing) via the following files: /etc/hosts, /etc/resolv.conf, /etc/sysconfig/network/ifcfg-bond0. A sample ifcfg-bond0 is sketched below.
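
For reference, here is a sketch of what /etc/sysconfig/network/ifcfg-bond0 might contain after this step (the IP values, slave interfaces and bonding mode are illustrative; use whatever YaST wrote for your hardware):

BOOTPROTO='static'
STARTMODE='auto'
BONDING_MASTER='yes'
BONDING_SLAVE0='eth0'
BONDING_SLAVE1='eth1'
BONDING_MODULE_OPTS='mode=active-backup miimon=100'
IPADDR='192.168.0.10'
NETMASK='255.255.255.0'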

The following steps are done with reference to this Novell support document "Hassle-free Xen Networking" at this link.

2. Restart SLES 11 with the Xen kernel. Verify the following entry is commented out in /etc/xen/xend-config.sxp

## (network-script network-bridge)

and instead have the following:

(network-script )

Restart Xen via rcxend restart.

3. Created a bridge interface called br0 and bridged it to bond0. Moved all the static IP settings from bond0 to br0. In place of the now empty bond0 (zero IP settings), I placed static IP 0.0.0.0 and Subnet Mask /32 (or 255.255.255.255). Verify that br0 is the interface holding the static IP while bond0 has none. Also verify all the network pings between the servers and the development desktops. A sketch of both files follows below.
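
A sketch of the two files after this step (IP values are illustrative; any BONDING_* lines in ifcfg-bond0 stay as they were):

/etc/sysconfig/network/ifcfg-br0:
BRIDGE='yes'
STARTMODE='onboot'
BRIDGE_PORTS='bond0'
BOOTPROTO='static'
IPADDR='192.168.0.10'
NETMASK='255.255.255.0'

/etc/sysconfig/network/ifcfg-bond0 (IP settings emptied out):
BOOTPROTO='static'
STARTMODE='onboot'
IPADDR='0.0.0.0'
NETMASK='255.255.255.255'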

You can see this in more detail as referenced earlier at this link.

4. Almost there! Next, I just needed to verify and update virtual NIC settings in all domUs (VMs). By inspecting each VM configuration file in /etc/xen/vm/, we need to amend the vif variable/s (see example below):

from: vif=[ 'mac=00:16:3e:24:96:38', ]
to: vif=[ 'mac=00:16:3e:24:96:38,bridge=br0', ]

Don't forget to refresh the configuration via xm delete [VM] and xm new [VM] where [VM] is the name of the domU (VM).

Done! The customer can ping the host (SLES 11) and their domUs (SLES, RHEL and Windows) successfully without any packet drops.

Tuesday, April 7, 2009

nVidia 3D desktop effects with Xen on SLED 11


YES! I did it! I've moved all my machines from SUSE Linux Enterprise 10 SP2 to SUSE Linux Enterprise 11. As you can see from the screenshot above, 3D effects (Desktop Cube) are running, with Windows XP as a virtual guest on Xen on the right.

Those who know me understand that this is not just about popping in a DVD and installing a vanilla OS. To be truly productive in my environment, I need to be sure (i.e. test, test, test) that this new SLE 11 can meet and/or exceed my expectations on function/feature and interoperability with my essential applications.

So here is my first entry on how I customized my SUSE Linux Enterprise Desktop 11. I need desktop applications and eye-candy (3D effects) as well as the Xen virtualization engine. On my Thinkpad T61p, I have an nVidia Quadro FX570M. I want to be able to boot into a Xen kernel and still have the wonderful 3D desktop effects.

Pre-Reqs: Ensure you have installed the C/C++ Compiler and Tools installation pattern and the kernel-source package.

1) Download the latest nVidia Linux driver. At the time of this writing, it's version 180.44. To find out which is the latest driver, check out this sticky thread on nvnews.net. Alternatively, you can go directly to nvidia.com, provide information on your card and have it re-direct you to the right driver via this link.

Note 1: Since this is a proprietary driver, obviously it would not be packaged with the default SLED 11. Nonetheless, upon vanilla install, the open source nv driver did a good job and was able to display high-res (1920x1200). Unfortunately, I did not have as much luck enabling the 3D desktop effects.

Note 2: The steps below are considered the manual/hard way of installing nVidia drivers. The advantage is that you can use the latest (greatest?) driver from nVidia immediately. However, the downside is that you will need to recompile this driver every time you upgrade your Linux kernel (all other updates are ok). The easier way, also known as the repository way, is to use the nVidia Online Repository via YaST to download and automatically install pre-compiled drivers. Please refer to these online docs [here and here] for a general comparison of these 2 install options.


2) With SLED 11 (default, non-Xen kernel) running, switch to TTY1 (Ctrl-Alt-F1). Log in as root and switch to runlevel 3 via the command <init 3>. Execute the NVIDIA driver installer you have downloaded. You might need to make it executable first via <chmod +x NVIDIA-Linux-x86_64-180.44-pkg2.run> and then execute it with <./NVIDIA-Linux-x86_64-180.44-pkg2.run>

The driver installation program is rather straightforward (most of the time just hitting Enter and choosing Yes). The program will inform you it cannot find any pre-compiled driver; just hit Enter and it will go ahead and compile the driver for SLED 11. If you are running a 64-bit SLED 11, it will offer to install 32-bit compatible "stuff", just accept it with Yes. Finally, it will offer to edit your configuration files, accept it with Yes. Easy.

Finally, edit the /etc/X11/xorg.conf file and make sure the Composite option is set to "on" in the following section:

Section "Extensions"
Option "Composite" "on"
EndSection


3) Test that the nVidia driver works by going back to runlevel 5 via the command <init 5>. You should get the full GUI. Log in and enable the 3D desktop effects via Computer -> Control Center -> Look & Feel -> Desktop Effects. Check the Enable desktop effects checkbox.

Tip: After clicking on the checkbox, a dialog box asking if you want to keep the new settings appears. WAIT! DO NOT click Yes for another 3-5 seconds. This allows time for the 3D engine to initialize, and should anything go wrong, it will revert back to 2D in 30 seconds automatically. Only when you see 3D effects (i.e. try moving the opened windows around), click Yes on the dialog box to keep these settings.


4) As root, copy the compiled nvidia driver to another location and rename it for safekeeping. This is important because you will need to run the nVidia installer program again against the Xen kernel, and that process will override and remove the previously compiled driver for the default kernel.

As root and in your root directory (/root), execute the following command:
< cp /lib/modules/2.6.27.19-5-default/kernel/drivers/video/nvidia.ko ./nvidia.ko.default >


5) Reboot SLED 11 into the Xen kernel. You should fail to enter runlevel 5 (full GUI) as the required nVidia driver for Xen has not been compiled yet. Log in as root on TTY1 and re-run the nVidia installation program, but do it via this "magic" line:

< IGNORE_XEN_PRESENCE=y CC="gcc -DNV_VMAP_4_PRESENT -DNV_SIGNAL_STRUCT_RLIM" ./NVIDIA-Linux-x86_64-180.44-pkg2.run >

The nVidia driver installation program will run again (see step 2) and this time, it will compile the driver against the Xen kernel. I highly recommend you keep a copy of the newly compiled nVidia driver:

< cp /lib/modules/2.6.27.19-5-xen/kernel/drivers/video/nvidia.ko ./nvidia.ko.xen >

Once successfully completed, go into runlevel 5 and voila!

6) To ensure that you have the full desktop GUI and 3D effects on both the SLED 11 default and Xen kernels, verify and copy the correct <nvidia.ko> files previously backed up (nvidia.ko.default and nvidia.ko.xen) respectively to the locations below (a command sketch follows):

(nvidia.ko.default) -> /lib/modules/2.6.27.19-5-default/kernel/drivers/video/nvidia.ko
(nvidia.ko.xen) -> /lib/modules/2.6.27.19-5-xen/kernel/drivers/video/nvidia.ko
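
A quick sketch of the copy-back commands, assuming the backups were kept in /root as in steps 4 and 5:

cp /root/nvidia.ko.default /lib/modules/2.6.27.19-5-default/kernel/drivers/video/nvidia.ko
cp /root/nvidia.ko.xen /lib/modules/2.6.27.19-5-xen/kernel/drivers/video/nvidia.ko
depmod -a 2.6.27.19-5-default
depmod -a 2.6.27.19-5-xen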

Done and done!

Wednesday, January 7, 2009

Field Notes: Installing Virtual Machine Driver Pack for RHEL 5

The SUSE Linux Enterprise Virtual Machine Driver Pack (or VMDP) is a set of drivers that enhances the disk and network I/O of a fully virtualized (FV) virtual machine.

I was installing VMDP on SLES 10 SP2 this week at a customer site. They have a fully-virtualized (FV) RHEL 5.2 (i386) domU. The documentation on how to install this is fairly straightforward and can be found at this LINK.

Here are a few additional things I did to complete the task:

1. Mounting the virtual CD-ROM

I could use virt-manager (GUI) to mount the 2MB ISO file for RHEL 5 32-bit from the /opt/novell/ directories. However, from within the RHEL domU, the CD-ROM icon appears on the desktop but double-clicking it launches the CD/DVD Creator software!!?? It appears that RHEL thinks the mounted virtual CD-ROM is blank. I had to use the terminal to figure out, with fdisk -l, that the attached virtual CD-ROM is /dev/hdc. I then manually mounted it at /mnt/cdrom using mount -t iso9660 -o ro /dev/hdc /mnt/cdrom.

2. Installing the 2 rpms fails because the default RHEL install does not have the rpm-build package

Yep, installing the 2 VMDP RPMs will fail with the error "cannot create /usr/src/redhat/SOURCES". It appears that the rpm-build package is not installed with a default install of RHEL. Easily resolved by mounting the RHEL 5.2 DVD and installing the rpm-build package (see the sketch below).
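
A rough sketch of that fix from a terminal (the /dev/hdd device and DVD layout are assumptions; adjust to wherever your RHEL 5.2 media shows up):

mkdir -p /media/rheldvd
mount -t iso9660 -o ro /dev/hdd /media/rheldvd
rpm -ivh /media/rheldvd/Server/rpm-build-*.rpm   # pull any missing dependencies from the same Server/ directory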

Done.

Friday, January 2, 2009

nVidia 3D desktop effects with XEN on SLED 10 SP2

Here's my first post for the new year 2009...

Having 3D desktop effects (i.e. Compiz) on SUSE Linux Enterprise Desktop (SLED) 10 is not a new thing, since it's been available for a few years. However, if you are using Xen virtualization with SLED, you would have realized that the Xen kernel doesn't play nice with these fancy graphics.

About a year ago, some brilliant and kind soul published how he got his Thinkpad T61p with nVidia graphics card to work with Xen and also enabled the 3D desktop effects. An awesome article (link here).

However, the technique used required a specific patch to be applied to a specific nVidia driver. About a month or so ago, I came across another entry online that talks about using the latest (and greatest) nVidia driver with Xen ... WITHOUT patching the nVidia driver!

Here are the summarized steps (link and credit to muchologo's original entry on nvnews forum):

Pre-reqs: Ensure you have installed kernel-source and C/C++ tools.
  1. Download the desired nVidia driver. Link to good nVidia driver info.
  2. Install the nVidia driver per instructions on a non-Xen SLED 10 and check that 3D desktop effects works. Link to custom install of nVidia driver on SUSE.
  3. Reboot into the Xen SLED 10. Basic desktop GUI should fail to work (that's normal).
  4. Prepare the kernel:
    • cd /usr/src/linux
    • cp arch/x86_64/defconfig.xen .config
    • If you are using 32bit kernel, change x86_64 to i386
    • make oldconfig && make scripts && make prepare
  5. At the default TTY1, login as root, expand the nVidia driver package via:
    • ./NVIDIA-Linux-x86_64-177.82-pkg2.run --extract-only
    • You could be using a newer driver/arch, the above is just a sample using version 177.82 and x86_64 arch.
  6. Here's the magic:
    • IGNORE_XEN_PRESENCE=y CC="gcc -DNV_VMAP_4_PRESENT -DNV_SIGNAL_STRUCT_RLIM" make SYSSRC=/usr/src/linux module
    • This will compile a new nvidia.ko module that will work with Xen
  7. Now, copy the newly compiled nvidia.ko into place and restart the nvidia module as follows:
    • cp nvidia.ko /lib/modules/$(uname -r)/kernel/drivers/video/ (with the Xen kernel booted, uname -r resolves to the ...-xen kernel version)
    • cd /lib/modules/$(uname -r)/kernel/drivers/video/
    • depmod -a
    • modprobe nvidia
    • startx
  8. All of these steps only need to be performed once for each new driver or kernel update.
Have fun!

Friday, October 10, 2008

XEN/ZOS training for partners - Oct 2008

Finally have the opportunity to work with Jo De Baer on a 4.5 day XEN/ZOS training workshop for our Singapore and Malaysia business partners this week.

Wishing them all the best for the ZOS certification exam.


PS: ZOS is short for ZENworks Orchestrator.

Thursday, October 9, 2008

XEN Live Migration with iSCSI & OCFS2

It's been slow in coming, but I previously promised to blog about setting up iSCSI / OCFS2 as a means to share XEN domU images so as to facilitate Live Migration. Here we go...

Background:
Live migration in XEN is the moving of a running virtual machine (domU) from one physical machine to another. XEN will replicate the running domU's RAM across the wire between the source and target physical machines (both running XEN and having the right configurations, of course - see the sketch below). It is up to us to ensure that the target machine has read/write access to the domU's virtual disks.
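
As a sketch of what "the right configurations" usually involves on the Xen side, the relocation settings in /etc/xen/xend-config.sxp on both hosts must allow incoming migrations (values here are illustrative; tighten xend-relocation-hosts-allow for anything beyond a lab):

(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-address '')
(xend-relocation-hosts-allow '')

Restart xend via rcxend restart on both hosts after changing these.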

Hence, the most common way to do this is via NFS. For most, especially when setting up a demo system or in a class setting, this is sufficient. For larger scale deployments, customers usually have a SAN environment and that's that.

What about folks who are between the demo/class and larger-scale deployment scenarios? This is where this iSCSI / OCFS2 solution fits nicely. This is also commonly known as the poor man's SAN. There are quite a number of articles online that discuss this in detail (like pointing out the various SPOFs) so I'll not bother to blog about it here. Here's a good link for further reading though -> Build your own iSCSI SAN appliance, save money.

Solution:

SUSE Linux Enterprise Server 10 has both iSCSI and OCFS2 support. The following is based on SLES 10 SP2. iSCSI is a means to present a remote disk via TCP/IP to a local server just as if it were a local SCSI device; hence, the little i stands for internet. OCFS2 is a proven cluster-aware filesystem, and it's the recommended filesystem when deploying Oracle RAC on Linux.

Here are the steps in general; you can find the details via the online SLES documentation (or a one-time 9MB PDF download).

Link to documentation -> http://www.novell.com/documentation/sles10/index.html
Link to the specific section (12 & 14) -> http://www.novell.com/documentation/sles10/sles_admin/data/part_administration.html

Step 1: Setting aside a disk (or partition) to be shared via iSCSI

With reference to section 12.1, you will need to identify a disk or a disk partition to be used. Next, use YaST -> Misc -> iSCSI Target to configure it (if the iSCSI target software is not installed, it will be installed at this time).

Step 2: Configuring XEN servers as iSCSI clients to the shared disk

With reference to section 12.2, we configure the XEN servers to connect using YaST -> Network Services -> iSCSI Initiator so that the shared disk (or partition) will appear as local.

Step 3: Installing and Configuring OCFS2 on all servers

If you are not familiar with OCFS2, it's recommended that you read sections 14.0 through 14.4.

From section 14.5, we install OCFS2 packages on all servers (storage and XEN servers). In section 14.6, we configure OCFS2 and format the disk (or partition) on the storage server.

In section 14.7, we configure OCFS2 to mount the shared storage on a common directory on the XEN servers (i.e. /mnt/xen/). Condensed to commands, it looks roughly like the sketch below.
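
The format-and-mount part, condensed (the device name /dev/sdb1 and the label are assumptions; your shared iSCSI disk may appear under a different name):

mkfs.ocfs2 -L xenimages /dev/sdb1    # run once, against the shared iSCSI disk
mkdir -p /mnt/xen
mount -t ocfs2 /dev/sdb1 /mnt/xen    # repeat on each XEN server, with the o2cb cluster service running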

Step 4: Edit the domU configuration file

Copy the XEN domU configuration and virtual disk (eg disk0) into the mounted /mnt/xen/vm and /mnt/xen/images subdirectories respectively.

You will need to edit the domU VM configuration file to ensure that the disk parameter is now pointing to the new virtual disk (eg disk0) location in /mnt/xen/images/.
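
For illustration, the amended disk line in the domU configuration file might look something like this (the xvda device name is an assumption; keep whatever device your VM already uses):

disk=[ 'file:/mnt/xen/images/disk0,xvda,w', ]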

Don't forget to propagate these changes via xm delete and xm new commands.

Step 5: Live Migrate!

Live migrate the running domU from one physical XEN server to another via:

xm migrate [domU ID/Name] [target server hostname/IP] --live


Good luck.

Friday, July 11, 2008

Xen in Thailand

Had the pleasure of speaking and teaching a SLES XEN Virtualization workshop at the IBM System x Technical University in Bangkok on Thursday (10/7) and Friday (11/7).

It was raining outside and a traffic jam was building up... so glad to be indoors.

The class had about 20 notebooks from Dell, Compaq and Lenovo.

My setup: the Geeko-skinned Thinkpad T61p, a whole lot of Geeko souvenirs, and FREE SLES/SLED 10 SP1 (both 32 & 64 bit) DVDs to give away. It feels good giving away software and knowing that I'm not a pirate. Gotta love it.

Acknowledgement & thanks to Ross Brunson and his team for creating the baseline enablement materials. I customized them a little to include PlateSpin in the presentation, and did live demos of Win2003 & Win2008 virtualized on SUSE Linux Enterprise Server 10 SP2 with ZENworks Orchestrator, plus a little sneak peek at PlateSpin PowerRecon and PowerConvert. Boy... that was a mouthful.

Class photos for Day 1 and 2.


Overall, a good and fruitful trip.

Wednesday, June 18, 2008

Simple NAT setup with Xen

When using Xen in SUSE Linux Enterprise Server 10, the default network configuration is BRIDGE networking. Every virtual OS (domU) will have a unique IP in the same range as that of the physical network card. For example, if the physical network card (eth0) has an IP of 192.168.0.10, each domU will have an IP address in the range of 192.168.0.X. This means that the host OS (dom0) AND, more importantly, other machines on the network will be able to ping each domU, as it appears to be just another machine on the network with an IP.

What if you want to set up a private network for a set of virtual machines running on dom0? What if you want this private network to still be able to access the WWW (when available)?

I found myself in this predicament as I have a set of virtual machines (Windows, SLES, SLED etc) running on my Thinkpad T61p. The nature of my [mobile] work is such that there isn't always a LAN cable or a wifi connection readily available. Regardless of my network environment, I need to perform testing and demonstrations of software running on these virtual machines. Thus, I need a flexible setup with private networking for my domUs, with Network Address Translation (NAT) for accessing the WWW when a physical LAN or Wifi becomes available.

Thanks to Till and Kai, my new German connections, the following is how I did it on SLED 10 SP2* ...

* - Note that official production support for Xen is for SLES only. I'm using SLED as a development & testing desktop and the following steps will work on SLES as well.

Attention: Linux commands in square brackets [ ] are executed as root.

1) Stop Xen daemon with [ rcxend stop ]

2) Remove the default bridge networking by editing the config file /etc/xen/xend-config.sxp. Look for the following 2 lines and comment them out with hashes ##:
(network-script network-bridge)
(vif-script vif-bridge)
becomes
##(network-script network-bridge)
##(vif-script vif-bridge)

3) Setting up the bridge to physical network (eg eth0)

Create a network bridge br0 to the physical network device (ie eth0) by creating the file /etc/sysconfig/network/ifcfg-br0 with the following contents:
BRIDGE='yes'
STARTMODE='onboot'
BRIDGE_PORTS='eth0'
BOOTPROTO='dhcp'
BROADCAST=''

For br0 to work, it has to be the device that holds the IP address. Therefore, change the physical network device (ie eth0) to not start the DHCP client routine, by editing the /etc/sysconfig/network/ifcfg-eth-id-xxx file with the following setting:
BOOTPROTO='none'

4) Setting up a private network (eg. 192.168.1.x)

Create a private network bridge br1 by creating the file /etc/sysconfig/network/ifcfg-br1 with the following contents:
BRIDGE='yes'
STARTMODE='onboot'
IPADDR='192.168.1.1'
NETMASK='255.255.255.0'

5) Changing the firewall settings for br0 and br1.

Change the firewall to allow network traffic for br0 and br1 as an external and internal network device respectively. Edit the file /etc/sysconfig/SuSEfirewall2 and change the following settings as shown below:
FW_DEV_EXT="br0"
FW_DEV_INT="br1"
FW_ROUTE="yes"
FW_MASQUERADE="yes"

6) Restart networking and start Xen daemon

Execute the following in order:
[ SuSEconfig ]
[ rcnetwork restart ]
[ rcxend start ]

7) Edit each domUs config in /etc/xen/vm directory to include ,bridge=br1 as follows:
From:
vif=[ 'mac=00:16:3e:75:06:c3,model=rtl8139,type=ioemu', ]
To:
vif=[ 'mac=00:16:3e:75:06:c3,model=rtl8139,type=ioemu,bridge=br1', ]

Refresh this change for each domU via:
[ xm delete domUName ]
[ xm new domUName ]

8) Boot up your virtual machines (domUs) and set up IP addresses in the range of 192.168.1.xxx. Remember to set the default gateway to 192.168.1.1.

For Windows VMs: Control Panel, Network Connections, right-click the network device and choose Properties, double-click Internet Protocol (TCP/IP), set a unique fixed IP within the range of 192.168.1.[2-254], the subnet mask to 255.255.255.0 and your Default gateway to 192.168.1.1. Click OK and OK again to effect the change.

For SLES VMs (Linux): setting IP to 192.168.1.10 in example below:
[ ip addr add 192.168.1.10/24 dev eth0 ]
[ ip link set eth0 up ]
[ ip route add default via 192.168.1.1 ]

You should now be able to ping all your virtual machines (domUs) from dom0 and vice versa.
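
If you also want to confirm that the NAT/masquerading part works once a physical LAN or Wifi link is up, a quick check along these lines should do (addresses are examples):

[ iptables -t nat -L POSTROUTING -n | grep -i masquerade ]   # on dom0: the masquerading rule set up by SuSEfirewall2
[ ping -c 3 192.168.0.1 ]   # from a domU: any address outside 192.168.1.x (e.g. your physical LAN gateway) proves NAT is working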

Have fun!