Xen config file format

Allowing access to the display via the VNC protocol (the vnc option) enables the other VNC-related settings; the default is 1 (enabled). With vncunused, the actual display used can be accessed with xl vncviewer. vncpasswd specifies the password for the VNC server; if the password is set to an empty string, authentication on the VNC server will be disabled, allowing any user to connect. The sdl option presents the display via an SDL window; the default is 0 (not enabled). xauthority specifies the path to the X authority file that should be used to connect to the X server when the sdl option is used. opengl enables OpenGL acceleration of the SDL display; the default is 0 (disabled).
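Taken together, these display options might appear in a guest configuration like the following sketch (the password and path are placeholders):

```
# Illustrative display settings (values are placeholders)
vnc = 1                      # enable VNC access to the guest display
vncunused = 1                # pick the first unused VNC display number
vncpasswd = "s3cret"         # an empty string would disable authentication
sdl = 0                      # no SDL window
xauthority = "/home/user/.Xauthority"  # only consulted when sdl=1
```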

Configure the keymap to use for the keyboard associated with this display. If the input method does not easily support raw keycodes (e.g. this is often the case when using a VNC viewer) then a keymap can be specified here. The specific values which are accepted are defined by the version of the device-model which you are using. See Keymaps below or consult the qemu(1) manpage. The default is en-us. The channel option specifies the virtual channels to be provided to the guest. A channel is a low-bandwidth, bidirectional byte stream, which resembles a serial link.

Typical uses for channels include transmitting VM configuration after boot and signalling to in-guest agents. Please see xen-pv-channel(7) for more details. The defined values are described below.

This parameter is optional. If this parameter is omitted then the toolstack domain will be assumed. The name parameter specifies the name for this device and is mandatory. This should be a well-known name for a specific application (e.g. a guest agent). There is no formal registry of channel names, so application authors are encouraged to make their names unique by including the domain name and a version number in the string (e.g. org.mydomain.guestagent.1).
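A complete channel specification, using the socket connection type described below, might look like this sketch (the name and path are made-up examples):

```
# A guest channel backed by a UNIX socket in the backend domain
# (name and path are illustrative, not registered values)
channel = [ "connection=socket, path=/var/run/guestagent.sock, name=org.mydomain.guestagent.1" ]
```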

With connection=socket, the backend will proxy data between the channel and the connected socket. With connection=pty, the backend will create a pty and proxy data between the channel and the master device; the command xl channel-list can be used to discover the assigned slave device. For the global rdm option: if set to "host", all reserved device memory on this platform should be checked in order to reserve regions in this VM's address space. This global RDM parameter allows the user to specify reserved regions explicitly, and using "host" includes all reserved regions reported on this platform, which is useful when doing hotplug.

By default this isn't set, so we don't check all RDMs. Instead, we just check the RDM specific to a given device if we're assigning that kind of device. The policy sub-option specifies how to deal with conflicts when reserving already reserved device memory in the guest address space.

The strict policy specifies that in case of an unresolved conflict the VM can't be created, or the associated device can't be attached in the case of hotplug. The relaxed policy specifies that in case of an unresolved conflict the VM is allowed to be created, but it may cause the VM to crash if a pass-through device accesses RDM.
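For example, to check all host RDM regions but still allow the guest to start when a conflict cannot be resolved:

```
# Reserve all host reserved-device-memory regions in the guest address
# space; "relaxed" lets the VM boot despite unresolved conflicts (it may
# crash if a passthrough device actually accesses the conflicting RDM)
rdm = "strategy=host, policy=relaxed"
```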

The controller type auto determines whether a kernel-based backend is installed: if this is the case, pv is used, otherwise qusb will be used; for HVM domains devicemodel will be selected. The version option specifies the USB controller version. Possible values are 1 (USB 1.1), 2 (USB 2.0) and 3 (USB 3.0); the default is 2 (USB 2.0), and value 3 (USB 3.0) is available for the devicemodel type only. The ports option specifies the total number of ports of the USB controller; the maximum number is 31 and the default is 8. With the type devicemodel the number of ports is more limited: a USB 1.1 controller always has 2 ports and a USB 2.0 controller always has 6 ports. USB controller ids start from 0.

In line with the USB specification, however, ports on a controller start from 1. If no controller is specified, an available controller:port combination will be used. If there are no available controller:port combinations, a new controller will be created. The port option is valid only when the controller option is specified. The pci option specifies the host PCI devices to pass through to this guest. See xl-pci-configuration(5) for more details. See permissive above. See msitranslate above. See seize above. Enable graphics device PCI passthrough.

Most graphics adapters require vendor-specific tweaks for properly working graphics passthrough. Note that this behaviour is only supported with the upstream qemu-xen device-model. Having multiple RDM entries would worsen this and lead to a complicated memory layout.

Here we're trying to figure out a simple solution to avoid breaking the existing layout when a conflict occurs.
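Putting the passthrough-related options above together, a guest configuration might include the following sketch (BDF numbers, host addresses and port counts are placeholders):

```
# PCI passthrough of a graphics adapter plus an emulated USB controller
pci          = [ "03:00.0, permissive=1" ]   # host device 0000:03:00.0
gfx_passthru = 1                             # primary graphics passthrough
usbctrl      = [ "type=devicemodel, version=2, ports=6" ]
usbdev       = [ "hostbus=1, hostaddr=3, controller=0, port=1" ]
```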

Specifies the host device tree nodes to pass through to this guest. GFN specifies the guest frame number where the mapping will start in the guest's address space.

All of these values must be given in hexadecimal format. If the vuart console is enabled then IRQ 32 is reserved for it. The max_event_channels option limits the guest to using at most N event channels (PV interrupts). Guests use hypervisor resources for each event channel they use. The default of 1023 should be sufficient for typical guests. The maximum value depends on what the guest supports.

Guests supporting the FIFO-based event channel ABI support up to 131,071 event channels; other guests are limited to 4095 (64-bit x86 and ARM) or 1023 (32-bit x86). See the display protocol for details. Restrict the device model after startup, to limit the consequences of security vulnerabilities in qemu. The vsnd option specifies the virtual sound cards to be provided to the guest. The virtual sound card has a hierarchical structure: every card has a set of PCM devices and streams, each of which can be configured individually. A child item is treated as belonging to the previously defined parent item.

There is a group of parameters which are common to all items. This group can be defined at a higher level of the hierarchy and be fully or partially re-used by the underlying layers. These parameters are described below. Every underlying layer in turn can re-define some or all of them to better fit its needs.

For example, a card may define the number of channels to be in the [1; 8] range, while some particular stream may be limited to [1; 2] only. The rule is that the underlying layer must be a subset of the upper layer's range.

Arm only. Set the TEE type for the guest; "none" is the default value. OP-TEE itself may limit the number of guests that can concurrently use it. Either kernel or bootloader must be specified for PV guests. Append ARG(s) to the arguments passed to the bootloader program. Alternatively, if the argument is a simple string then it will be split into words at whitespace (this second option is deprecated). The e820_host option selects whether to expose the host e820 memory map to the guest via the virtual e820. When this option is false (0) the guest pseudo-physical address space consists of a single contiguous RAM region.

When this option is specified the virtual e820 instead reflects the host e820 and contains the same PCI holes. The total amount of RAM represented by the memory map is always the same; this option configures only how it is laid out. Exposing the host e820 to the guest gives the guest kernel the opportunity to set aside the required part of its pseudo-physical address space in order to provide address space in which to map passed-through PCI devices. Whether this option is required is guest Operating System dependent; specifically, it is required when using a mainline Linux "pvops" kernel.

This option defaults to true (1) if any PCI passthrough devices are configured and false (0) otherwise. If you do not configure any passthrough devices at domain creation time but expect to hotplug devices later then you should set this option.
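For instance, a PV guest that will have PCI devices hotplugged after boot could set:

```
# Expose the host e820 memory map so the guest kernel leaves holes
# in its pseudo-physical address space for passed-through PCI devices
e820_host = 1
```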

Conversely, if your particular guest kernel does not require this behaviour then it is safe to allow it to be enabled, but you may wish to disable it anyway. Note: multiple options can be given and will be attempted in the order they are given. If this mode is specified xl adds an emulated IDE controller, which is suitable even for older operating systems.

It decreases boot time but may not be supported by default in older operating systems, e.g. Windows XP. The following options control the mechanisms used to virtualise guest memory. The defaults are selected to give the best results for the common cases, so you should normally leave these options unspecified. Turns "hardware assisted paging" (the use of the hardware nested page table feature) on or off. Use of HAP is the default when available. Turns "out of sync pagetables" on or off.

However, this may expose unexpected bugs in the guest, or find bugs in Xen, so it is possible to disable this feature. Use of out of sync page tables, when Xen thinks it appropriate, is the default. The shadow_memory option gives the number of megabytes to set aside for shadowing guest pagetable pages (effectively acting as a cache of translated pages) or to use for HAP state. You should not normally need to adjust this value. However, if you are not using hardware assisted paging (i.e. you are using shadow mode) and your guest workload consists of a very large number of similar processes then increasing this value may be required. The following options allow various processor and platform level features to be hidden or exposed from the guest's point of view.

This can be useful when running older guest Operating Systems which may misbehave when faced with more modern features. In general, you should accept the defaults for these options wherever possible. Select the virtual firmware that is exposed to the guest. By default, a guess is made based on the device model, but sometimes it may be useful to request a different one, like UEFI. Override the path to the blob to be used as the BIOS.

You should not normally need to specify this option. PAE is required if you wish to run a 64-bit guest Operating System. In general, you should leave this enabled and allow the guest Operating System to choose whether or not to use PAE. x86 only. This option is enabled by default and usually you should omit it. This option is true by default for x86 and false for ARM. True (1) by default. False (0) by default.

This option has no effect on a guest with multiple virtual CPUs as they must always include these tables. You can find out details of the Debian installation process from the Debian documentation.

If you've got any hardware you're not sure open source drivers are available for, you may want to install non-free firmware files via:. We've still got a few more steps to complete before we're ready to launch a domU, but let's install the Xen Project software now and use it to check the BIOS settings.

All of this can be installed via an Apt meta-package called xen-linux-system. A meta-package is basically a way of installing a group of packages automatically. Apt will of course resolve all dependencies and bring in all the extra libraries we need.
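On a 64-bit Debian system the install step is typically a single command (the -amd64 suffix is an assumption based on your architecture; adjust it to match yours):

```
# apt-get update
# apt-get install xen-linux-system-amd64
```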

Now we have a Xen Project hypervisor, a Xen Project kernel and the userland tools installed. When you next boot the system, the boot menu should include entries for starting Debian with the Xen hypervisor. One of them should be highlighted, to start Xen by default. Do that now, logging in as root again.

Next, let's check to see if virtualization is enabled in the BIOS. There are a few ways to do that. The most comprehensive is to review the Xen section of dmesg created during the boot process. This will be your first use of xl, the very versatile Xen tool, which we will come back to shortly to create and manage domUs:.

If nothing comes back and you think it should, you may wish to look through the flags yourself:. If the virtualization extensions don't appear, take a closer look at the BIOS settings.
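A quick way to look through the flags yourself is a sketch like this (vmx indicates Intel VT-x, svm indicates AMD-V):

```shell
# Print the first hardware-virtualization flag found in /proc/cpuinfo.
# "vmx" = Intel VT-x, "svm" = AMD-V; a fallback message is printed when
# neither is present (or the extensions are disabled in the BIOS).
grep -oEm1 'vmx|svm' /proc/cpuinfo || echo "no virtualization flags found"
```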

A few round-trips through the BIOS are often required to get all the bits working right. LVM is a technology that allows Linux to manage block devices in a more abstract manner.

Because of this abstraction, logical volumes can be created, deleted, resized and even snapshotted without affecting other logical volumes. LVM creates logical volumes within what is called a volume group, which is simply a pool of physical storage built from one or more underlying block devices, known as physical volumes.

The process of setting up LVM can be summarized as: allocating a physical volume, creating a volume group on top of it, then creating logical volumes to store data. Because of these features and its superior performance over file-backed virtual machines, we recommend using LVM if you are going to store VM data locally.
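Assuming a spare partition /dev/sdb1 (a placeholder; substitute your own device, group and volume names), those three steps look like:

```
# pvcreate /dev/sdb1                  # allocate a physical volume
# vgcreate vg0 /dev/sdb1              # create a volume group on top of it
# lvcreate -L 10G -n guest-disk vg0   # a 10 GiB logical volume for a guest
```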

Ok, now LVM has somewhere to store its blocks (known as extents, for future reference). Now LVM is set up and initialized so that we can later create logical volumes for our virtual machines. There is more on LVM on Debian here. If you already have a volume set up that you would like to copy, LVM has a cool feature that allows you to create a CoW (copy-on-write) clone, called a snapshot.

This means that you can make an "instant" copy that will only store the changes compared to the original. There are a number of caveats to this that will be discussed in an as-yet-unwritten article. The most important thing to note is that the "size" of the snapshot is only the amount of space allocated to store changes.
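For example, a 1 GiB snapshot of a much larger origin volume (names are placeholders):

```
# lvcreate -s -L 1G -n guest-disk-snap /dev/vg0/guest-disk
```

Only writes that diverge from the origin consume the 1 GiB; note that a snapshot becomes invalid if it fills up, so size long-lived snapshots generously.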

So you can make the snapshot "size" a lot smaller than the source volume. Next we need to set up our system so that we can attach virtual machines to the external network. This is done by creating a virtual switch within dom0. The switch will take packets from the virtual machines and forward them on to the physical network so they can see the internet and other machines on your network. The piece of software we use to do this is called the Linux bridge and its core components reside inside the Linux kernel.

In this case, the bridge acts as our virtual switch. The Debian kernel is compiled with the Linux bridging module so all we need to do is install the control utilities:.

Management of the bridge is usually done using the brctl command. The network configuration itself lives in /etc/network/interfaces; open this file with the editor of your choice.

If you selected a minimal installation, the nano text editor should already be installed. Open the file:. If you get nano: command not found, install it with apt-get install nano. If you are using static addressing you probably know how to set that up. As well as adding the bridge stanza, be sure to change dhcp to manual in the iface eth0 inet manual line, so that the IP address (Layer 3) is assigned to the bridge, not the interface.
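For a DHCP-configured host the resulting /etc/network/interfaces might read (eth0 and xenbr0 are the conventional names; yours may differ):

```
auto lo
iface lo inet loopback

iface eth0 inet manual

auto xenbr0
iface xenbr0 inet dhcp
    bridge_ports eth0
```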

Now restart networking (for a remote machine, make sure you have a backup way to access the host if this fails):. If all is well, the bridge will be listed and your interface will appear in the interfaces column:.

If the bridge isn't operating correctly, go back and check the edits to the interfaces file very carefully. Reboot before continuing. During the reboot, note the list of OS choices and check what the default start-up choice is. If the start-up default is fine, skip the next section and go directly to Basic Xen Project Commands.

GRUB, the bootloader installed during installation, tells the computer which operating system to start and how. This is used in the host by the vif-route hotplug script. See the wiki for guidance and examples. If the domain is an HVM domain then the associated emulated tap device will have a "-emu" suffix added.

Specifies the hotplug script to run to configure this device e. What, if any, effect this has depends on the hotplug script which is configured.

A typical behaviour exhibited by the example hotplug scripts, if this is set, might be to configure firewall rules to allow only the specified IP address to be used by the guest (blocking all others). Specifies the backend domain to which this device should attach. This defaults to domain 0. Specifying another domain requires setting up a driver domain, which is outside the scope of this document.

Specifies the rate at which outgoing traffic will be limited. The default, if this keyword is not specified, is unlimited.
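A vif specification combining these options might look like this sketch (the MAC address is a placeholder in the Xen-assigned 00:16:3e range):

```
# One NIC attached to xenbr0, outgoing traffic capped at 10 Mb/s
vif = [ 'mac=00:16:3e:aa:bb:cc, bridge=xenbr0, rate=10Mb/s' ]
```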


