Virtual Guru's Blog – Home of Virtualization Workshops

January 12, 2010

OpenSolaris 200906 Virtualization Assistant Live-CD Proof of Concept

Filed under: jeos, opensolaris, Virtualization — natiku @ 4:13 am

Create an OpenSolaris 200906 helper Live-CD Proof of Concept that can help End Users and Developers deliver VM Images/Templates/Appliances faster, based on the current mainline OpenSolaris 200906 release.
The VA Live-CD will, as a PoC (Proof of Concept), demonstrate all known and needed steps of VM creation as modules and/or their recipes, using the build process of the OpenSolaris 200906 JeOS VM install and creation itself as the sample.

Content

  1. OpenSolaris 200906 Virtualization Assistant Live-CD PoC
  2. Project Motivations
  3. Target Audience of OpenSolaris 200906 Virtualization Assistant Live-CD Prototype
  4. Planned Functionality of OpenSolaris 200906 Virtualization Assistant Live-CD PoC
    1. Virtualization Assistant Live-CD PoC Deliverables: Media and Repository
      1. Virtualization Assistant Live-CD PoC Media Download
      2. Virtualization Assistant Live-CD PoC Mercurial repository
    2. VA Live-CD Functionality – Separation into 2 sub-projects
    3. Content
    4. OpenSolaris 200806 Migration Assistant Live-CD
    5. OpenSolaris 200906 VA Live-CD Main Design Concepts
      1. AI Client in Live Media Based Rich Recovery Console
      2. Wrapping Functionality in “Modules”
      3. Modules Use Critical Check Points and Accumulated Return Values
      4. Overlays for delivering VA Live-CD, AI and Install framework Fast Fixes
      5. Include DC and all Build Recipe on Live-CD Media Itself
      6. Only Virtualization Platforms and x86 32bit support (and SPARC LDoms)
      7. Maximally Leverage OpenSolaris 200906 Distro Constructor
      8. Include Headless Mode with SERIAL port access (OS Kernel and Grub Menu)
      9. Include SSH key based authentication for better scripting
      10. Support for Alternate Install Scenarios: Fully Local Installation
      11. Support for Alternate Install Scenarios: ZFS Flash like install
      12. Setting up JeOS logins (not default OpenSolaris ones)
    6. Part 1: Building VA Live-CD Media with Distro Constructor
      1. VA Live-CD Media Distro Constructor Recipe
      2. VA Live-CD Media Distro Constructor Helper Files
      3. Customize VA Live-CD by Generating Clone from Live-CD
        1. Sample 1: Regeneration of VA Live-CD with Customizations (lofs)
        2. Sample 2: Generation of VA Live-CD from itself with added WiFi Intel drivers
      4. VA Live-CD Media Costs and Sizes (Architecture Comparisons)
    7. Part 2: Virtual Assistant Live-CD Modules for VM Builders
      1. What are VA Live-CD modules and where do they come from?
      2. Module for testing main VA Live-CD functionality
        1. Selecting Best Strategy for Repeatable AI based JeOS Installations
      3. Module with common functionality
      4. Module for Local IPS Repository Recipe
      5. Module for Installing OpenSolaris 200906 JeOS
      6. Module for cleaning OS Runtime Data
      7. Module for HW arch change and HW Reconfiguration
      8. Module for securing installed OS instance
      9. Module shrinking disk with ZFS send/receive with 2 disks
      10. Module shrinking disk with ZFS send/receive with archive NFS
      11. Module for Virtual Hardware support
    8. VA Live-CD and JeOS Support for 10+ Most Popular Virtualization Platforms
      1. List of Known-to-Work Virtualization Platforms (x86, x86-64)
      2. List of Known-to-Work Virtualization Platforms (SPARC SUN4V)
      3. HW Arch Check: xVM Hypervisor Para mode
      4. HW FAST FIX: VGIRUni network driver for Parallels
        1. Create local IPS repository on port 80
        2. Publish LOCALni into IPS repository
      5. HW FAST FIX: Update Dnet driver from B113
    9. Creating “instance” zpool for Live-CD custom data


    Project Motivations

    During the creation of OpenSolaris 200811 JeOS we created a JeOS-based V2V (Virtualization-to-Virtualization) Live-CD x86 Migration Assistant (105MB download size). Now that the Automated Installer functionality has been significantly improved in OpenSolaris 200906, we can build a more advanced PoC on OpenSolaris 200906 JeOS-based Live-CD media, which allows a direct repository install of a fully CLI-mode OpenSolaris 200906 installation, without the need to set up a local AI install server.

    Target Audience of OpenSolaris 200906 Virt Assist Live-CD Prototype

    • The primary audience are OpenSolaris 200906 Virtualization End Users who need to solve their VM creation, distribution and customization tasks.
    • The secondary audience are OpenSolaris Developers who would like to experiment with the CLI Install framework from a Proof-of-Concept point of view,
      both for OpenSolaris development and for automating VM Images/Templates/Appliances tasks.

    Planned Functionality of OpenSolaris 200906 Virt Assist Live-CD PoC

    The primary functionality is to be able to install the OpenSolaris 200906 JeOS Prototype with optional customizations as part of the OpenSolaris JeOS project.

    VM Creator Focused
    • A) Small “Recovery Console”-like functionality for virtual environments, based on the OpenSolaris 200906 JeOS package list
    • B) Easy iPKG/Repo AI-Client-based install of OpenSolaris 200906 JeOS into any of the 10+ most popular Virtualization Platforms, with optional customization options
    • C) Easy cleaning of installed OpenSolaris systems with ZFS snapshots/ZFS streaming
    • D) Easy ZFS stream install of an OpenSolaris 200906 JeOS ZFS snapshot into any of the 10+ most popular Virtualization Platforms
    • E) Easy HW migration between the 10+ most popular Virtualization Platforms (skipped due to its marketing sensitivity – needs approval to be implemented)
    VM Developer Focused
    • I. Will display the IP on boot, so developers don’t need to log in to discover dynamic DHCP/IP assignments
    • II. SSH login will be allowed only with keys, and a mechanism will be implemented to deliver the SSH key and instance data on a virtual disk “instance” zpool
    • III. Will be able to clone itself for customization with an AI client profile, and create a Distro Constructor VM environment for VA Live-CD customizations (add 64-bit, add more drivers, add bug fixes)
    • IV. Will be able to create VM PKG REPO freezes with just selected package lists for fully local development
    • V. Call-home feature to deliver news to users/developers and gather usage stats

    The Virtual Assistant Live-CD PoC is designed to off-load from VM creators the tasks in the VM Images / Templates / Appliances build process that are not payload/content related,
    especially Steps 1 and 4–7, with a focus on their possible future automation in the standardized environment of the Virtual Assistant Live-CD PoC.

    See the whole 7-step process in depth in the Virtual Appliances Workshop.

    We will use the OpenSolaris 200906 JeOS creation process to demonstrate the most important steps in VM creation and, if time permits, we will also try to create a sample application-based VM Template using the Virtual Assistant Live-CD PoC to speed up the process.

    We will also treat the OpenSolaris 200906 Virtual Assistant Live-CD Proof of Concept as a sample of the improvements that are needed, or welcome, in the next releases of the Distro Constructor and Automated Installer, plus the upcoming VM Constructor.

    Virtualization Assistant Live-CD PoC Deliverables: Media and JeOS Repository

    Virtualization Assistant Live-CD PoC Media Download

    Note: This Live-CD is designed to boot only on the 10+ most popular Virtualization Platforms, the same ones used for the JeOS Prototype; you can customize the Live-CD by regenerating an updated one directly from the Live-CD itself.

    Virtualization Assistant Live-CD PoC & JeOS Mercurial repository

    Web interface


    Getting a local copy with Mercurial

    Getting the OpenSolaris JeOS project repository with the Mercurial client:

    pkg install SUNWmercurial
    mkdir ~/jeosprotorepo
    cd  ~/jeosprotorepo
    hg clone  ssh://anon@hg.opensolaris.org/hg/va-live-media
    

    Note: Scripts in this repository demonstrate a Proof of Concept, and in many cases they simply assume default values or system configurations.

    VA Live-CD Functionality – Separation into 2 sub-projects

    OpenSolaris 200806 Migration Assistant Live-CD

    For inspiration about what was already done, see the v2v_migration_assitant_live-cd.iso OpenSolaris 200806 Migration Assistant Live-CD ISO (105MB) (internal Sun link); it is mostly just a recovery-console-like image, but it demonstrates all the principles.

    Logins
    • User: osol / justone1
    • To get root, type: pfexec su - root

    Yes, unlike the default AI image, we already give the default user the root role.

    OpenSolaris 200906 VA Live-CD Main Design Concepts

    The VA Live-CD prototype is focused on the current (actual) OpenSolaris 200906 release, so many scripts and other deliverables described below must be treated just as Proofs of Concept, which can be immediately “smelled and touched”.

    The design concepts below must also be seen in light of the main task – building a good OpenSolaris 200906 JeOS in a scripted way – and usage of the described concepts in other contexts must be carefully re-evaluated and maybe even re-implemented after practical hands-on experience.

    AI Client in Live Media Based Rich Recovery Console

    Installing a customized AI manifest from live-booted CD media is the main functionality of the VA Live-CD. The original PXE/WANBOOT-based AI images are focused on unattended installations, with the technical limitations of network-based boots, where all “media” must fit in memory during execution. This puts strong pressure on media size and, in turn, limits the number of packages we can effectively add to network-based AI install media.

    When we put the AI client on a Live-CD, the size pressure is no longer so strong, because most of the Live-CD content stays mounted back from the media. This opens space to put a much richer user environment on the media, so when users need to drop to a shell for some debugging, post-install customization or fix, all the needed tools are there for a wide user audience, from the experienced Solaris administrator to the Linux point-and-click one. Experience from the last months shows that “drop to shell” is used quite often in virtualization-related projects, because of the “rapidly changing” nature of the virtualization platforms themselves.
    In the context of the VA Live-CD I define this richer userland as a subset of the JeOS packages; mainly I removed some high-level programming stuff. All debugging and userland communication tools, docs and the GNU/Linux-friendly environment are preserved.

    Such a rich userland environment is a good starting point for direct evaluations and for future VA-Live-CD-like media development. The current ISO media size of the VA Live-CD is ~110MB for the x86 32-bit, 10+-virtual-platforms-centric prototype; for the SPARC SUN4V-centric PoC the size is around ~225MB. (There is really not that much more stuff on the SPARC media; the media build process is just different. I will investigate it.)

    Note: On the VA Live-CD the AI client will not start automatically by default; we will offer a control and access mechanism (SSH with key-based login and a pre-loaded, user-generated key) to run the service when we need scripted or automated AI client execution.

    Wrapping Functionality in “Modules”

    The VA Live-CD and its DC build recipe will be the execution environment for different VM-template-task-related modules, each implementing a particular function. This introduces more flexibility, since as part of various prototypes and Proofs of Concept we can even deliver additional (alternate) modules for similar functionality.

    Modules Use Critical Check Points and Accumulated Return Values

    Build scripts in modules are prototypes; they are not planned to be productized, so where needed we use only a minimum of “Critical Check Points”, instead of intensive error checking after each command.

    When needed, we use a variable with Accumulated Return Values to check a whole sequence of commands.
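    A minimal sketch of that accumulated-return-values pattern; the step names here are placeholders, not real module functions:

```shell
# Accumulate exit codes across a sequence, then check once at the end
# instead of testing after every command.
RV=0

step_one()   { true;  }   # placeholder step: succeeds (exit 0)
step_two()   { false; }   # placeholder step: fails   (exit 1)
step_three() { true;  }

# '|| RV=...' adds the exit code only on failure (success adds nothing)
step_one   || RV=$((RV + $?))
step_two   || RV=$((RV + $?))
step_three || RV=$((RV + $?))

# single critical check point for the whole sequence
if [ "$RV" -ne 0 ]; then
    echo "sequence failed: accumulated RV=$RV" >&2
fi
```

    A nonzero accumulated value tells you that at least one step failed, without cluttering the prototype script with per-command checks.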

    Overlays for delivering VA Live-CD, AI and Install framework Fast Fixes

    We will make all needed VA Live-CD modifications, such as those to the AI and install frameworks, on the fly, so we can always fall back to the original, fully supported versions. We use an overlay trick to implement this:

    mount -F lofs -O /tmp/liborchestrator.so.1 /usr/snadm/lib/liborchestrator.so.1
    execute AI client install related code here
    umount /usr/snadm/lib/liborchestrator.so.1
    

    This principle can be applied to whole directories too.

    Include DC and all Build Recipe on Live-CD Media Itself

    The main idea here: when I need to fix something on the VA Live-CD that can’t be fixed with overlays, like drivers, I don’t need to go back to the original DC build environment; I just need “a” disk with a zpool, and I can trigger a new build directly from the booted Live-CD itself.

    Only Virtualization Platforms and x86 32bit support (and SPARC LDoms)

    This is purely motivated by reduced media size, and it also has a positive effect on media complexity and scope (number of packages and build time).

    • Only 10+ x86 Virtualization Platforms (a limited number of drivers saves space and also prevents users from running the VA Live-CD media prototype on real HW, where they could delete their stuff)
    • Only 32-bit support (this is purely a download-size issue; we save about 35–50MB. All the 64-bit stuff is still available, the limitation is only on build time, and users can rebuild with 64-bit support even from the Live-CD itself)
    • LDoms (SPARC) 64-bit environments, but effectively only SUN4V, to save media size; we now have PoC versions for LDoms 1.2

    Maximally Leverage OpenSolaris 200906 Distro Constructor

    • We will try to reuse as much code and as many scripts as possible from Live-CD creation by utilizing the original 200906 Distro Constructor; it already worked well for OpenSolaris 200811.
    • Harmless messages during the Live-CD DC build process will be ignored; however, I will try to fix all issues in CLI-based Live-CD execution, so boot time will be clean of nasty no-GUI error messages.

    Include Headless Mode with SERIAL port access (OS Kernel and Grub Menu)

    With it we can leverage VirtualBox headless mode. GRUB will be configured to support both display and serial, with display as the default.
    We can easily script serial access to select the right predefined menu entry with serial output enabled in the kernel; not only the kernel but also GRUB will use the serial console.
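    An illustrative menu.lst fragment for such a setup (the paths, serial unit and `ttya` console device are assumptions here, not the shipped configuration); GRUB legacy drives both outputs, with the first-listed terminal as the default:

```
serial --unit=0 --speed=9600
terminal --timeout=10 console serial

title OpenSolaris VA Live-CD (serial console)
kernel /platform/i86pc/kernel/unix -B console=ttya
module /boot/x86.microroot
```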

    Include SSH key based authentication for better scripting

    I implemented SSH key-based login, where keys are stored on and loaded from an additional virtual disk, so we can use network-based access (and scripting) more easily and securely.
    Attaching an additional virtual disk also gives us a chance to send the VA Live-CD some data, in case we want some scripted/customized functionality – in OVF instance-data style.

    Support for Alternate Install Scenarios: Fully Local Installation

    Making 50 installations per day, I found that I can significantly speed up installs by having the repository fully local in the VM (on a local disk and on the localhost network).
    This concept can be evaluated for smaller CLI-based install profiles – where we put a fully local IPS repository directly on the local media. Linux-server-style live CD installs?

    Support for Alternate Install Scenarios: ZFS Flash like install

    During the OpenSolaris 200811 JeOS work, a ZFS-Flash-like install was used to shrink the disk size of ZFS-based installations, and also as a fast mitigation for HW change issues.
    In OpenSolaris 200906 JeOS we would like to reuse the experience from the Solaris 10 ZFS Flash implementations and script various ZFS-Flash-like installations.

    Setting up JeOS logins (not default OpenSolaris ones)

    • The user will be “osol”, password “justone1”, root password “osol0906”. This way we highlight that this project is a Proof of Concept with potentially limited functionality and support.

    PS: The current DC doesn’t have the functionality to change credentials, so I had to make a couple of DC build-time hacks to the live media boot services to achieve this.

    Part 1: Building VA Live-CD Media with Distro Constructor

    The VA Live-CD media serves 3 main purposes:

    • Recovery Console = for debugging issues with OpenSolaris installations on the 10+ most popular Virtualization Platforms; the needed set of programs is based on JeOS
    • Install and Build = for installing the OS with AI in IPS format, for installing SW in various formats, and for all additional VM creation operations like disk optimization (archivers)
    • Customize Self = for creating new customized VA Live-CD media, for example with new drivers or with new (added) Virtual Assistant modules (for example, with security hardening recipes)

    VA Live-CD Media Distro Constructor Recipe

    Docs: Distro Constructor Guide and Slim Install and Live-CD TOI

    The main file is the DC XML manifest va_template_live-cd.xml, located on the VA Live-CD media in /usr/share/distro_const/virt_assist/; 2 different XML manifests are created from it with the va_template_gen.sh script.

    When you want to customize the Live-CD itself, you need to create the zpool buildpool; see more information about the useful wrapper scripts below in Customize VA Live-CD by Generating Clone from Live-CD.

    For x86 architecture (Run on x86 system – VirtualBox 3.x)
    cd /usr/share/distro_const/virt_assist
    distro_const build va_x86_live-cd.xml
    
    For SPARC architecture (Run on SPARC system – LDoms 1.x)
    cd /usr/share/distro_const/virt_assist
    distro_const build va_sparc_live-cd.xml
    

    The DC build process consists of these steps and their respective scripts:

    • im-pop (DC original) – Populate the image with packages
    • im-mod (DC original) – Image area modifications
      /usr/share/distro_const/pre_bootroot_pkg_image_mod
    • va-im-mod (VA proto) – VA pkg image modifications; 99% of the VA Live-CD modifications live here, and all needed steps are commented in the code
      /usr/share/distro_const/virt_assist/va_pre_bootroot_pkg_image_mod
    • va-im-sun4v (VA proto) – VA pkg image modifications (SUN4V cleaning); here I try to strip SPARC down to just the SUN4V arch
      /usr/share/distro_const/virt_assist/va_pre_bootroot_pkg_image_sun4v
    • slim-im-mod (DC original) – Slim CD image area modifications
      /usr/share/distro_const/slim_cd/slimcd_pre_bootroot_pkg_image_mod
    • br-init (DC original) – Boot root initialization
      /usr/share/distro_const/bootroot_initialize.py
    • slim-br-config (DC original) – Slim CD boot root configuration
      /usr/share/distro_const/slim_cd/slimcd_bootroot_configure
    • va-br-config (VA proto) – VA boot root configuration; just a small fix for the login names
      /usr/share/distro_const/virt_assist/va_bootroot_configure
    • br-config (DC original) – Boot root configuration
      /usr/share/distro_const/bootroot_configure
    • va-post-br-config (VA proto) – VA post boot root configuration; just fixes the hostname at this stage
      /usr/share/distro_const/virt_assist/va_post_bootroot_configure
    • br-arch (DC original) – Boot root archiving (32-bit for x86, sparc for SPARC)
      /usr/share/distro_const/bootroot_archive
    • slim-post-mod (DC original) – Slim CD post-bootroot image area modification
      /usr/share/distro_const/slim_cd/slimcd_post_bootroot_pkg_image_mod
    • va-grub-setup (VA proto) – VA GRUB menu setup (x86); we just generate CLI-based Live-CD GRUB menus on x86 platforms
      /usr/share/distro_const/virt_assist/va_grub_setup
    • post-mod (DC original) – Post-bootroot image area modification
      /usr/share/distro_const/post_bootroot_pkg_image_mod
    • iso (VA proto) – ISO image creation; just adds the code to generate SPARC ISOs
      /usr/share/distro_const/va_x86_sparc_create_iso

    VA Live-CD Media Distro Constructor Helper Files

    We need some helper files to make the build scripts more readable and smaller.

    Here is the structure of the helper files, located on the VA Live-CD media in /usr/share/distro_const/virt_assist/:

    • README.txt (file) – Small basic info
    • LICENSE (file) – All stuff is CDDL
    • .build (file) – Build number to track versions (to do: add the build number to the CD name automatically – 32-character limitation!)
    • customize.me (directory) – Samples of how the VA Live-CD can rebuild itself with user changes
    • fastfixes (directory) – Files with fast fixes copied onto the VA Live-CD during the build process
    • license.media (directory) – Pre-generated license for the VA Live-CD (in the style approved for JeOS)
    • addons (directory) – Add-on scripts and files, some of them shared with JeOS
    • localrepo (directory) – Repository with fast fixes in IPS format (now the LOCALni driver; add dnet in the future? common stuff with JeOS?)
    • localrepoup.sh (script) – Starts the local IPS repository; we need it for the (re)build process
    • .pkgsnames-VA.i386.lst (list) – Full list of packages (x86) on the VA Live-CD, generated during the build process
    • .pkgsfmris-VA.i386.lst (list) – Full list of FMRIs (x86) on the VA Live-CD, generated during the build process
    • .pkgsnames-VA.sparc.lst (list) – Full list of packages (SPARC) on the VA Live-CD, generated during the build process
    • .pkgsfmris-VA.sparc.lst (list) – Full list of FMRIs (SPARC) on the VA Live-CD, generated during the build process
    • org.files (directory) – Original files for the DC and live media stuff, with diffs for quick comparisons
    • porting (directory) – Some info from porting to SPARC (SUN4V LDoms)
    • sshvmkeys (directory) – Archive with SSH keys for the SSH access mechanism

    Customize VA Live-CD by Generating Clone from Live-CD

    The VA Live-CD will be able to clone itself and create a new ISO with additional packages – in our case, on x86, with Intel WiFi drivers; the sample is in /usr/share/distro_const/virt_assist/customize.me/addwifidrv.i386.sh

    For this module you need a VM configured with: VCPU=1, Memory=1024MB, VDISK=4GB, NET. This module must be executed in the context of user ‘root’ or a privileged user.

    Hint: To make a clone you need networking; it is best to wait until you see the IP: xxx.xxx.xxx.xxx message on the console before you log in. (In a script you can check for the presence of the /.nwamhaveip file.)
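    A small sketch of scripting against that hint: poll for the /.nwamhaveip marker before doing any network work. The flag path and retry count are parameters here so the helper can be exercised against any path; the function itself is an illustration, not part of the shipped media:

```shell
# Wait until NWAM has obtained an address (signalled by the marker file)
# before running network-dependent steps like regenerate-cd.sh.
wait_for_ip() {
    flag=${1:-/.nwamhaveip}   # marker file created once an IP is assigned
    tries=${2:-60}            # give up after this many 1-second polls
    while [ ! -f "$flag" ] && [ "$tries" -gt 0 ]; do
        sleep 1
        tries=$((tries - 1))
    done
    [ -f "$flag" ]            # exit status: 0 = we have an IP
}
```

    A script would then call `wait_for_ip || exit 1` before starting the clone.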

    You need to create the zpool buildpool on a local disk for the DC build area.

    Creating the zpool ‘buildpool’ for the DC build process:

    1. Boot the VA Live-CD in a VM with an attached empty (new) 4GB virtual disk
    2. Find the available disks with the test_td command:
       test_td -d all
    Disk discovery
    Total number of disks: 3
    ---------------------------------
    num |    name|  ctype|size [MB]|
    ---------------------------------
    1 |* c4t0d0|unknown|     4096|
    

    3. Create the zpool buildpool on the selected disk (here c4t0d0):

       zpool create -f buildpool c4t0d0
    

    4. Now re-run the custom build process by running “regenerate-cd.sh”

    Notes
    • We use the zpool ‘buildpool’ so you can restart the DC build process from a previous step; you can add the parameter ‘-l’ or ‘-r step’ to the sample scripts
    • During the DC build process you may see many harmless messages; see ‘harmless.txt’

    Sample 1: Regeneration of VA Live-CD with Customizations (lofs)

    Because the /usr/… area on the Live-CD is read-only (RO), you need to copy the module you want to modify to memory (/tmp) first, and then mount the module back with the lofs file system:

    1. mkdir /tmp/distro_const
    2. cp -a /usr/share/distro_const/virt_assist/ /tmp/distro_const/virt_assist/
    3. mount -F lofs -O /tmp/distro_const/virt_assist/ \
    /usr/share/distro_const/virt_assist/
    4. Customize code in /usr/share/distro_const/virt_assist/
    5. Put custom build info in /usr/share/distro_const/virt_assist/.build
    6. Regenerate VA-Live-CD with:
    /usr/share/distro_const/virt_assist/customize.me/regenerate-cd.sh
    

    Sample 2: Generation of VA Live-CD from itself with added WiFi Intel drivers (for x86)

    Adds the WiFi drivers (SUNWiwh, SUNWiwi, SUNWiwk) and WiFi tools (SUNWwlan, SUNWwpa), with the editing done in /tmp:

    /usr/share/distro_const/virt_assist/customize.me/addwifidrv.i386.sh

    VA Live-CD Media Costs and Sizes (Architecture Comparisons)

    x86 (build 079, 129 PKGs, optimization: x86 32-bit boot archive only)
    • ISO size: 124MB; download size (zip -9): 108MB
    • solaris.zlib (lzma): 73MB; misc.zlib (lzma): 4.7MB
    • Root disk: 27M (x86.microroot.gz), 90MB (x86.microroot)
    • Root disk usage (size | used | avail | use%): 85 863 | 74 930 | 10 933 | 88%
    • Memory used after login: 250MB

    SPARC (build 079, 128 PKGs, optimization: SUN4V, LDoms)
    • ISO size: 194MB; download size (zip -9): 125MB
    • solaris.zlib (lzma): 66MB; misc.zlib (lzma): 4.2MB
    • Root disk: 113M (boot_archive)
    • Root disk usage (size | used | avail | use%): 107 479 | 68 559 | 38 920 | 64%
    • Memory used after login: 334MB
    Notes
    • The original SPARC VA Live-CD media was more than 224MB!
    • On SPARC the 113MB boot archive compresses to 56M with zip -9; after zeroing it I get 39M with zip -9, so we can save 17MB of ZIP download size just by zeroing the root disk
    • Memory usage looks good; hopefully we can install with just 512MB, having all the SWAP/DUMP stuff on ZFS
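    The zeroing trick from the note above can be sketched as follows. The function name and the optional MB cap are additions for safe experimentation (omit the cap to fill all free space, which is what you want before compressing a disk image):

```shell
# Fill the free space of a filesystem with zeros, then delete the file.
# The freed blocks stay zeroed, so zip/lzma compress the disk image far
# better. $1 = directory on the target filesystem, $2 = optional MB cap.
zero_free_space() {
    dir=${1:-/}
    cap=${2:-}
    if [ -n "$cap" ]; then
        dd if=/dev/zero of="$dir/zerofile" bs=1024k count="$cap" 2>/dev/null || true
    else
        # no cap: dd runs until the filesystem is full, then fails (expected)
        dd if=/dev/zero of="$dir/zerofile" bs=1024k 2>/dev/null || true
    fi
    sync                     # make sure the zeros hit the disk image
    rm -f "$dir/zerofile"    # free the space again; blocks remain zeroed
}
```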

    Part 2: Virtual Assistant Live-CD Modules for VM Builders

    Modules are directories on the VA Live-CD under /usr/share/virt_assist/; they represent recipes for the individual VM building tasks. In most cases modules are sample scripts, but they can also be execution steps, or even links to the actual information if it has not been scripted yet.

    Important note: Some of the modules stretch the boundaries of what is implemented in the standard OpenSolaris 200906 release; they implement workarounds. Most of these map to issues reported during the 200811 development, and such workarounds will not be needed once the issues are fixed in the DEV builds of 201002, or once they are delivered as new functionality, for example ZFS Flash support.

    What are VA Live-CD modules and where do they come from?

    The main idea of the VA Live-CD is modules with recipes that demonstrate some basic functionality needed for building a good VM, functionality that can potentially be automated in the future. These modules with recipes don’t need to be tied to any future automation environment; they can also be optional steps, like Solaris hardening, or for now even just links to information sources. The main reason is that I don’t yet know which modules will be appropriate to automate, nor whether their functionality will be needed in future releases – they may be superseded by functionality implemented directly in the next OpenSolaris releases.

    VA Live-CD is in this stage a pre-step for automation in chain as we normally do:

    1. Try it manually first
    2. Make a basic WiKi with steps
    3. Make a rich WiKi which can be processed
    4. Make a one purpose PoC Proof-of-Concept script
    5. Analyze most common use cases
    6. Analyze potential customization parameters
    7. Automate with more advanced scripting

    Our experience is that if we focus on automation at too early a stage, it slows down the process. So the current idea of VA Live-CD modules is to be an extension of WiKis or other sources, which are sometimes hard to follow, implementing stage “4. Make a one purpose Proof-of-Concept script”. The scripts themselves are more like samples and in many cases assume customization as part of their usage.

    Module for testing main VA Live-CD functionality

    • Module testdrive.default tests AI client functionality with the default.xml manifest (full GUI Live-CD-like installation profile) {x86 and SPARC}
    • Module testdrive.JeOS tests AI client functionality with the JeOS.$myarch.xml manifest (the set of packages selected for the OpenSolaris 200906 JeOS prototype) {x86 and SPARC}
    • Module testdrive.JeOSlr tests AI client functionality with the JeOS-lr.$myarch.xml manifest, based on a local IPS repository and an IPS cluster of the package set from the previous test {x86 and SPARC}

    Selecting Best Strategy for Repeatable AI based JeOS Installations

    Install times were measured on VMware Workstation 6.5 on a Windows XP 64-bit host (Intel Core i7 with 9GB RAM); the reason is that VBox doesn’t limit usage of the host cache, which would influence the measurements. Start time is the first line in the AI log; end time is the last line there.

    • Public network, no mirror (pkg.opensolaris.org), 160 PKGs: 1h13m48s
    • Public network with local mirror (pkg.opensolaris.org + ipkg.czech.sun.com:8000), 160 PKGs: 0h45m20s
    • Sun network, no mirror (ipkg.sfbay.sun.com), 160 PKGs: 0h38m22s
    • Sun network with local mirror (ipkg.sfbay.sun.com + ipkg.czech.sun.com:8000), 160 PKGs: 0h33m22s
    • 1GB local subnet with full local mirror (jsc-repo-a.czech.sun.com:9060), 160 PKGs: 0h16m11s
    • 160-PKG IPS repo on a local zpool on the second virtual disk, 160 PKGs: 0h12m11s
    • 160-PKG IPS repo on a local zpool on the second virtual disk, metapackage: 0h9m41s
    • 160-PKG IPS repo on a local zpool (atime=off) on the second virtual disk, metapackage: 0h7m42s
    • Full local mirror from DVD on a local zpool on the second virtual disk, 160 PKGs: 0h43m23s
    • Full local mirror from DVD on a local zpool (atime=off) on the second virtual disk, 160 PKGs: 0h33m55s

    Current OpenSolaris 200906 REPO complexity (for comparison):

    Repo type   DU size   Packages   Dirs      Files
    Full Net    21GB      24 602     700 519   723 261
    DVD         7.2GB     1 707      305 060   304 860
    JeOS        530MB     170        43 210    42 981

    Files in the 200906 IPS repo live under a directory structure like 09/3187e6; it looks like, because the second part of the directory name is so long, we get practically a 1:1 mapping of directories to files.


    Module with common functionality

    Module common.lib – some scripts that are shared across the other modules.

    For example, code for mounting/unmounting an OpenSolaris installation altroot.
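    A sketch of what such a shared mount/umount helper does; the pool and boot-environment names (rpool, rpool/ROOT/opensolaris) and the /a altroot are assumptions typical for a 200906 install, and run() only echoes the commands here so the sequence can be reviewed before executing it on a real system:

```shell
# Dry-run wrapper: print each command instead of executing it.
# Drop the leading 'run' on a real OpenSolaris system.
run() { echo "+ $*"; }

ALTROOT=/a
run mkdir -p "$ALTROOT"
run zpool import -f -R "$ALTROOT" rpool    # alternate-root import of the pool
run zfs mount rpool/ROOT/opensolaris       # mount the boot environment itself
# ... inspect or fix the installed image under /a ...
run zpool export rpool                     # release the pool again
```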

    Module for Local IPS Repository Recipe

    Module repo.clone.JeOS – a sample of how to create and use a fully local IPS repository clone with all 155 JeOS packages; it is really local: local disk and localhost network.

    Separate repositories are created for x86 and SPARC, because the repositories will be used on different native platforms – binary files for the opposite platform are simply filtered out.

    The IPS install process fully depends on the network, so having a local IPS repository with just the needed packages sitting in the same VM, and using local networking, can dramatically speed up installations.

    For example, a JeOS installation can be sped up 5–10 times by using a fully local IPS repository with just the 155 needed packages.

    Blogs: Home No-Network needed OpenSolaris Live-CD IPS REPO full mirror
    Blogs: Updated script for fully local IPS mirror for faster DC and AI experimenting

    This local installation could in future also be performed from the install media; PKG has the ability to run the server from RO install media, with the writable parts on /tmp.
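    A sketch of that read-only-media scenario with pkg.depotd; the media mount point is an assumption, and run() just echoes the command for review instead of executing it:

```shell
# Dry-run wrapper: print the command instead of executing it.
run() { echo "+ $*"; }

REPO=/media/VA_LIVE_CD/localrepo   # assumed path of the repo on RO media

# Serve the repository read-only from the media, keeping the depot's
# writable state (indexes, logs) under /tmp.
run /usr/lib/pkg.depotd -d "$REPO" -p 80 \
    --readonly --writable-root /tmp/depot-state
```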

    Module for Installing OpenSolaris 200906 JeOS

    Module install.JeOS

    This module will install OpenSolaris 200906 JeOS with all needed post-install customizations.

    It consists of JeOS-install.sh, which handles the main install functionality and which calls JeOS-postconfig.sh with the default customizations.

    JeOS-install.sh [-debug] {--publicrepo|--custom|--localrepo} {--harden|--noharden}
    -debug - activate debug output, mainly for the AI Client and related frameworks
    --publicrepo - install JeOS from the network-based public repo pkg.opensolaris.org
    --custom - install a custom JeOS; we will see what level of customization we can get
    --localrepo - install JeOS from a local IPS repo frozen as the jeos-ai-vm-proto cluster
        the local repo must be attached as a disk with zpool 'localrepos'
        zpool 'localrepos' is prepared with module 'repo.clone.JeOS'
    --harden/--noharden - run / don't run 'secure.JeOS.isc' in the JeOS postinstall
    

    The primary distribution form of JeOS will be a pre-installed VM, but we can also consider direct installations, or indirect ones with a ZFS stream.

    Custom means simple during-install customization capabilities; it is not tested yet and will be used in Cloud customizations.

    This module uses local AI client functionality, which doesn’t need an AI server, and uses patched install libraries to get consistent install images on low memory footprints (512MB of configured RAM).

    Module for cleaning OS Runtime Data

    Module clean.os.runtime – will clean the OS of logs and other information left over from previous boots

    A sample of how to clean an already booted JeOS instance; it can be run from the running system or from the Live-CD against an altroot.
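    An illustrative sketch of what such a cleanup might remove; the exact file list is an assumption, and the scratch ALTROOT default makes the sketch safe to run anywhere (on a live system it would be `/`):

    ```shell
    # Sketch: wipe rotated logs from a (possibly alternate) root and
    # truncate the live log so the image boots with a clean history.
    ALTROOT="${ALTROOT:-/tmp/demo-altroot}"
    mkdir -p "$ALTROOT/var/adm" "$ALTROOT/var/log"
    : > "$ALTROOT/var/adm/messages.0"   # fake rotated log for the demo
    : > "$ALTROOT/var/log/syslog.1"

    clean_runtime() {
        root="$1"
        rm -f "$root"/var/adm/messages.* "$root"/var/log/syslog.*
        : > "$root/var/adm/messages"    # truncate, but keep the file
    }
    clean_runtime "$ALTROOT"
    ```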

    Module for HW arch change and HW Reconfiguration

    Module reconfig.hw.v2v – will reconfigure the HW of a once-booted OS instance so it can be imported and booted in a different virtual HW environment (V2V migration)
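    The core Solaris mechanism behind such a module is requesting a reconfiguration boot so the new virtual hardware is re-enumerated. A hedged sketch (the scratch ALTROOT keeps it runnable anywhere; everything beyond the `/reconfigure` marker is an assumption):

    ```shell
    # Sketch of the reconfig.hw.v2v idea: drop the /reconfigure marker so
    # the next boot behaves like 'boot -r' and re-probes devices.
    ALTROOT="${ALTROOT:-/tmp/demo-v2v}"
    mkdir -p "$ALTROOT/etc" "$ALTROOT/dev"
    touch "$ALTROOT/reconfigure"     # forces device reconfiguration on next boot
    # On a real system one would typically also rebuild the boot archive,
    # e.g. bootadm update-archive -R "$ALTROOT"
    ```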

    Module for securing installed OS instance

    Module secure.JeOS.isc – this module implements additional OpenSolaris hardening following "Secure by Default" policies

    This module is based on Glenn's ISC – Immutable Service Container blog and the Immutable Service Container project, and is called as part of the install.JeOS module when the --harden option is selected.

    Module shrinking disk with ZFS send/receive with 2 disks

    Module stream.l2d.zfs – a simple and fast procedure processed on the system itself

    A 2-disk variant of FLASH-like saving/restoring of an OpenSolaris installation with ZFS streaming

    The first, bootable disk is empty and all needed information is extracted from the second disk holding the pre-installed OS on ZFS

    This script is the local version of the ZFS shrink procedure developed as part of JeOS:

    5993 Document procedure how to Shrink VDisk with ZFS send/receive for P2V or V2V

    In the future this script will be improved to incorporate the planned ZFS Flash feature.


    You need to attach the original disk as the second disk and create a new disk as the first bootable one,
    i.e. on the first bootable IDE or SCSI position, to make GRUB happy:
    1. Back up the original disk with the OpenSolaris installation
    2. In the VM configuration move this disk to another position on IDE or SCSI
    - On VMware click Edit VM options, point to the disk, click Advanced and change the port.
    - On VirtualBox just change the port the disk is attached to in the main VM screen.
    3. Create a new disk with the required size and attach it to the first position on the channel
    

    The original disk needs to be disconnected before the VM reboot, otherwise the boot logic in the kernel will get
    disoriented and the system will fail to boot :-)
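    The core of the 2-disk shrink boils down to a recursive snapshot, a send/receive into the new pool, and marking the new root bootable. A dry-run sketch (pool and boot environment names are assumptions; `run` only prints, so nothing here touches real pools):

    ```shell
    # Dry-run sketch of the 2-disk shrink via ZFS send/receive.
    run() { echo "+ $*"; }

    SRC=oldrpool   # original pool, imported from the second disk
    DST=rpool      # new pool created on the first, bootable disk
    run zfs snapshot -r "$SRC@shrink"
    run "zfs send -R $SRC@shrink | zfs receive -Fd $DST"
    run zpool set bootfs="$DST/ROOT/opensolaris" "$DST"
    # before the first boot one would also install GRUB on the new disk
    # (installgrub) - omitted here since disk/slice names vary
    ```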

    Module shrinking disk with ZFS send/receive with archive NFS

    Module stream.sadr.zfs {TBD} – a more complex and slower procedure where the origin and target systems can differ.

    An NFS-archive-based variant of FLASH-like saving/restoring of an OpenSolaris installation with ZFS streaming

    This script will copy all content of the original disk with a full ZFS send/receive. The script primarily shrinks (compacts) the ZFS
    allocated space into one continuous region (ZFS send/receive serializes the disk data); as a side effect you can also change the size
    of the ZFS container – that is, the size of the virtual disk.

    This script is the NFS version of the ZFS shrink procedure developed as part of JeOS: 5993 Document procedure how to Shrink VDisk with ZFS send/receive for P2V or V2V.
    In the future this script will be improved to incorporate the planned ZFS Flash feature.

    For this module you need a VM configured with: VCPU=1 Memory=1024MB VDISK=8GB NET

    This module utilizes PV (Pipe Viewer) to show progress on the terminal (fd 2 = stderr), so you will not see any progress if you redirect the error output to a log file. This module is non-disruptive; there is no zpool renaming as in the local variant.

    First, save a ZFS stream of the installed system to a writable NFS path:
    You need to pass a RW NFS path (hostname with path) for the ZFS stream archive. The ZFS stream and all its needed control information will
    be saved in this directory. Use the root user; the NFS path must be writable by root and must already exist.
    stream-save.sh RW-NFS-PATH
    stream-save.sh jsc-ai.czech.sun.com/buildpool/store
    Read the stream-save.sh script for more info.
    Second, restore the OS installation from the ZFS stream saved on NFS:
    You need to pass the name of the disk you want to ZFS-stream the content of rpool to. This disk must be first in the system and marked by the
    BIOS as bootable (* in the list)
    You need to pass the NFS path (hostname with path) of the ZFS stream archive, i.e. the directory where the ZFS stream and all its
    control information were saved.
    At the end the 'reconfig.hw.v2v' HW reconfiguration module will be called
    stream-restore.sh [-zerodisk] diskname 'RW-NFS-PATH'
    stream-restore.sh [-zerodisk] diskname jsc-ai.czech.sun.com/buildpool/store
    

    Module for Virtual Hardware support

    Module platform.virt.conf – this module includes needed JeOS modifications for different HW platforms, including some known workarounds for OpenSolaris 200906 on virtual platforms

    For example code for xVM-XEN or Amazon EC2 platforms

    VA Live-CD and JeOS Support for 10+ Most Popular Virtualization Platforms

    The VA Live-CD's out-of-the-box support for 10+ of the most popular virtualization platforms is currently based on tests of OpenSolaris 200811 JeOS, so the assumption is that OpenSolaris 200906, including JeOS, will work on all of these virtualization platforms too, at least with support status Reported-to-Work. If a virtualization platform can be configured in several HW profiles, we select one as the reference Reported-to-Work configuration.

    You can also create OpenSolaris Containers (Zones) in JeOS instances, but keep in mind that each zone will need at least an additional 512MB of memory in the JeOS VM configuration.

    List of Known-to-Work Virtualization Platforms (x86, x86-64)

    • VirtualBox 1.6.x (VDI IDE based configurations)
    • VirtualBox 2.2.4 (VMDK,OVF, ESX compatible SCSI and E1000 configuration)
    • xVM Hypervisor HVM
    • xVM Hypervisor PARA virt-install from OpenSolaris Live-CD
    • xVM Server (EA3) {Later integrated into OpsCenter}
    • XEN HVM 3.x
    • XEN PARA 3.x virt-install from OpenSolaris Live-CD
    • VMware Workstation 5.x and newer
    • VMware Player
    • VMware Server
    • VMware Fusion
    • VMware ESX 3.5u1 and newer
    • VMware ESXi 3.5u1 and newer
    • QEMU 0.9.x and newer
    • KVM (any)
    • Parallels Workstation (Any)
    • Parallels Desktop for MAC (Any)
    • Microsoft Virtual PC 2007 SP2
    • Microsoft Hyper-V 1.0 SP2 (Network legacy mode)
    • Microsoft Hyper-V 1.0 R2 (Network legacy mode)
    • Amazon EC2 (Cloud)
    • Sun Cloud (on VirtualBox – setup for JavaOne 2009)
    • Citrix Free XenServer (Xen derivative, must be checked)
    Notes
    • Some of these virtualization platforms don't directly support OpenSolaris; where it was possible we selected Solaris, Linux 2.6 or Other as the guest type
    • OpenSolaris 200906 Virtualization Assistant Live-CD runs by default only in 32-bit mode.

    List of Known-to-Work Virtualization Platforms (SPARC SUN4V)

    Generally any SUN4V SPARC (a specific of the SUN4V design is that there are only very small differences between real HW and virtualized HW)

    • CPUs: UltraSparc T1,T2,T2+
    • LDoms: v1.2
    Notes
    • Tested primarily on: T2000 (T1)

    HW Arch Check: xVM Hypervisor Para mode

    On the OpenSolaris 200906 xVM Hypervisor

    mkdir /export/test
    vdiskadm create -s 8g -c "osol0906jeos" /export/test/osol0906jeos
    # jsc-xen-1  129.157.107.126  00:50:56:3f:fb:01
    virt-install -n osol0906jeos -r 1024 --mac 00:50:56:3f:fb:01 -f /export/test/osol0906jeos -f /export/test/instance.raw \
    --paravirt --os-type=solaris  --nographics -l /export/test/osol-0906-VirtAssit-proto-1.0a.iso -x "-B livessh=enable"
    

    HW FAST FIX: VGURUni network driver for Parallels

    ni is a free rtl8029 (NE2000) driver which is needed by Parallels systems, so it needs to be present on the Live-CD

    Masayuki Murayama Free NIC drivers for Solaris

    Get the open source driver from Masayuki Murayama's page (under BSD license):
    mkdir -p /usr/share/distro_const/virt_assist/fastfixes/VGURUni
    cd /usr/share/distro_const/virt_assist/fastfixes/VGURUni
    wget http://homepage2.nifty.com/mrym3/taiyodo/ni-0.8.11.tar.gz
    gtar xvfz ni-0.8.11.tar.gz
    cd ./ni-0.8.11
    make clean
    pkg install SUNWgcc
    make
    rm ./Makefile
    ln -s Makefile.amd64_gcc Makefile
    make
    

    Create local IPS repository on port 80

    mkdir -p /usr/share/distro_const/virt_assist/localrepo/repo
    svccfg -s application/pkg/server setprop pkg/port=80
    svccfg -s application/pkg/server setprop pkg/inst_root=/usr/share/distro_const/virt_assist/localrepo/repo
    svcadm refresh application/pkg/server
    svcadm enable application/pkg/server
    tail -f  /var/svc/log/application-pkg-server:default.log
    [13/Dec/2008:11:43:36] ENGINE Listening for SIGTERM.
    [13/Dec/2008:11:43:36] ENGINE Listening for SIGUSR1.
    [13/Dec/2008:11:43:36] ENGINE Bus STARTING
    [13/Dec/2008:11:43:36] ENGINE Started monitor thread '_TimeoutMonitor'.
    [13/Dec/2008:11:43:36] ENGINE Serving on 0.0.0.0:80
    [13/Dec/2008:11:43:36] ENGINE Bus STARTED
    [13/Dec/2008:11:43:36] INDEX Updating search indices
    [13/Dec/2008:11:43:59] INDEX Search indexes updated and available.
    ls /usr/share/distro_const/virt_assist/localrepo/repo
    catalog file pkg search.dir search.pag trans updatelog
    
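    With the depot server running, the repository could be seeded by pulling the needed packages from the public repo with pkgrecv. A dry-run sketch (the package list is an assumption; `run` only echoes the commands):

    ```shell
    # Dry-run sketch: mirror selected packages from the public repository
    # into the local one served above.
    run() { echo "+ $*"; }

    ORIGIN=http://pkg.opensolaris.org/release/
    REPO=/usr/share/distro_const/virt_assist/localrepo/repo
    for p in SUNWcsd SUNWcs; do          # assumed minimal package list
        run pkgrecv -s "$ORIGIN" -d "$REPO" "$p"
    done
    ```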

    Publish LOCALni into IPS repository

    Important: Investigate whether we need to make some dummy entries for SPARC.

    cat <<EOF >> /usr/share/distro_const/virt_assist/fastfixes/VGURUni/ni-0.8.11/LICENSE
    This software is licensed under BSD License:
    /*
    * Copyright (c) 2002-2009 Masayuki Murayama.  All rights reserved.
    *
    * Redistribution and use in source and binary forms, with or without
    * modification, are permitted provided that the following conditions are met:
    *
    * 1. Redistributions of source code must retain the above copyright notice,
    *    this list of conditions and the following disclaimer.
    *
    * 2. Redistributions in binary form must reproduce the above copyright notice,
    *    this list of conditions and the following disclaimer in the documentation
    *    and/or other materials provided with the distribution.
    *
    * 3. Neither the name of the author nor the names of its contributors may be
    *    used to endorse or promote products derived from this software without
    *    specific prior written permission.
    *
    * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
    * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
    * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
    * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
    * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
    * OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
    * AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
    * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
    * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
    * DAMAGE.
    */
    EOF
    eval `pkgsend  -s http://localhost:80 open VGURUni@0.8.11`
    pkgsend -s http://localhost:80 add set name=variant.arch value=i386
    pkgsend -s http://localhost:80 add set name=variant.zone value=global value=nonglobal variant.arch=i386
    pkgsend -s http://localhost:80 add set name=publisher value=VirtualGuru.localhost
    pkgsend -s http://localhost:80 add file /usr/share/distro_const/virt_assist/fastfixes/VGURUni/ni-0.8.11/i386/ni \
    group=sys mode=0755 opensolaris.zone=global owner=root path=kernel/drv/ni reboot_needed=true variant.arch=i386 variant.opensolaris.zone=global
    pkgsend -s http://localhost:80 add file /usr/share/distro_const/virt_assist/fastfixes/VGURUni/ni-0.8.11/i386/dp8390 \
    group=sys mode=0755  opensolaris.zone=global owner=root path=kernel/drv/dp8390 reboot_needed=true variant.arch=i386 variant.opensolaris.zone=global
    pkgsend -s http://localhost:80 add file /usr/share/distro_const/virt_assist/fastfixes/VGURUni/ni-0.8.11/amd64/ni \
    group=sys mode=0755 opensolaris.zone=global owner=root path=kernel/drv/amd64/ni reboot_needed=true variant.arch=i386 variant.opensolaris.zone=global
    pkgsend -s http://localhost:80 add file /usr/share/distro_const/virt_assist/fastfixes/VGURUni/ni-0.8.11/amd64/dp8390 \
    group=sys mode=0755 opensolaris.zone=global owner=root path=kernel/drv/amd64/dp8390 reboot_needed=true variant.arch=i386 variant.opensolaris.zone=global
    pkgsend -s http://localhost:80 add depend fmri=SUNWckr@0.5.11-0.111 type=require variant.arch=i386
    pkgsend -s http://localhost:80 add driver alias=pci1106,926 alias=pci10ec,8029 alias=pci1050,940 alias=pci1050,5a5a \
    alias=pci11f6,1401 alias=pci8e2e,3000 alias=pci4a14,5000 alias=pci10bd,e34 clone_perms="ni 0666 root sys" name=ni \
    perms="* 0666 root root" variant.arch=i386
    pkgsend -s http://localhost:80 add license /usr/share/distro_const/virt_assist/fastfixes/VGURUni/ni-0.8.11/LICENSE license=VGURUni.copyright variant.arch=i386
    pkgsend -s http://localhost:80 add set name=description value="ni rtl8029 (NE2000) Network Driver by Masayuki Murayama" variant.arch=i386
    pkgsend -s http://localhost:80 add set name=info.classification value=org.opensolaris.category.2008:Drivers/Networking variant.arch=i386
    pkgsend -s http://localhost:80 close
    PUBLISHED
    pkg:/VGURUni@0.8.11,5.11:20090610T105252Z
    

    HW FAST FIX: Update Dnet driver from B113

    6768204 P2 driver/dnet dnet interface takes a long time to resume after plumb/unplumb in Hyper-V virtual machine

    Copy SUNWos86r from the public Nevada build NV113 into /tmp

    /tmp/dnet-fix/SUNWos86r/archive
    7z x none.7z
    /tmp/dnet-fix/SUNWos86r/archive# ls
    none  none.7z
    cpio -idmv < none
    etc/bootrc
    kernel/drv/amd64/dnet
    kernel/drv/amd64/elxl
    kernel/drv/amd64/iprb
    kernel/drv/amd64/pcn
    kernel/drv/dnet
    kernel/drv/dnet.conf
    kernel/drv/elxl
    kernel/drv/elxl.conf
    kernel/drv/iprb
    kernel/drv/iprb.conf
    kernel/drv/pcn
    kernel/drv/pcn.conf
    kernel/drv/sd
    kernel/drv/spwr
    kernel/drv/spwr.conf
    ls -l /kernel/drv/dnet
    -rwxr-xr-x 1 root sys 54920 2008-12-04 13:49 /kernel/drv/dnet
    ls -l /kernel/drv/amd64/dnet
    -rwxr-xr-x 1 root sys 84248 2008-12-04 13:49 /kernel/drv/amd64/dnet
    cp  ./kernel/drv/dnet /kernel/drv/
    cp  ./kernel/drv/amd64/dnet /kernel/drv/amd64/
    ls -l /kernel/drv/dnet
    -rwxr-xr-x 1 root sys 54936 2009-05-15 07:09 /kernel/drv/dnet
    ls -l /kernel/drv/amd64/dnet
    -rwxr-xr-x 1 root sys 84280 2009-05-15 07:09 /kernel/drv/amd64/dnet
    

    Creating “instance” zpool for Live-CD custom data

    1. Create an 80MB virtual disk of VMDK ESX type

    Note: 64MB is the smallest zpool size in the OpenSolaris 200906 release

    cat disk.cmd
    vmware-vdiskmanager.exe -c -s 80MB -a lsilogic -t 4 instance.vmdk
    ls -l instance-flat.vmdk instance.vmdk
    -rw-rw-r--   1 root     root     33554432 Jun 12 10:55 instance-flat.vmdk
    -rw-rw-r--   1 root     root         395 Jun 12 10:55 instance.vmdk
    

    2. Start the OpenSolaris 200906 VA with this disk connected and create zpool 'instance'

    echo y |format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c3t0d0 <DEFAULT cyl 30 alt 2 hd 64 sec 32>
    zpool create instance c3t0d0
    zpool list
    NAME       SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
    instance    67M   136K  66.9M     0%  ONLINE  -
    zpool get all instance
    NAME      PROPERTY       VALUE       SOURCE
    instance  size           67M         -
    instance  used           136K        -
    instance  available      66.9M       -
    instance  capacity       0%          -
    instance  altroot        -           default
    instance  health         ONLINE      -
    instance  guid           5750477572259957526  default
    instance  version        14          default
    instance  bootfs         -           default
    instance  delegation     on          default
    instance  autoreplace    off         default
    instance  cachefile      -           default
    instance  failmode       wait        default
    instance  listsnapshots  off         default
    

    3. Generate ssh keys and data dir

    su - root
    mkdir /instance/data
    echo "Put your instance data here, they will be in /tmp/data after boot" >/instance/data/README.txt
    ssh-keygen -q -t dsa -f /tmp/sshvmkeydsa -N ""
    ssh-keygen -q -t rsa -f /tmp/sshvmkeyrsa -N ""
    Save keys with scp
    scp /tmp/sshvmkeydsa /tmp/sshvmkeydsa.pub your_server:/dir
    scp /tmp/sshvmkeyrsa /tmp/sshvmkeyrsa.pub your_server:/dir
    mkdir /instance/.ssh/
    cat /tmp/sshvmkeydsa.pub >/instance/.ssh/authorized_keys
    cat /tmp/sshvmkeyrsa.pub >>/instance/.ssh/authorized_keys
    chmod 0600 /instance/.ssh/authorized_keys
    

    4. Release instance zpool

    cd /
    zpool export -f instance
    

    5. Convert the VMDK virtual disk to other formats

    unzip -l sshvmkeys.virtdisks.zip | awk '{print $4}'
    sshvmkeys.virtdisks/
    sshvmkeys.virtdisks/README.txt
    sshvmkeys.virtdisks/createme.txt
    sshvmkeys.virtdisks/sshvmkeydsa.key
    sshvmkeys.virtdisks/sshvmkeydsa.pub
    sshvmkeys.virtdisks/sshvmkeyrsa.key
    sshvmkeys.virtdisks/sshvmkeyrsa.pub
    sshvmkeys.virtdisks/Microsoft-VHD/
    sshvmkeys.virtdisks/Microsoft-VHD/instance.vhd.zip
    sshvmkeys.virtdisks/Others-RAW/instance.raw.zip
    sshvmkeys.virtdisks/Parallels-HDD/instance.hdd.zip
    sshvmkeys.virtdisks/QEMU-KVM-QCOW/instance.qcow.zip
    sshvmkeys.virtdisks/QEMU-KVM-QCOW/instance.qcow2.zip
    sshvmkeys.virtdisks/VirtualBox-VDI/instance.vdi.zip
    sshvmkeys.virtdisks/VMware-VMDK-SCSI-ESX/instance.vmdk.zip
    sshvmkeys.virtdisks/VMware-Wrk-VMDK-IDE/instance.vmdk.zip
    sshvmkeys.virtdisks/VMware-Wrk-VMDK-SCSI/instance.vmdk.zip
    
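    The conversions behind the listing above can be done with the platforms' standard disk tools. A dry-run sketch (tool availability and exact flags vary by version, so treat these as starting points rather than the exact commands used; `run` only prints):

    ```shell
    # Dry-run sketch: convert the ESX-type VMDK into some of the other
    # listed formats.
    run() { echo "+ $*"; }

    run qemu-img convert -f vmdk -O qcow2 instance.vmdk instance.qcow2
    run qemu-img convert -f vmdk -O raw   instance.vmdk instance.raw
    run VBoxManage clonehd instance.vmdk instance.vdi --format VDI
    ```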

4 Comments »

  1. […] I will use Live-CD media from OpenSolaris 200906 Virtualization Assistant Live-CD Proof of Concept […]

    Pingback by Part 12: Glassfish V3 Pet Catalog sample DEMO in VM Template – Clean and Convert « Virtual Guru's Blog – Home of Virtualization Workshops — February 1, 2010 @ 5:14 pm

  2. […] OpenSolaris 200906 Virtualization Assistant Live-CD PoC […]

    Pingback by Virtual Machines, Templates and Appliances Builder (Main Page) « Virtual Guru's Blog – Home of Virtualization Workshops — February 1, 2010 @ 5:21 pm

  3. […] I this step will be using a Live CD media and/or Source repository from OpenSolaris 200906 Virtualization Assistant Live-CD Proof of Concept […]

    Pingback by Part 8: Glassfish V3 Pet Catalog sample DEMO in VM Template – Cleaning « Virtual Guru's Blog – Home of Virtualization Workshops — February 7, 2010 @ 5:31 pm

  4. […] We will use a AI on Media custom AI manifest installation AI on Media in B130 , feature which we prototype during 2009.06 installation with Bootable AI Live-CD […]

    Pingback by OpenSolaris JeOS Prototype (Part 19: B130 JeOS First Touch – AI installation) « Virtual Guru's Blog – Home of Virtualization Workshops — February 7, 2010 @ 7:31 pm

