```
# zpool status -x
  pool: zeepool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
   see:
 scrub: scrub completed after 0h2m with 1 errors on Tue Mar 11 2008
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         DEGRADED     0     0     9
          c2t0d0s0    DEGRADED     0     0     9

errors: Permanent errors have been detected in the following files:

        /mnt/root/lib/amd64/1
```

```
# fmdump
TIME            UUID                                 SUNW-MSG-ID
Aug 18 .1940    940422d6-03fb-4ea0-b012-aec91b8dafd3 ZFS-8000-D3
Aug 21 .5264    692476c6-a4fa-4f24-e6ba-8edf6f10702b ZFS-8000-D3
Aug 21 .7312    45848a75-eae5-66fe-a8ba-f8b8f81deae7 ZFS-8000-D3
```

```
# fmstat
module             ev_recv ev_acpt wait  svc_t  %w  %b  open solve  memsz  bufsz
cpumem-retire            0       0  0.0    0.0   0   0     0     0      0      0
disk-transport           0       0  0.0   55.9   0   0     0     0    32b      0
eft                      0       0  0.0    0.0   0   0     0     0   1.2M      0
fabric-xlate             0       0  0.0    0.0   0   0     0     0      0      0
fmd-self-diagnosis       0       0  0.0    0.0   0   0     0     0      0      0
io-retire                0       0  0.0    0.0   0   0     0     0      0      0
snmp-trapgen             0       0  0.0    0.0   0   0     0     0    32b      0
sysevent-transport       0       0  0.0 4501.8   0   0     0     0      0      0
syslog-msgs              0       0  0.0    0.0   0   0     0     0      0      0
zfs-diagnosis            0       0  0.0    0.0   0   0     0     0      0      0
zfs-retire               0       0  0.0    0.0   0   0     0     0      0      0
```

```
# fmadm config
MODULE                   VERSION STATUS  DESCRIPTION
cpumem-retire            1.1     active  CPU/Memory Retire Agent
disk-transport           1.0     active  Disk Transport Agent
eft                      1.16    active  eft diagnosis engine
fabric-xlate             1.0     active  Fabric Ereport Translater
fmd-self-diagnosis       1.0     active  Fault Manager Self-Diagnosis
io-retire                2.0     active  I/O Retire Agent
snmp-trapgen             1.0     active  SNMP Trap Generation Agent
sysevent-transport       1.0     active  Sys Event Transport Agent
syslog-msgs              1.0     active  Syslog Messaging Agent
zfs-diagnosis            1.0     active  ZFS Diagnosis Engine
zfs-retire               1.0     active  ZFS Retire Agent
```

```
# fmdump -eV
TIME                           CLASS
Aug 18 2008 .186159293 open_failed
nvlist version: 0
        class = open_failed
        ena = 0xd3229ac5100401
        detector = (embedded nvlist)
        nvlist version: 0
                version = 0x0
                scheme = zfs
                pool = 0x4540c565343f39c2
                vdev = 0xcba57455fe08750b
        (end detector)

        pool = whoo
        pool_guid = 0x4540c565343f39c2
        pool_context = 1
        pool_failmode = wait
        vdev_guid = 0xcba57455fe08750b
        vdev_type = disk
        vdev_path = /dev/ramdisk/rdx
        parent_guid = 0x4540c565343f39c2
        parent_type = root
        prev_state = 0x1
        __ttl = 0x1
        __tod = 0x48aa22b3 0xb1890bd
```

This error means that the disk slice doesn't have any disk space allocated to it, or possibly that a Solaris fdisk partition and the slice don't exist on an x86 system.

action: Attach the missing device and online it using 'zpool online'.

If the disk slice has no space allocated to it, use the format utility to allocate disk space to the slice.
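For example (the pool and device names below are illustrative, reusing zeepool and c2t0d0s0 from the output above), you might confirm the slice's allocation and then bring the device back online along these lines:

```
# prtvtoc /dev/rdsk/c2t0d0s0      # check that the slice has sectors allocated
# zpool online zeepool c2t0d0s0   # bring the device back online once it is accessible
# zpool status -x                 # confirm the pool has returned to a healthy state
```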

```
warning failed while updating the boot sectors for disk1 partition1
```

* Import the pools one by one, skipping the pools that are having issues, as described in the fmdump output (see the sketch below).
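A minimal sketch of importing pools individually; the pool names tank, datapool, and the faulted pool are illustrative only:

```
# zpool import           # with no arguments, lists the pools available for import
# zpool import tank      # import each healthy pool by name...
# zpool import datapool  # ...skipping the pool that fmdump -eV reported as having fatal errors
```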

```
# zpool list pool
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
pool  16.8G  76.5K  16.7G   0%  ONLINE  -
# zpool replace pool c1t16d0 c1t1d0
# zpool replace pool c1t17d0 c1t2d0
# zpool list pool
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
pool  16.8G  88.5K  16.7G   0%  ONLINE  -
# zpool set autoexpand=on pool
# zpool list pool
NAME   SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
pool  68.2G  117K  68.2G   0%  ONLINE  -
```

```
# zpool upgrade -v
This system is currently running ZFS pool version 10.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
```

Review the following supported ZFS and zones configurations.

These configurations are upgradeable and patchable. In particular, a ZFS zone root configuration can be upgraded or patched.

See the ZFS Administration Guide for information about supported zones configurations that can be upgraded or patched in the Solaris 10 release.
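As an illustration only, upgrading such a configuration with Live Upgrade might look roughly like the following; the boot environment name zfsBE2 and the media path are placeholders, and the exact procedure depends on your configuration:

```
# lucreate -n zfsBE2                       # create a new boot environment as a clone of the active one
# luupgrade -u -n zfsBE2 -s /cdrom/cdrom0  # upgrade the inactive boot environment from install media
# luactivate zfsBE2                        # make the upgraded boot environment active
# init 6                                   # reboot into the new boot environment
```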

If you need to recover the root password or fix a similar problem that prevents a successful login in a ZFS root environment, you will need to boot failsafe mode on a Solaris 10 system or boot from alternate media, depending on the severity of the error. Select either of the following recovery methods.

The best way to change the active boot environment is to use the luactivate command.

During the boot process, each pool must be opened, which means that pool failures might cause a system to enter a panic-reboot loop. To recover from this situation, ZFS must be informed not to look for any pools on startup. These actions cause ZFS to forget that any pools exist on the system, preventing it from trying to access the bad pool that is causing the problem (see the sketch at the end of this section). If you have multiple pools on the system, do these additional steps:

* Determine which pool might have issues by using the fmdump -eV command to display the pools with reported fatal errors.
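The following is a minimal sketch of telling ZFS not to look for pools at startup, assuming a SPARC system booted to the none milestone and that /etc/zfs/zpool.cache is the pool cache file in use; the renamed file name is arbitrary:

```
ok boot -m milestone=none                            # boot without starting services, so no pools are opened
# mount -o remount,rw /                              # remount the root file system as writable (exact command depends on the root file system)
# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad   # move the cache file so ZFS forgets that any pools exist
# svcadm milestone all                               # continue to the normal system milestone
```

Once the faulty pool has been repaired or excluded, the remaining pools can be imported again with zpool import, as shown earlier.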