
My instance on Google Compute Engine is not booting up properly, and I am unable to SSH into it. I have a lot of data on the instance. How can I recover it?

The logs are as follows. When I check from Windows whether the instance is on the network, I get the NAT IP, but I am unable to SSH in, even though it was working fine before. Nor can I SSH from the browser.

[    0.519999] md: autorun ...
[    0.520794] md: ... autorun DONE.
[    0.521761] VFS: Cannot open root device "sda1" or unknown-block(0,0): error -6
[    0.523744] Please append a correct "root=" boot option; here are the available partitions:
[    0.525886] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
[    0.527829] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.19.0-25-generic #26~14.04.1-Ubuntu
[    0.529875] Hardware name: Google Google, BIOS Google 01/01/2011
[    1.656059] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)

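Before rebuilding anything, you can read a wedged instance's boot log from outside the VM via the serial console; this is how output like the panic above is retrieved without SSH. The instance and zone names below are placeholders:

```shell
# Pull the boot/kernel log of the unreachable instance
# (substitute your own instance name and zone)
gcloud compute instances get-serial-port-output my-instance \
    --zone us-central1-a
```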
During the migration from trial to paid user, I lost my running instance with similar symptoms. In my case, however, the flag to auto-delete the disk when deleting the instance was checked, which prevented me from using the method described above. So here's how I was able to recover my drive:

First and foremost, do not delete your corrupted instance. You will need it.

  1. From your main console, identify the name of the disk backing the corrupted instance: gcloud compute disks list
  2. Create a snapshot of the disk that seems corrupted: gcloud compute disks snapshot my-disk-1 --snapshot-names snapshot-1
  3. Create and boot an instance from the newly created snapshot (make sure to turn off the auto-delete flag when creating the new instance). Chances are the newly created instance will run into the exact same boot issue as the original one. That's okay this time, because you can now shut down and delete that instance without losing the drive, which should then show up when listing with gcloud compute disks list (say: new_disk).
  4. Once the instance has been deleted, you should be left with one new mountable drive. To mount it, create a third instance with OS characteristics similar to the original one.
  5. From the Google Cloud console, or using the gcloud command, attach the drive to that new instance (say, ubuntu-trusty-3): gcloud compute instances attach-disk ubuntu-trusty-3 --disk DISK --device-name new_disk. You should now have two drives available on that instance.

$ sudo blkid
/dev/sda1: LABEL="cloudimg-rootfs" UUID="87f65d22-c9a9-428c-b1ab-b4ad9f8e4c05" TYPE="ext4"
/dev/sdb1: LABEL="cloudimg-rootfs" UUID="87f65d22-c9a9-428c-b1ab-b4ad9f8e4c05" TYPE="ext4"

  6. Reboot that instance if the drive does not show up (sudo blkid).
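The whole sequence above can be sketched as a gcloud session. All names here (my-disk-1, new-disk, recovery-vm, ubuntu-trusty-3) are placeholders, and the intermediate recovery-vm step mirrors the answer's boot-and-delete approach:

```shell
# Identify and snapshot the corrupted disk
gcloud compute disks list
gcloud compute disks snapshot my-disk-1 --snapshot-names snapshot-1

# Create a boot disk from the snapshot, then an instance on it
# with auto-delete disabled so the disk survives the instance
gcloud compute disks create new-disk --source-snapshot snapshot-1
gcloud compute instances create recovery-vm \
    --disk name=new-disk,boot=yes,auto-delete=no

# It will likely hit the same panic; that's fine. Delete the
# instance -- the disk remains because auto-delete is off.
gcloud compute instances delete recovery-vm

# Attach the surviving disk to a healthy instance with a similar OS
gcloud compute instances attach-disk ubuntu-trusty-3 \
    --disk new-disk --device-name new-disk
```

These commands need an authenticated gcloud session and an active project, so treat them as a template rather than a copy-paste script.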

Here's how it looked on my dashboard.

In my case, much to my surprise, the kernel booted from the recovered drive (gmap-server) and I was back in business. I have no idea how the kernel picked that one over the disk created with the instance. If anyone knows, please chime in here.


It might be an issue with /etc/fstab: the UUID recorded there doesn't match the disk's actual UUID, so the OS is unable to mount the disk.
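For reference, the root entry in /etc/fstab identifies the filesystem by UUID; if that value differs from what blkid reports for the disk, the mount fails at boot. The entry below is illustrative only, reusing the UUID from the blkid output shown earlier on this page:

```
# /etc/fstab -- root filesystem entry (illustrative UUID)
UUID=87f65d22-c9a9-428c-b1ab-b4ad9f8e4c05  /  ext4  defaults  0  1
```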

To make necessary changes to /etc/fstab on the boot disk you can follow the steps below:

  1. Delete your instance WITHOUT deleting your boot disk (it's a good idea to take a snapshot of the disk before deleting the instance, so that you have a backup to recover from).
  2. Create a temporary instance and attach the boot disk in question as a secondary disk.
  3. SSH into this instance and run $ sudo blkid to get the UUID of the secondary disk.
  4. Mount the secondary disk.
  5. Now you can modify the /DISK-MOUNT-PATH/etc/fstab on secondary disk.
  6. Save the changes and shutdown the instance.
  7. Once done you can delete the temporary instance and create a new instance with your original disk.
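Steps 2 through 6 above might look like this in practice. The instance and disk names, the device node /dev/sdb1, and the mount point are all assumptions; confirm the device with the blkid output on your own instance:

```shell
# Attach the original boot disk to the temporary instance as a
# secondary (non-boot) disk
gcloud compute instances attach-disk temp-vm --disk original-boot-disk

# On temp-vm: find the secondary disk's real UUID
sudo blkid

# Mount it and fix the UUID recorded in its fstab
sudo mkdir -p /mnt/recovery
sudo mount /dev/sdb1 /mnt/recovery
sudo nano /mnt/recovery/etc/fstab   # make the UUID match blkid's report
sudo umount /mnt/recovery
```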

I hope that helps.

  • The error message indicates that the kernel is unable to mount the root partition. Therefore this can't be a problem with /etc/fstab, because that file is on the root partition.

This article is reproduced from Stack Exchange / Stack Overflow.