Black screen after video ends

A problem I’ve had a few times lately is that something odd happens to my Vero 4K+ at the end of a video.

I get a black screen on the TV and pressing buttons on the remote doesn’t bring back the picture.

I can still SSH into it at that point. If I try to reboot it from the command line, this always freezes in mid-reboot, generally with a message about being unable to unmount an autofs share. (See below).

Actually power-cycling the device gets everything back to normal.

It happens rarely enough and unpredictably enough that I can’t enable debug logging and then reproduce the issue.

I did try doing grab-logs -A this time. Result: https://paste.osmc.tv/opigeqaciv

Screenshot of what happens when I try to reboot from the command-line:

(It stays frozen like this indefinitely, and there’s a long pause before the “failed” line comes up).

Any ideas?

  • Fix your share
  • Does this happen with the 2D build?

I’m using a 2D build at the moment, although I do have stretch-devel as an available source.

What do you suppose might be wrong with the share? If it were a problem at the server end, I don’t understand why power-cycling the Vero would invariably make everything work again (which it does), and you’d think I might occasionally see issues in the middle of the video, too. It could be that the server is going into sleep mode, I suppose, but waking it up (by using a wake-on-LAN app on my phone) doesn’t un-freeze the Vero, and rebooting from the command-line still fails after waking up the server.

@angry.sardine, I’ve seen this with my Vero and NAS. In my case I know that it’s an issue with the NAS. It normally happens after a power glitch. What seems to be happening is the NAS stops responding in a way that confuses autofs into thinking that the share is actually still good. When this happens I also see it happen on the same share on my laptop. Rebooting both the Vero and the laptop clears the problem, as does rebooting the NAS.

The next time it happens you could try this:

sudo umount /mnt/nicolas-pc/VideoE
sudo systemctl restart autofs

That should force the share to be unmounted. Then restarting autofs will mount it again.

I’ve never figured out why my NAS does this. I’ve always put it down to it being a cheap WD MyCloud with 6 (yes, 6!) drives connected via a USB hub.


I will give that a try next time.

Given that I’ve got five different autofs shares on the same server, can I write a short script of some kind to unmount them all? I know how to use nano, but I’m not clear what one has to do to execute a file that contains a sequence of commands.

When this happens to me, it’s normally just one share. The way autofs works, it only mounts a share on demand, so normally only one of your five shares would be in use.

But yes, you could write a script that does the unmounts and then restarts autofs.

A very simple (totally untested by me) script would be:

#!/bin/bash

sudo umount /mnt/nicolas-pc/QBTE
sudo umount /mnt/nicolas-pc/QBTF
sudo umount /mnt/nicolas-pc/Interim
sudo umount /mnt/nicolas-pc/VideoE
sudo umount /mnt/nicolas-pc/VideoF
sudo systemctl restart autofs

I’d suggest saving the file in ~osmc/bin as fixmount.sh. Once you’ve created it, make it executable:

chmod +x fixmount.sh

You can then run it with ~/bin/fixmount.sh

The script will probably show errors on the shares that are not actually currently mounted.
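If those errors bother you, here’s a variant (a sketch, equally untested, using the same share paths as above) that checks whether each share is actually mounted before trying to unmount it:

```shell
#!/bin/bash
# fixmount.sh — unmount only the shares that are currently mounted,
# then restart autofs. Edit the share names to match your setup.
unmounted=0
for share in QBTE QBTF Interim VideoE VideoF; do
    dir="/mnt/nicolas-pc/$share"
    # mountpoint -q exits 0 only when the directory is a live mount
    # point, so unmounted shares are skipped without an error message.
    if mountpoint -q "$dir"; then
        sudo umount "$dir" && unmounted=1
    fi
done
# Only restart autofs if something was actually unmounted.
if [ "$unmounted" -eq 1 ]; then
    sudo systemctl restart autofs
fi
```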


You should be able to use this command to force an unmount of all of your NFS shares at the same time:
sudo umount -adf -t nfs
