I wanted to add an attached hardware RAID to my existing setup so I could have a dedicated partition to back up the various family computers around the house. This RAID would be connected via USB so that if a recovery were necessary I could simply disconnect it from the NAS and restore locally over USB, which would be much faster than over the network.
I figured it was also a good time to upgrade the storage of my primary disks. So the goal was to use my old disks for the external hardware RAID and the new larger disks as my primary NAS storage. Here’s a summary of my setup:
A software RAID 1 is being used for the NAS storage
A hardware RAID 1 is being used for the external USB storage
The hardware RAID will be formatted as NTFS so that the USB interface works with different OS types
I would have preferred the software RAID for the external drive, but it would only work on Linux.
Install the new larger-capacity drives in the external enclosure with JBOD enabled and create a software RAID using mdadm (I formatted my drives using mkfs.ext4)
Plug the external enclosure into NAS
rsync the old partition to the new partition
Login to OMV and move existing shared folders to the new partition
Power down NAS and remove the old drives
Insert the new drives into NAS, power up and verify operation.
Insert old drives into the external enclosure and follow the instructions in the manual to enable RAID 1 (see instructions at the end of this post)
Fix Docker containers. Docker could not see my containers and images even though OMV reported the default location as the new shared folder path. Reinstalling Docker and resetting the default path fixed this issue.
Edit 20190714: Reinstalling Docker did not completely resolve the issue, since the containers also appeared to contain references to the old volume. To fix this I removed the Portainer container and re-established it. Then I used Portainer to “recreate” the containers, which appeared to be the simplest approach.
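The mdadm and rsync steps above can be sketched as follows. This is illustrative only: the device names (/dev/sda, /dev/sdb), the array name /dev/md0, and the mount points are assumptions, not what the post used — check lsblk before running anything.

```shell
# Confirm which devices the enclosure's disks show up as first.
lsblk

# Create the RAID 1 array and format it (this destroys existing data!).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mkfs.ext4 /dev/md0

# Mount old and new storage side by side, then copy everything over.
# -a preserves permissions/ownership/timestamps; -H keeps hard links.
mount /dev/md0 /mnt/new
rsync -aH --info=progress2 /mnt/old/ /mnt/new/

# Persist the array definition so it assembles on boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```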
My Synology DiskStation recently took a crap, but that’s OK because it was OLD (a DS211j). I’ve really just been waiting for this to happen. I knew I wanted to build my next one, and that it would be open source, so I started with the software and worked backwards. I also knew I wanted to go embedded, which would help drive the cost down. After all, embedded processors are getting much more powerful these days.
OpenMediaVault (OMV) looked like a decent choice for the distro, and of the available embedded systems the Pine64 offering looked like one of the best options, specifically the RockPro64. Here are some of the things I really liked about this board:
Amazing CPU specs and decent amount of on-board memory
PCIe support… this is how Synology implements their dual bays (no USB here)
They already had an available offering of NAS related mechanical parts
All of these are available from Pine64 (except for the hard drives) which I highly recommend as the source. I already had an SD card I purchased from Amazon.
Pine64 build instructions are available here although they are not great.
Install the OS
Pine64 provides pretty good instructions, which are available here. They use a custom build of Etcher to simplify installation on an SD card. Since I’m going with an OMV installation, choosing the Stretch OpenMediaVault OS Image arm64... was an obvious choice.
The build version available was 0.7.9, but checking the ayufan-rock64 releases showed the latest pre-release version as 0.7.11. Luckily the developers make it easy to upgrade, with instructions provided in the link.
Configure the system
At this point I needed to get minimal services up and running such that my Plex Media Server had its data source back and so my wife could get at her data.
Change the Administrator password
Available in General Settings -> Web Administrator Password
Change the Timezone
Available in Date & Time
Update the system
It’s likely the image you are using has out-of-date packages and OMV software.
The system can be updated from the OMV GUI using Update Management.
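If you prefer a shell over the GUI, the same update can be done over SSH. OMV is Debian-based, so plain APT works; some OMV releases also ship an `omv-update` wrapper, but treat its availability on your image as an assumption.

```shell
# Refresh package lists and apply all pending upgrades as root,
# including OMV's own packages.
apt-get update
apt-get dist-upgrade -y
```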
Set up email notifications
Available in Notification
Gmail settings are as follows:
Gmail SMTP server address: smtp.gmail.com
Gmail SMTP username: your full Gmail address (e.g. firstname.lastname@example.org)
Gmail SMTP password: your Gmail password
Gmail SMTP port (TLS): 587
Gmail SMTP port (SSL): 465
Gmail SMTP TLS/SSL required: yes
Be sure to send a test email.
If two-factor authentication is enabled, the password must be an app password generated via your Gmail settings. Instructions are provided here.
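To sanity-check the SMTP settings independently of OMV, you can talk to Gmail’s servers directly from the NAS; openssl is the only tool assumed here, and this is just a connectivity check, not something from the original post.

```shell
# Verify the NAS can reach Gmail's submission port and negotiate
# STARTTLS (the TLS/587 combination from the settings above).
openssl s_client -starttls smtp -connect smtp.gmail.com:587 -crlf -quiet

# The implicit-SSL port (465) can be checked the same way.
openssl s_client -connect smtp.gmail.com:465 -quiet
```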
Going into this I really didn’t know anything about my disks. Running fdisk -l showed me that both disks were present, as well as the software RAID that Synology had created for me. Under these conditions, an attempt to mount one of the disks would produce an error telling me the disk was part of a RAID. Trying the same thing with the RAID would tell me the RAID was part of an LVM.
Gives the mount point name
I initially used native Linux tools to inspect the LVM so that I could mount the disks and see my data. Here is a good tutorial on that with the key commands listed below.
Mounted LVM to /mnt using
mount /dev/ /mnt
At this point I could cd to the disk and see my data. Hooray! No data loss.
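The inspection commands from that tutorial look roughly like this. The volume group and logical volume names below (vg1000/lv) are hypothetical examples — the actual names Synology used will appear in the lvdisplay output.

```shell
# Scan for physical volumes, volume groups, and logical volumes
# left behind by the Synology.
pvscan
vgscan
lvscan

# Activate the volume group so its logical volumes appear under /dev.
vgchange -ay

# List the logical volumes -- this gives the device path to mount.
lvdisplay

# Mount the logical volume (names here are hypothetical).
mount /dev/vg1000/lv /mnt
```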
If you want to know what file system type is being used, while the disk is mounted run:
If you want to know the disk allocations, cd to the root directory and run:
du -d1 -h
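The exact command used for the filesystem-type check didn’t survive in the post; `df -T` is one common way to do it, shown here alongside the disk-allocation check.

```shell
# Show the filesystem type of the mounted disk (df -T is an
# assumption; lsblk -f would also work).
df -T /mnt

# Show per-directory disk usage, one level deep, human readable.
cd /mnt && du -d1 -h
```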
I recognized that I COULD have set up the mount myself in /etc/fstab or using autofs, but that sort of defeats the purpose of running NAS software. Luckily there is a plugin to support LVMs, which allowed the disk set to be automatically recognized. After that, all I needed to do was mount it through OMV.
After some time running, I started getting degraded-RAID notifications saying that one of the disks had failed. I was skeptical that this was actually the case, so rather than replace the disk, I used the OMV interface to do the following:
Remove the failed disk from the RAID
Format the disk
Recover the disk
Syncing will take some time if you have a lot of data!
Following these steps allowed me to fully recover the RAID… I think I got lucky, though.
I also ended up with a SparesMissing event. According to this post the spares flag was set to 1. Setting this to zero resolved the issue.
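The SparesMissing fix amounts to correcting the array definition in mdadm’s config. The array name /dev/md0 and the example line below are assumptions for illustration; check your own mdadm.conf first.

```shell
# Inspect the array definition that mdadm monitors against.
grep ARRAY /etc/mdadm/mdadm.conf
# Example of the problem line:
#   ARRAY /dev/md0 metadata=1.2 spares=1 UUID=...
# With a two-disk RAID 1 and no hot spare, spares=1 is wrong.

# Remove the flag (equivalent to setting it to zero), then restart
# the mdadm monitor (service name may differ by Debian release).
sed -i 's/ spares=1//' /etc/mdadm/mdadm.conf
systemctl restart mdmonitor
```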
Creating users with home shares
This part was amazing. It turns out that if I created users with the same usernames as on my Synology, and specified the same home base path, my existing directories would be recognized and I wouldn’t have to move any data around.
Worked like a charm!
Setting up Fan support
The only current solution is provided by tuxd3v/ats, which provides simple control of the fan based on CPU temperature. With the current stable version of the ayufan image (0.7.9) there is no fan support, so you will need to upgrade. The reason is that /sys/class/hwmon/hwmon0/pwm1 does not exist in earlier versions, but is required.
Alternatively, you can write an integer between about 150 and 255 to that file to control the fan speed. Lower values appear to have no effect on fan speed.
To run the fan at full speed 24/7, edit root’s crontab (crontab -e as root), then add the following line and save:
@reboot echo "255" > /sys/class/hwmon/hwmon0/pwm1
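If you want something between off and always-full-speed, the pwm1 interface can be scripted by hand. This is a minimal sketch, not from the post: the sysfs thermal path, the 40–70 °C band, and the 150 floor are all assumptions based on the effective range noted above.

```shell
#!/bin/sh
# pwm_for_temp: map a CPU temperature (degrees C) onto the fan's
# effective PWM range (~150 = slowest effective speed, 255 = full).
pwm_for_temp() {
    t=$1
    if [ "$t" -lt 40 ]; then
        echo 150
    elif [ "$t" -gt 70 ]; then
        echo 255
    else
        # Linear ramp: 150 at 40 C up to 255 at 70 C.
        echo $(( 150 + (t - 40) * 105 / 30 ))
    fi
}

# Read the SoC temperature (reported in millidegrees) and drive the
# fan. Run as root; uncomment the write once the paths check out.
if [ -r /sys/class/thermal/thermal_zone0/temp ]; then
    temp_c=$(( $(cat /sys/class/thermal/thermal_zone0/temp) / 1000 ))
    # pwm_for_temp "$temp_c" > /sys/class/hwmon/hwmon0/pwm1
fi
```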
Setting up SMB shares
SMB is the current method for sharing data with my Plex server. With the base file system in place on my RAID, all that was necessary was to create a shared folder that pointed to my media directory and then tell the SMB service to expose that to the network. I also made sure my user account had access to it such that the credentials on the Plex media server were still valid. Another example where using the previous Synology credentials allowed for a smooth transition.
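Once the share is up, it can be verified from the Plex server (or any Linux box) with smbclient; the hostname, share name, and user below are placeholders.

```shell
# List the shares the NAS exposes (replace nas.local and myuser).
smbclient -L //nas.local -U myuser

# Connect to the media share and list its contents.
smbclient //nas.local/media -U myuser -c 'ls'
```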
Setting up Time Machine
This can be accomplished via the Services->Apple Filing menu. The main steps include:
Enable the setting
Create a share with time machine support enabled
Give users access to this share
You should now be able to select the NAS as a destination in Time Machine preferences.
Installing extra plugins
I never intended for this build to be a server replacement. For example I would not run Plex on it, but rather just use it for housing media. I may convert to it one day, but I’ve been happy with the performance of my existing server for CPU intensive tasks like Plex.
I installed the openmediavault-sensors plugin, but it failed for this particular architecture. I will be looking into this further, as it would be a really nice feature to have.
This was more for the fun of it. It’s definitely convenient if you need a quick terminal from a mobile phone.
The plugin failed on install, but seemed to work after a reboot.
OMV provides a docker plugin which installs the Docker engine as well as a Docker management GUI. I played around with the GUI, but frankly I was underwhelmed, having used Portainer before, and thus decided to go that direction. I still used the plugin to manage the Docker engine.
I also created a docker user such that the base path for Portainer had a location on the RAID to create images and volumes.
CAUTION: If you don’t do this Portainer will use the SD card image for storage which is something you don’t really want. This also prevents portability of the docker environment if you decide to swap or want to modify the OS.
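Pointing the Docker engine at the RAID is a daemon-level setting. The sketch below writes data-root into /etc/docker/daemon.json; the path is a placeholder for wherever OMV mounted your array, and this CLI route is an alternative to doing it through the OMV plugin.

```shell
# Move Docker's storage off the SD card and onto the RAID.
# The path below is an example; use your actual OMV mount point.
cat > /etc/docker/daemon.json <<'EOF'
{
  "data-root": "/srv/dev-disk-by-label-raid/docker"
}
EOF

# Restart the engine and confirm the new root is in effect.
systemctl restart docker
docker info | grep -i 'docker root dir'
```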
This was mainly intended to support printing from iPhones. Printing from computers would not use this method.
After some trial and error I discovered this issue with the CUPS package that OMV provides, and it turns out it’s being deprecated in favor of Docker. However, I played with several Docker implementations and could not get them to show up on my iOS device. After some digging I found this post on setting up AirPrint with a Raspberry Pi, which I installed locally on the NAS. I did not need to install the printer files recommended at the end of the post (although I think they’re useful if using USB). Once my printer was added, it showed up on my iPhone.
TODO: Create a docker container with the methods used in the tutorial
I’m not going to go into details here about Portainer, but I will say that for web services, if you can install a “stack” then that is the way to go. The main reason is that it decouples the database from the application and easily sets up communication between the two Docker containers such that they are on the same network.
For whatever reason I could not get the DockerHub or GitHub docker-compose snippets to work as a stack. Even though I provided the environment variables to automatically set up the database and service, they just didn’t seem to execute properly. As a troubleshooting step I decided to create a stack that included Adminer, and for whatever reason that seemed to work, even though I didn’t do anything special except log into Adminer. Here is a gist I created that ended up working for me.
No assembly instructions were provided. Assembly was fairly intuitive, but still questionable at a few points along the way.
Doesn’t support 3.5-inch disks very well. The problem is that the cable routing becomes clunky, and the cables butt up against the fan when installed.
Light pipe is useless. I understand what the designer was trying to achieve here but it really didn’t pass useful light through.
Random hole on the front. If you look at the Pine64 wiki it shows a silk screen denoting that it’s for an IR receiver. The case I received did not have that, so it wasn’t obvious until reading the wiki.
Make sure you buy their fan. If you think a random fan you have laying around will work, then think again. It needs a fairly long cable and has a connector that just might not match yours. A 100-mil female header will work, though, if you have one laying around.
HDD LEDs would be an amazing addition
SATA PCIe Card Issue
The SATA connectors on the PCIe card aren’t great because they don’t allow the cables to lock. After some digging I realized that I was sent an older version: the picture on their website shows a revision B, while the one I was sent has no apparent revision. The rev B also has locking connectors for the SATA cables.