Article 5BDWQ Nextcloud - RPi 4 - mdadm unable to remove RAID 5 array


by g4njawizard from LinuxQuestions.org on (#5BDWQ)
Hi everyone,

I am currently trying to create a new RAID 5 array, because I experienced huge performance issues when running updates on Nextcloud: it takes almost a full day to create a backup, download the files, and extract and replace them.
I have already used NextcloudPi and the normal Nextcloud server version on my Raspberry Pi. Both are slow at reading from and writing to the disks, although uploading and downloading files from the cloud runs smoothly.

On my Raspberry Pi 4 (4 GB), I have four SATA disks of 2 TB each. The first time I created an array, I used RAID 5 with a spare disk. But this constellation persists every time, even after I delete all the RAID information. This time I want to use all four disks without a spare.

I already zeroed all four disks, which took roughly 6-8 hours per disk.
But the write speed was IMO OK: each disk managed roughly 130-140 MB/s.
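For reference, the zeroing was done along these lines (just a sketch of the approach; I'm assuming plain dd over each whole device, and it obviously destroys everything on the disk):

Code:# Zero one whole disk, showing progress (DESTROYS all data on it);
# repeat for /dev/sdb, /dev/sdc and /dev/sdd.
root@ncloud:~# dd if=/dev/zero of=/dev/sda bs=1M status=progress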

What I've also tried:

Code:root@ncloud:~# mdadm --stop /dev/md127
mdadm: stopped /dev/md127
root@ncloud:~# mdadm --remove /dev/md127
mdadm: error opening /dev/md127: No such file or directory
root@ncloud:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
unused devices: <none>
root@ncloud:~# mdadm --examine --brief --scan --config=partitions
ARRAY /dev/md/vol1 metadata=1.2 UUID=46c6092a:69f8d8f4:23bfc213:a9fb2222 name=ncloud:vol1
root@ncloud:~# mdadm --zero-superblock /dev/sda /dev/sdb /dev/sdc /dev/sdd
root@ncloud:~# wipefs -af /dev/sda /dev/sdb /dev/sdc /dev/sdd
root@ncloud:~# mdadm --examine --brief --scan --config=partitions

But still, the same array with a spare disk keeps coming back.
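One thing I've read is that stale superblocks can also survive on old partitions or near the end of a disk (version 0.90 metadata sits at the end of the device, and wipefs only clears signatures it recognises at known offsets). So a more thorough wipe might look like this; a rough sketch, assuming the same four whole-disk devices, and it destroys everything on them:

Code:# Check whether any partitions still carry RAID signatures:
root@ncloud:~# lsblk -o NAME,FSTYPE,TYPE /dev/sda /dev/sdb /dev/sdc /dev/sdd
# Zero the first and last 8 MiB of every disk (blockdev --getsz
# reports the size in 512-byte sectors, so /2048 converts to MiB):
root@ncloud:~# for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do dd if=/dev/zero of=$d bs=1M count=8; dd if=/dev/zero of=$d bs=1M count=8 seek=$(( $(blockdev --getsz $d) / 2048 - 8 )); done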

Code:root@ncloud:~# mdadm --create --verbose --chunk=128 /dev/md/vol1 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: size set to 1953382400K
mdadm: automatically enabling write-intent bitmap on large array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/vol1 started.
root@ncloud:~# mdadm --detail /dev/md/vol1
/dev/md/vol1:
Version : 1.2
Creation Time : Wed Dec 9 08:09:29 2020
Raid Level : raid5
Array Size : 5860147200 (5588.67 GiB 6000.79 GB)
Used Dev Size : 1953382400 (1862.89 GiB 2000.26 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Wed Dec 9 08:09:30 2020
State : clean, degraded, recovering
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1

Layout : left-symmetric
Chunk Size : 128K

Consistency Policy : bitmap

Rebuild Status : 0% complete

Name : ncloud:vol1 (local to host ncloud)
UUID : 46c6092a:69f8d8f4:23bfc213:a9fb2222
Events : 2

Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
2 8 32 2 active sync /dev/sdc
4 8 48 3 spare rebuilding /dev/sdd
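Having re-read the mdadm man page, I wonder whether the "spare rebuilding" state right after --create is actually normal: when creating a RAID 5 array, mdadm deliberately builds a degraded array with the last disk as a spare and rebuilds onto it, because that is faster than resyncing the parity of a full array, and the man page says this behaviour can be overridden with --force. If that's right, the spare should simply turn into an active disk once the rebuild finishes. A sketch of the two options (same devices and array name as above):

Code:# Option 1: just wait -- watch the rebuild; the spare should go
# active at 100% and "Spare Devices" should drop to 0:
root@ncloud:~# watch cat /proc/mdstat
# Option 2: recreate with all four disks active from the start:
root@ncloud:~# mdadm --create --force --verbose --chunk=128 /dev/md/vol1 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd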