Discussion:
[pfSense] ZFS on 2.4.2
Walter Parker
2018-02-21 17:23:02 UTC
Permalink
Hi,

I have 2.4.2 installed on an SG-2220 from Netgate [nice box]. I just bought
a 6TB powered USB drive from Costco and it works great (the drive has its
own power supply and a USB hub). I want to use it to take ZFS backups from my
home server.

I edited /boot/loader.conf.local and /etc/rc.conf.local to load ZFS on boot
and created a pool and a file system. That worked, but the memory ran low
so I restricted the ARC cache to 1G to keep a bit more memory free and
rebooted. When the system rebooted it did not remount the pool (and
therefore the file system) because the pool was marked as in use by
another system (itself). That means that the pool was not properly
exported/unmounted at shutdown.
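
For reference, the edits described above boil down to something like the
following (the ARC value is just 1G expressed in bytes; these are the stock
FreeBSD knobs, nothing pfSense-specific):

# /boot/loader.conf.local -- load ZFS at boot and cap the ARC at 1G
zfs_load="YES"
vfs.zfs.arc_max="1073741824"

# /etc/rc.conf.local -- import pools and mount datasets at boot
zfs_enable="YES"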

Taking a quick look at rc.shutdown, I notice that it calls a customized
pfsense shutdown script at the beginning and then exits. Is there a good
place in the configuration where I can put/call the proper zfs shutdown
script so that the pool is properly stopped/exported so that it imports
correctly on boot?


Walter
--
The greatest dangers to liberty lurk in insidious encroachment by men of
zeal, well-meaning but without understanding. -- Justice Louis D. Brandeis
Vick Khera
2018-02-22 14:32:30 UTC
Permalink
You don't need to export the pool on shutdown. The pool should survive even
an unclean shutdown and import automatically on the next boot.

I can't think of a reason ZFS would fail like you describe.
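
If it does get stuck again, a forced import from the shell clears the "in
use by another system" flag; a minimal sketch, assuming the pool is named
"backup":

# force-import a pool whose hostid marks it as belonging to another system
zpool import -f backup
# confirm it is online and that its datasets mounted
zpool status backup
zfs list -r backup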
Walter Parker
2018-03-01 06:49:57 UTC
Permalink
Forgot to CC the list.
Thank you for the backup script.
By my calculations, 2G should be enough. If I limit the ARC cache to 1G,
that leaves 1G for applications & kernel memory. As I'm not serving the 6TB
drive up as a file server, but using it for one specific task (to receive
the backups from one host), I figure that I don't need lots of memory. ZFS
needs lots of memory to be quick when it is acting as a busy file server.
I've seen testing showing ZFS doing fast file copies on as little as 768M
of total system memory after proper tuning.
I need ZFS because it is the only file system that can receive incremental
ZFS snapshots and apply them. I have not set up the ZFS backup software yet,
so I'm just using rsnapshot. The first time it ran, it filled all 1G of the
cache. I rebooted the firewall afterwards and now ZFS sits at 60-100M of usage
(the amount of data that rsync updates on a daily basis is pretty small).
Right now, the data from the other server is ~8.8G, compressed to 1.7G with
lz4.
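That compression is just a dataset property; checking the ratio looks
roughly like this (the dataset name is a placeholder):

# compress new writes with lz4 on the backup dataset
zfs set compression=lz4 backup/home-server
# report the achieved ratio plus logical vs. on-disk usage
zfs get compressratio,used,logicalused backup/home-server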
When I get the full backup running, it will be ~1.5TB in size. ZFS
snapshots should be pretty small and quick (as it can send just the data
that was updated without having to walk the entire filesystem). An rsync
backup would have to walk the whole system to find all of the changes. Most
of the data on the system doesn't change (as it is a media library).
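The incremental send/receive pattern I mean is roughly the following; the
pool, dataset, and host names here are placeholders, not my actual layout:

# on the home server: snapshot the dataset
zfs snapshot tank/media@2018-03-01
# send only the blocks changed since the previous snapshot and apply them
# on the firewall's USB pool over ssh
zfs send -i tank/media@2018-02-28 tank/media@2018-03-01 | \
    ssh firewall zfs receive -F backup/media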
I'll post back more results if people are interested, after I get the
backup software working (I'm thinking about using ZnapZend).
Walter
I feel like I'm late in responding to this, but I have to say that 2GB of
RAM doesn't seem like nearly enough for a 6TB zfs volume. ZFS is great in
a lot of ways, but is a RAM consuming monster. For something RAM limited
like the 2220 I'd use a different, simpler file format. Then I'd use rsync
based snapshots.
Here's my personal backup script. :-) I haven't tried it FROM pfsense,
but I've used it to back up pfsense.
ED.
Vick Khera
2018-03-01 16:34:16 UTC
Permalink
Here's my simple backup script function. Just stick it into a /bin/sh
script (should work in bash too) and call it once per pfSense instance.
I've been using this for years to backup my production firewalls.

pfsense_config()
{
    local FWNAME FWURL FWPASS CSRF CSRF2 COOKIEFILE PFDATE
    FWNAME="$1"
    FWPASS="$2"

    FWURL="https://${FWNAME}"
    COOKIEFILE=`mktemp -t cookies`
    PFDATE=`date +%Y%m%d%H%M%S`

    printf "Downloading Firewall Config for $FWNAME\n"

    # fetch the backup page once to pick up the CSRF token
    CSRF=`curl -k -L -c ${COOKIEFILE} ${FWURL}/diag_backup.php |
        grep "name='__csrf_magic'" | head -1 |
        sed 's/.*value="\(.*\)".*/\1/'`
    # log in; the response carries a second CSRF token for the download form
    CSRF2=`curl -k -L -c ${COOKIEFILE} -b ${COOKIEFILE} \
        -d "login=Login&usernamefld=admin&passwordfld=$FWPASS&__csrf_magic=${CSRF}" \
        ${FWURL}/diag_backup.php |
        grep "name='__csrf_magic'" | head -1 |
        sed 's/.*value="\(.*\)".*/\1/'`
    # download the config (without RRD data) into a dated file
    curl -k -b ${COOKIEFILE} \
        -d "Submit=download&donotbackuprrd=checked&__csrf_magic=${CSRF2}" \
        -o config-$FWNAME-$PFDATE.xml ${FWURL}/diag_backup.php
    rm -f ${COOKIEFILE}
}


You call it like this:

pfsense_config firewall.example.com mySecr3tPassword

and it stores the backup XML in a file based on the date and firewall name.
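
If you back up more than one firewall, a small wrapper loop over
host/password pairs does the job; the names and passwords here are obviously
placeholders:

# assumes the pfsense_config function above is defined earlier in the script
for fw in "fw1.example.com secret1" "fw2.example.com secret2"; do
    set -- $fw            # split "host password" into $1 and $2
    pfsense_config "$1" "$2"
done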
Curtis Maurand
2018-03-06 02:38:32 UTC
Permalink
ZFS is a memory hog. You need 1 GB of RAM for each TB of disk.
--
Best Regards
Curtis Maurand
Principal
Xyonet Web Hosting
mailto:***@xyonet.com
http://www.xyonet.com
Walter Parker
2018-03-06 17:39:33 UTC
Permalink
Post by Curtis Maurand
ZFS is a memory hog. You need 1 GB of RAM for each TB of disk.
Curtis, can you provide some more details? I have been testing this for the
last couple of weeks and ZFS doesn't require 1G for each TB to function
(which is the standard meaning of need).
From my direct testing and experience, 1G per TB is a rule of thumb for
suggested memory sizing on general-purpose servers. Do you have specific
information that violating this rule of thumb will cause functional issues?
To be more blunt, was this a case of drive-by nerd sniping, or do you know
something that will cause my specific use case to fail at some point in the
future?

Walter
Paul Mather
2018-03-06 18:09:02 UTC
Permalink
Post by Walter Parker
Post by Curtis Maurand
ZFS is a memory hog. You need 1 GB of RAM for each TB of disk.
Curtis, can you provide some more details? I have been testing this for the
last couple of weeks and ZFS doesn't require 1G for each TB to function
(which is the standard meaning of need).
From my direct testing and experience 1G per TB is a rule of thumb for
suggested memory sizing on general purpose servers. Do you have specific
information that violating this rule of thumb will cause functional issues?
To be more blunt, was this a case of drive by nerd sniping or do you know
something that will cause my specific use case to fail at some point in the
future?
The "1G for each TB" sounds like the rule of thumb for when you plan to enable deduplication on a dataset. ZFS deduplication can be a disastrous memory hog (or else completely ruin your performance if you don't have sufficient ARC memory/resources), which is why many people do not enable it unless they've made a serious conscious decision to do so.

I ran ZFS on a 1-2 GB RAM FreeBSD/i386 system for years and it was stable. I had to tune KVM and restrict ARC RAM consumption, but once I did that I had no problems. It's my experience that ZFS is more stable and better tested on FreeBSD/amd64.

Cheers,

Paul.
Peder Rovelstad
2018-03-06 23:51:05 UTC
Permalink
Here's a ZFS tuning guide if you have not seen it:
https://wiki.freebsd.org/ZFSTuningGuide

But it only goes to v9.

Down the page they reference 2-5GB/TB for dedupe. Free advice, worth every
penny paid!

https://www.freebsd.org/doc/en/books/faq/all-about-zfs.html

My NAS4Free server uses 90% of its 4GB RAM for a 3TB volume, configured with
1.75GB arc_max.
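
For anyone tuning similarly, the live ARC size versus the configured cap can
be read straight from sysctl on FreeBSD (values are in bytes):

# current ARC size and the ceiling it is allowed to grow to
sysctl kstat.zfs.misc.arcstats.size
sysctl vfs.zfs.arc_max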



Vick Khera
2018-03-07 14:56:36 UTC
Permalink
Post by Peder Rovelstad
Here's a ZFS tuning guide if you have not seen.
https://wiki.freebsd.org/ZFSTuningGuide
But only goes to v9.
You 100% do not want nor need to turn on de-dupe. Especially on a boot
volume of pfSense.
Peder Rovelstad
2018-03-07 15:32:43 UTC
Permalink
Oh, for certain. LZ4 compression is certainly stressful enough (too much,
actually) for as low-power a device as an SG-2220.

Only posting to fan the flames! :)

Peder Rovelstad
2018-03-07 15:36:47 UTC
Permalink
OH, and w/o ECC memory, it's a time bomb.

Walter Parker
2018-03-07 19:04:18 UTC
Permalink
Post by Peder Rovelstad
OH, and w/o ECC memory, it's a time bomb.
That is an urban legend. One of the original developers of ZFS was
interviewed and asked about the "Scrub of Death"; he said that ZFS doesn't
fail in that way. ZFS is no worse than any other file system when running
on a system without ECC. If there is a time bomb, then it exists for all
file systems running on computers without ECC. As this is one of multiple
backups for the system, the risks are acceptable.

If you have an actual failure method that makes ZFS worse, I'd love to see
the details. Then I could publish a paper and be "Internet famous."


Walter
Vick Khera
2018-03-07 21:47:05 UTC
Permalink
Post by Walter Parker
without ECC. If there is a time bomb, then it exists for all file systems
running on computers without ECC. As this one of multiple backups for the
system, the risks are acceptable.
If you have an actual failure method that makes ZFS worse, I'd love to see
the details. Then I could publish a paper and be "Internet famous.
Yes, this is true. However, other file systems do not offer *any* hint that
your data is corrupt on the platter, the way ZFS does. So if
you know you don't have ECC protection, then you should not expect your
data to be protected end to end. If you have ECC and a "regular" file
system, the same is true. You just never know.
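
That checking is exactly what a scrub exercises; a minimal sketch, assuming
a pool named "backup":

# read every block and verify its checksum, repairing from redundancy if any
zpool scrub backup
# afterwards, list any files ZFS found to be unrecoverably corrupt
zpool status -v backup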
Peder Rovelstad
2018-03-07 22:31:04 UTC
Permalink
Post by Walter Parker
That is an urban legend. One of the original developers of ZFS was
interviewed
OK, then. Not my data. Best of luck.
Walter Parker
2018-03-08 01:18:49 UTC
Permalink
Post by Peder Rovelstad
Post by Walter Parker
That is an urban legend. One of the original developers of ZFS was interviewed
OK, then. Not my data. Best of luck.
I've had other ZFS servers without ECC that have run successfully for
several years. I know the risks and issues. While, yes, servers should run
with ECC, the idea that ZFS requires ECC appears to be a scare story to get
people to buy ECC hardware. From my research over the last 10 years, I would
say that 98% of the people sharing this information are passing on a scary
story that someone else told them. This is like the urban legends that we
used to tell around the campfire. Note, urban legends still get told and
believed. You've heard the one about flashing headlights; some people still
tell and believe that story today.

The closest I've seen to a reason for why it matters to ZFS is that it is
one of the few file systems that can actually tell you when your data is
corrupt before as well as after the fact. It solves many data issues and
people seem to have taken that and require that the rest of the system be
as robust as ZFS. When asked to present actual/real data as to why someone
should use UFS instead of ZFS on a non-ECC system, I notice that the
conversation changes from file systems to don't store data on systems that
don't use ECC. Can anyone show why my solution should switch file systems
(given that I'm keeping my existing hardware) without changing the subject?
I've read many of the scare stories from FreeNAS and they all seem to end
up as a call to authority or a "fine, risk your data" without actually
answering the question.

Does anyone make a standalone pfSense-compatible router that is low power and
not expensive [<$300] with enough ECC [or any ECC] memory?

What would you do on a home budget to get multiple local backups of a
multi-TB file server if you didn't have deep corporate pockets?

I have the Netgate router; it is a real nice box. I don't see why using ZFS
on it in addition to the other systems I have should be an issue, but there
seem to be lots of cooks in the kitchen giving advice without sampling the
product or explaining how they know there is a problem.


Walter
Vick Khera
2018-03-08 14:19:31 UTC
Permalink
Post by Walter Parker
don't use ECC. Can anyone show why my solution should switch file systems
(given that I'm keeping my existing hardware) without changing the subject?
I've read many of the scare stories from FreeNAS and they all seem to end
up as a call to authority or a "fine, risk your data" without actually
answering the question.
The most important feature I use in ZFS is the snapshots. Combined cleverly
with datasets and quotas, they make for very easy management of disk
resources when needed. The FreeNAS model of boot environments is awesome,
and I hope pfSense takes those up as well. It makes upgrades less stressful
when you can just click a button to revert.
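A rough sketch of that dataset/quota/snapshot workflow, with placeholder
names:

# give a dataset its own space budget
zfs set quota=500G backup/media
# take a point-in-time snapshot before a risky change
zfs snapshot backup/media@pre-upgrade
# list snapshots, and roll back if the change went badly
zfs list -t snapshot -r backup/media
zfs rollback backup/media@pre-upgrade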

As for the ECC, see this study
https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35162.pdf
for example. It is slightly old, but RAM hardware has not advanced that much
since then. Basically, if you have a few gigs of RAM in your machine, it
*will* produce bit errors. There are other studies that back this up too,
and they are more recent.

Personally, I don't understand why any computer, desktop or server, made
these days is without ECC. My desktop has 16GB RAM with room for 16 more.
I'm sure there are flipped bits in some of my work somewhere, but I'll
never really know. If I'm lucky, the flipped bits are on unused sections of
code loaded from the disk into RAM.
Zandr Milewski
2018-03-08 16:10:48 UTC
Permalink
As someone who has spent easily 100 hours troubleshooting, rebuilding,
and restoring UFS-based Netgate boxes that have to function in
environments with less-than-datacenter grade power availability, I'll
take "potential corruption in corner cases" over "1 in 4 chance it won't
come back from a power cycle".

*Any* journaled filesystem is an improvement.
Vick Khera
2018-03-08 18:12:23 UTC
Permalink
As someone who has spent easily 100 hours troubleshooting, rebuilding, and
restoring UFS based Netgate boxes that have to function in environments
with less-that-datacenter grade power availability, I'll take "potential
corruption in corner cases" over "1 in 4 chance it won't come back from a
power cycle"
*Any* journaled filesystem is an improvement.
Journaling on UFS is just one setting away. Boot single user from USB, then
run "tunefs -j enable /dev/da0" for your boot device da0. Done. I don't
know why FreeBSD does not recommend this for the boot volume, but I think
as long as you never fill up the disk you're OK. I've not had issues with
it.
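
A quick way to verify it afterwards (the partition name will differ per
box):

# print the filesystem's current parameters; look for soft update journaling
tunefs -p /dev/da0s1a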
Walter Parker
2018-03-08 20:00:41 UTC
Permalink
Post by Vick Khera
Post by Zandr Milewski
As someone who has spent easily 100 hours troubleshooting, rebuilding,
and restoring UFS based Netgate boxes that have to function in environments
with less-that-datacenter grade power availability, I'll take "potential
corruption in corner cases" over "1 in 4 chance it won't come back from a
power cycle"
*Any* journaled filesystem is an improvement.
Journaling on UFS is just one setting away. Boot single user from USB, then
run "tunefs -j enable /dev/da0" for your boot device da0. Done. I don't
know why FreeBSD does not recommend this for the boot volume, but I think
as long as you never fill up the disk you're ok. I've no had issues with
it.
______
That is an interesting idea. As I bought mine directly from the hardware
store, I didn't install pfSense myself. I've never booted it from USB. As
this system doesn't have VGA, I may not be able to use a standard FreeBSD
image out of the box.

Are the FreeBSD 10.2 instructions (
https://www.netgate.com/docs/platforms/rcc-dff-2220/freebsd.html) still
valid for 11.1?


- Connect the console cable (I have that setup)
- Boot from a memstick image plugged into the USB port
- From the Menu select 3, Escape to the loader prompt
- Enter the following commands
- set comconsole_port=0x2F8
- set comconsole_speed=38400
- set hint.uart.0.flags=0x0
- set hint.uart.1.flags=0x10
- set console=comconsole
- boot
- Select shell or LiveCD from the FreeBSD installer menu
- Run tunefs

Or does the 2.4 memstick installer give one an escape to shell option?


Walter
Vick Khera
2018-03-08 20:35:28 UTC
Permalink
Post by Walter Parker
Are the FreeBSD 10.2 instructions (
https://www.netgate.com/docs/platforms/rcc-dff-2220/freebsd.html) still
valid for 11.1?
- Connect the console cable (I have that setup)
- Boot from a memstick image plugged into the USB port
- From the Menu select 3, Escape to the loader prompt
- Enter the following commands
- set comconsole_port=0x2F8
- set comconsole_speed=38400
- set hint.uart.0.flags=0x0
- set hint.uart.1.flags=0x10
- set console=comconsole
- boot
- Select shell or LiveCD from the FreeBSD installer menu
- Run tunefs
Or does the 2.4 memstick installer give one an escape to shell option?
The hint lines for uart flags are unnecessary but harmless since FreeBSD 10.

The image does have a "live" mode where it runs entirely in ramdisk, but
nothing will let you set the serial port to the second port. You will have
to use these settings to use the second port.

You could try just booting to single-user mode and running tunefs. I don't
remember if that works or not for the boot volume with FreeBSD 11.