In recent years, TrueNAS SCALE has established itself as the go-to open-source solution for building a high-performance NAS for personal use. Based on Debian Linux and powered by ZFS, it promises the best of both worlds: the robustness of an enterprise-grade file system and the flexibility of a modern platform with containers and integrated applications. On paper, it sounds appealing. In practice, it’s a bit more nuanced—and that’s the focus of this article.
After several months of use, hardware migrations, late-night debugging sessions, and a few unexpected surprises, here’s an honest review of TrueNAS SCALE Dragonfish in a mid-sized personal infrastructure.
What Is TrueNAS SCALE?
TrueNAS SCALE is the Linux branch of the TrueNAS family, developed by iXsystems. It differs from the older CORE (FreeBSD) branch in several fundamental ways:
Linux/Debian-based — The system is built on Debian, which opens the door to much broader hardware support than FreeBSD, particularly for modern network cards and RAID controllers.
Native ZFS — The core of the system remains ZFS, with all that entails: data checksums, atomic snapshots, transparent compression, deduplication, and cache management (ARC) that can consume tens of gigabytes of RAM to speed up reads.
Built-in Applications — SCALE includes a container-based application system, allowing services to be deployed directly on the NAS without a dedicated VM.
Modern Web Interface — The UI has been completely redesigned and covers nearly all operations without needing to use a terminal.
The Real Benefits
ZFS: The Real Selling Point
This is the main reason to choose TrueNAS over a competing solution. ZFS offers a level of data protection rarely seen in the consumer market:
- Every data block is checksummed and verified upon read—silent corruption (bit rot) is automatically detected and corrected on redundant pools.
- Snapshots are instantaneous and require no initial storage space. They allow you to revert any dataset in a matter of seconds.
- ARC (Adaptive Replacement Cache) makes intelligent use of available RAM. On a machine with 32 GB, it's not uncommon to see 20+ GB held in the ZFS cache during intensive transfers, with an immediately visible effect on throughput. And unlike other systems, this RAM isn't "wasted": ARC is dynamic and releases memory as soon as another process needs it. The more RAM you give TrueNAS SCALE, the better it performs on reads, and this is one of the rare cases where adding a few used DDR3 modules yields a return on investment measurable directly in MB/s.
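To make this concrete, here is what snapshot handling and ARC inspection look like from the TrueNAS shell; `tank/documents` is a hypothetical dataset name:

```shell
# Create an instant, initially zero-cost snapshot (hypothetical dataset)
zfs snapshot tank/documents@before-cleanup

# List snapshots along with the space they actually consume
zfs list -t snapshot -o name,used,referenced -r tank/documents

# Revert the dataset to the snapshot in seconds
zfs rollback tank/documents@before-cleanup

# Current ARC size in bytes, straight from the kernel counters
awk '/^size / {print $3}' /proc/spl/kstat/zfs/arcstats
```

A snapshot only starts consuming space as the live dataset diverges from it, which is why taking one before any risky operation is essentially free.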
Advanced Network Integration
TrueNAS SCALE natively supports network bonding (LACP, Load Balance), VLANs, and purpose-specific interfaces. Having an administration interface on a separate management VLAN and shares on a dedicated data VLAN is fully supported via the graphical interface.
Active Directory Integration
SMB support with Active Directory generally works well once configured. The NAS can join a domain and expose shares with standard Windows ACLs, which is essential in a personal infrastructure built around Microsoft technologies.
Limitations They Don’t Always Tell You About
Network Card Hardware Compatibility
This is likely the most critical issue for a home infrastructure that repurposes existing hardware. Since TrueNAS SCALE is based on Linux, it inherits all the historical issues the Linux kernel has with certain network chipsets.
Realtek chipsets in particular should be avoided wherever possible for interfaces handling heavy traffic. The r8169/r8168 driver often works fine at boot but can prove unstable under sustained load, which is exactly the kind of load a NAS sees during nightly backups.
The recommended chipsets for TrueNAS SCALE are:
- Intel (i210, i211, i217, i350) — perfect native support, flawless igb/e1000e drivers
- Broadcom (BCM57xx, BCM5709) — server-grade, very stable bnx2/tg3 drivers
- Chelsio — excellent Linux support
Rule of thumb: for data and bonding interfaces, investing €15–20 in a used Intel or Broadcom PCIe card saves a lot of trouble.
Network interface behavior after hardware changes
Under Linux, network interface names are determined by their position on the PCIe bus. If you change motherboards or move cards between slots, the names change (e.g., enp2s0 becomes enp8s0f0). TrueNAS retains the old configuration, which no longer matches the new interfaces—you must reconfigure it manually in Network → Interfaces.
This is not a bug; it is normal Linux behavior, but it is something to anticipate during a hardware migration to avoid being left without network access on the first reboot.
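Checking how the kernel named the interfaces after a swap takes seconds from the TrueNAS shell; the interface names you will see depend entirely on your hardware:

```shell
# Compact view of all interfaces and their state (iproute2, if present)
command -v ip >/dev/null && ip -br link show

# Map each interface name back to its PCIe device path;
# virtual interfaces (lo, bonds, VLANs) have no "device" link
for iface in /sys/class/net/*; do
    name=$(basename "$iface")
    if [ -e "$iface/device" ]; then
        echo "$name -> $(readlink -f "$iface/device")"
    fi
done
```

The PCIe path in the output is exactly what the predictable naming scheme encodes into names like enp8s0f0, which makes it easy to see which physical card got which name after a migration.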
SMB/Active Directory Integration: Startup Nuances
When TrueNAS is joined to an Active Directory domain, the SMB service depends on the connection to the domain controller. If the machine boots before contacting the DC (network timeout, too-fast boot), SMB starts in a degraded state and shares are inaccessible—even though they appear as "active" in the interface.
Typical symptom: a repeated alert of the type WBC_ERR_WINBIND_NOT_AVAILABLE in TrueNAS notifications.
The solution is to configure the SMB service to explicitly wait for the domain controller to become available before starting—an option that is not enabled by default but makes a huge difference in day-to-day stability.
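In the meantime, the degraded state is easy to diagnose from the shell with stock Samba tooling. The final command uses midclt, the TrueNAS middleware client; verify the service name against your version rather than restarting smbd by hand, which bypasses the middleware:

```shell
# Check that winbind can still reach a domain controller
wbinfo --ping-dc

# Verify the machine account trust secret with the DC
wbinfo -t

# Validate the domain join itself
net ads testjoin

# If SMB started before the DC was reachable, restarting it once the
# DC responds usually brings the shares back (TrueNAS middleware call)
midclt call service.restart cifs
```

If `wbinfo -t` fails while the DC is clearly up, the problem is the join itself rather than boot ordering, and re-joining the domain is the next step.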
The Test Changes → Save Workflow
One interface detail that wastes everyone’s time the first time around: on TrueNAS SCALE, network changes are not applied simply by clicking Save. You must go through Test Changes and then confirm for the configuration to actually take effect. Without this step, the changes appear to be saved but are not applied.
Our Projects: Success Stories
Bonding Migration: From Load Balance to LACP
The NAS was initially configured with a network bond in static load balance mode (round-robin) across two physical ports. Migrating to dynamic LACP (802.3ad) in coordination with the managed switch significantly improved network stability and visibility.
The procedure is simple in TrueNAS: Network → Interfaces → edit the existing bond, change the protocol to LACP. On the switch side (H3C/Comware in our case), configure the corresponding ports for dynamic aggregation.
Result: both links up and stable, aggregation correctly negotiated, effective throughput improved for multi-stream transfers.
Pitfall encountered: TrueNAS is configured by default to LACPDU Rate FAST (sending negotiation packets every second), whereas most switches are set to SLOW (every 30 seconds). This difference in timer settings can prevent one of the links from coming up correctly. The status flags on the switch side showed a port in Defaulted and Unselected states—a classic diagnosis of a timer mismatch. The solution generally involves aligning the LACP timer on both sides.
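Both sides of that negotiation can be inspected directly; bond0 is an assumed interface name here:

```shell
# TrueNAS/Linux side: full LACP negotiation state of the bond
cat /proc/net/bonding/bond0
# Look for "LACP rate: fast|slow" and the per-port "port state" flags;
# a partner section full of zeros means no LACPDUs are being received.

# H3C/Comware switch side (switch CLI, not a Linux command):
#   display link-aggregation verbose
# Ports stuck in "Defaulted"/"Unselected" point to a timer or mode mismatch.
```

Comparing the "LACP rate" reported by the kernel with the timer configured on the switch is the quickest way to confirm or rule out the mismatch described above.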
Multi-VLAN Network Architecture
The target configuration—and this is a best practice to adopt from the start—is to separate traffic across two distinct interfaces:
- Administration interface: a dedicated port on the management VLAN, with a static IP. This is the point of access for the web interface and SSH. Very light traffic, tolerant of chipset limitations.
- Data interface: the LACP bond on the data VLAN, through which all SMB/NFS shares and backups pass. Must be on high-quality network hardware.
This separation offers several advantages: traffic isolation, security (the admin interface is not exposed on the same network as the data), and the ability to modify the data bond without risking loss of access to the management interface.
Backup Synchronization: The Pull Strategy
The goal was to automatically synchronize data from a Windows server to the NAS every night. The first approach—a synchronization tool pushing data from Windows to TrueNAS—proved unstable: SMB connection issues at the start of the night, timeouts, and silent partial synchronizations.
The shift to a pull approach from TrueNAS solved everything: a bash script on the NAS mounts the Windows administrative share via CIFS, runs an incremental rsync, unmounts cleanly, and sends an HTML summary email. This script is scheduled via cron.
Advantages of this approach:
- TrueNAS controls the timing
- Less reliance on SMB stability on the Windows side
- rsync natively handles resumption and incremental syncs
- The summary email provides immediate visibility into the results
Important note: permission metadata can cause issues when rsync-ing from a Windows share to ZFS. The flags --no-perms --no-owner --no-group are often necessary to avoid permission errors that block the copy without making the script fail in an obvious way.
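The pull approach can be sketched as a short bash script; every hostname, share, path, and credential file below is a placeholder, and the email step is reduced to a one-line summary:

```shell
#!/bin/bash
# Nightly pull backup sketch: mount a Windows administrative share,
# rsync it into a local dataset, then report. Adapt every placeholder.

SRC_HOST="winserver.example.lan"
SHARE='D$'                              # administrative share (example)
MNT="/mnt/winbackup"
DEST="/mnt/tank/backups/winserver"      # hypothetical ZFS dataset path
CRED="/root/.smbcred"                   # username=/password= file, chmod 600
LOG="/tmp/backup-$(date +%F).log"

mkdir -p "$MNT"
mount -t cifs "//$SRC_HOST/$SHARE" "$MNT" -o credentials="$CRED",ro || exit 1

# Incremental sync; the --no-* flags skip NTFS ownership/permission
# metadata that does not map cleanly onto the ZFS dataset
rsync -rt --delete --no-perms --no-owner --no-group \
      "$MNT/" "$DEST/" >"$LOG" 2>&1
STATUS=$?

umount "$MNT"

# Minimal summary; the real script formats this as an HTML email
echo "rsync exited with status $STATUS, log: $LOG"
```

Scheduled via cron (or the TrueNAS Cron Jobs UI), this keeps the NAS in control of the timing, exactly as described above.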
Our Projects: Setbacks and Surprises
Motherboard Swap: Surprisingly Seamless
As part of a series of hardware upgrades, the NAS’s motherboard was replaced—moving from an LGA1155 platform to an LGA1150 platform with a newer processor. The two PCIe network cards (dual-port for the bond, single-port for admin) were physically transplanted into the new chassis.
At the first boot: TrueNAS SCALE launched normally, the ZFS pool was recognized immediately, the datasets mounted, and the SMB shares became accessible. Zero intervention on the storage configuration.
This is one of the major advantages of ZFS: the pool is attached to the disks, not to the machine. You can change the motherboard, move the disks to another system, and ZFS will restore its exact state—including ongoing transactions if a clean import is performed. For a personal infrastructure where hardware changes regularly, this is a significant assurance.
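The disk-centric design shows in how little is needed to move a pool between machines; `tank` is a placeholder pool name:

```shell
# On the old machine, before pulling the disks (optional but clean)
zpool export tank

# On the new machine: scan attached disks for importable pools...
zpool import

# ...then import by name and verify
zpool import tank
zpool status tank
zfs list -r tank
```

TrueNAS performs the import itself at boot when the disks are present, which is why the swap described here required zero manual storage work; the commands above are the fallback for a manual move.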
The only hiccup involved the network—the interface names had changed following the swap (normal Linux behavior; names are derived from the position on the PCIe bus)—but reconfiguring them in Network → Interfaces takes less than five minutes.
The snake biting its own tail: the story of Realtek
This NAS has gone through three successive motherboards, and the network administration interface has been the common thread running through the whole story, and at times a painfully tangled one.
First motherboard: a Gigabyte with a first-generation i7. The integrated NIC handled the admin interface without a single issue. Everything was fine.
Second motherboard: migration to a Gigabyte Z77 (LGA1155). That’s when everything went off the rails. The Z77’s integrated NIC was a Realtek RTL8111, and under TrueNAS SCALE, it proved unstable for the administration interface—erratic behavior, the connection wouldn’t establish properly. The workaround: a single-port Broadcom BCM5722 PCIe card (salvaged from a Dell) dedicated to admin, with the integrated Realtek NIC disabled in the BIOS. It worked, but it was a workaround.
Third motherboard: migration to a Gigabyte Z97-HD3 (LGA1150) with a fourth-generation i7. Both PCIe cards were transplanted—the dual-port Broadcom BCM5709 for the LACP data link, and the single-port BCM5722 for admin. But after the swap, the BCM5722 came up with the physical link reported as down in TrueNAS, cause unknown (a cable disturbed during the installation? a finicky PCIe slot?). As a workaround, the Z97-HD3's integrated NIC is enabled.
And then, the revelation: lspci | grep -i ethernet reveals a Realtek RTL8111 rev 06. Exactly the chipset that caused all the trouble on the Z77.
The irony: it works perfectly. The RTL8111 rev 06 is well supported by the r8169 driver under moderate use, and the admin interface doesn’t generate the heavy traffic that causes problems for this chipset. The BCM5722 goes into spare mode; the Realtek runs in production.
Takeaway: Not all Realtek chipsets are created equal, and usage matters just as much as the chipset itself. It’s the Realtek under sustained load that causes problems—not the Realtek that’s content to simply pass a few HTTP requests to the web interface.
Beyond Personal Infrastructure: TrueNAS in a Professional Environment
It would be reductive to limit TrueNAS SCALE to personal use. The platform has also proven itself in a professional context, on a significantly more demanding architecture.
Shared Storage for a Proxmox Cluster
A notable deployment, running on a version of TrueNAS SCALE Dragonfish 24.x that had just been released at the time: a blade server equipped with 600 GB SAS drives configured in a ZFS pool (RAIDZ1, the functional equivalent of RAID 5), with 96 GB of RAM dedicated to ARC. This TrueNAS server served as the shared storage layer for a three-node Proxmox cluster, with each blade having 64 GB of RAM.
The datastores were exposed via NFS and mounted directly on each node in the cluster. This architecture allowed Proxmox to manage high availability (HA) for virtual machines on multiple levels:
During normal operation, VMs were distributed across the three nodes according to placement preference rules—each VM had its preferred affinity node to optimize daily load balancing.
In the event of a planned node shutdown for maintenance, VMs would automatically migrate to their priority destination nodes, defined in advance for each VM. No manual intervention, no visible downtime for users—Proxmox orchestrated live migrations, with virtual disks remaining accessible at all times from the centralized TrueNAS storage.
This is precisely where ZFS demonstrates its true value in production: the 96 GB of RAM transformed the server into a massive cache, with virtual disk reads served almost entirely from the ARC. Read performance was remarkable for HDD hardware—and long-term stability, including during live migrations, was flawless.
What this experience confirms: TrueNAS SCALE is not just a tool for enthusiasts. With the right hardware configuration—generous RAM, robust network chipsets, and properly configured ZFS—it can serve as shared storage in a production environment without being outclassed by much more expensive commercial solutions. And on a version as recent as Dragonfish at launch, stability was right on point.
Practical Recommendations
Regarding hardware:
- Prefer Intel or Broadcom network cards for data interfaces
- For admin use only, a recent Realtek (RTL8111 v6+) is acceptable
- Plan for at least 16 GB of RAM, but don't hesitate to go higher—the ZFS ARC absorbs whatever you give it and pays it back in performance. This is likely the best value-for-money upgrade for an existing NAS
- Check the available PCIe slots on the motherboard—some share lanes and don’t behave as expected
Regarding configuration:
- Separate the admin and data interfaces from the start
- If the NAS is joined to an AD domain, configure the SMB service to wait for the domain controller before starting
- For synchronization with Windows sources, prefer pull rsync from TrueNAS over push from Windows
- Never skip the Test Changes step when applying a network configuration; saving alone does not make it take effect
- For rsync to ZFS, consider permission flags if the source is a Windows server
Regarding expectations:
- TrueNAS SCALE is an excellent tool but requires a basic understanding of Linux
- Hardware compatibility is much better than FreeBSD (CORE) but not universal
- Active Directory integration works but requires careful configuration
- Built-in applications are convenient for light-duty services, but a dedicated VM is still preferable for critical services
Conclusion
In 2026, TrueNAS SCALE is one of the best solutions—whether for a personal infrastructure or a mid-sized professional environment—for combining robustness, flexibility, and controlled costs. ZFS alone justifies the choice, and the web interface covers 95% of daily needs without a command line.
But like any powerful system, it rewards those who take the time to understand how it works rather than those who expect perfect plug-and-play. The projects documented here have taught us as much about TrueNAS itself as they have about the intricacies of networking, Linux permissions, and hardware reliability.
And sometimes, the solution to a complex problem is a Realtek chip that works perfectly well. Irony has a say in any self-respecting DIY infrastructure.
Article written based on real-world experiences—any resemblance to nights spent debugging is entirely intentional.