You don’t need to use either NTFS or exFAT. These are non-native file systems on Linux and go through a slower compatibility layer (FUSE, in the case of ntfs-3g). For NTFS, specific mount options can help with performance and may be required to get permissions right. big_writes is the performance option, and there are also options, such as windows_names, that block file names which are legal on Linux but not on Windows.
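As a sketch, a manual ntfs-3g mount with those options might look like this. The device, mount point, and uid/gid values are placeholders you’d substitute for your own (uid=1000 assumes the first regular user account):

```shell
# Mount an NTFS partition through ntfs-3g with larger write buffers
# (big_writes) and Windows-safe file name enforcement (windows_names).
# /dev/sdb1 and /mnt/usb are placeholders -- substitute your actual
# device and mount point. uid/gid make the files appear owned by your
# user, since NTFS doesn't store Unix ownership natively.
sudo mount -t ntfs-3g -o big_writes,windows_names,uid=1000,gid=1000 /dev/sdb1 /mnt/usb
```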
OTOH, if you format the partition on the disk with ext4, you’ll get native performance and native file permissions. Unix systems care about file permissions since they are multi-user from the ground up. Formatting is a destructive process, so any data on the partition will be lost.
As for performance, USB-attached storage will always be slower than internal storage. I’ve found that the 2.5 inch “portable” HDD options are consistently slower than a powered 3.5 inch HDD connected over USB. There are SSDs in 2.5 inch enclosures that offer excellent performance, but SSD storage is about 2x more expensive. I don’t use SSD storage for media due to the expense.
For media files that aren’t copied over and over, quality flash storage should be fine. The native Linux file system for flash is f2fs, but exFAT shouldn’t be too bad either. Both need support packages installed with APT. The names are: f2fs-tools, and exfat-utils / exfat-fuse (on newer releases with in-kernel exFAT support, exfatprogs replaces the latter two).
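Installing those support packages is one command. Exact package names vary a little between releases, so treat this as an example rather than gospel:

```shell
# f2fs userspace tools plus the exFAT FUSE driver and utilities.
# On releases whose kernel has native exFAT support, install
# exfatprogs instead of exfat-utils / exfat-fuse.
sudo apt install f2fs-tools exfat-utils exfat-fuse
```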
If I didn’t use NFS over the network to access media on a Linux server, I’d get an 8TB external, powered, USB3 enclosure, split it into 2 partitions of 4TB each (my backup storage is 4TB sized), and format each partition as ext4. Those commands are
sudo mkfs -t ext4 /dev/sdZ1 and
sudo mkfs -t ext4 /dev/sdZ2
where the “Z” needs to be discovered from the dmesg output (or from lsblk). Then add each partition + file system to the /etc/fstab file so it gets mounted automatically at boot. I’d use the UUID for the mount. I’d mount them at /D1 and /D2, but mount points are a personal decision. Linux has a Filesystem Hierarchy Standard, which I’d try to follow. Google that term for where things belong.
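Putting those steps together, finding the UUIDs and writing the fstab entries would look roughly like this. sdZ1/sdZ2, the UUIDs, and the /D1 and /D2 mount points are all placeholders:

```shell
# See the disk, its partitions, file systems, and UUIDs at a glance
# (the new disk also shows up in dmesg right after you plug it in).
lsblk -f

# Print just the UUID of each new file system.
sudo blkid /dev/sdZ1 /dev/sdZ2

# Create the mount points once.
sudo mkdir /D1 /D2

# Then append lines like these to /etc/fstab (the UUIDs below are
# made-up placeholders -- use the values blkid printed):
#   UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /D1  ext4  defaults  0  2
#   UUID=yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy  /D2  ext4  defaults  0  2

# Test the entries without rebooting; errors here mean fix fstab
# before you restart.
sudo mount -a
```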