Centralizing user/group management for Mac OSX with Zentyal

I have been following Zentyal since their eBox days, and Samba 4 since 2006. But finally, with the release of Zentyal 3.0 and Samba 4, it appears that my dream of an open-source, drop-in replacement for Windows Active Directory has come true. Centralized management of computers, users, and groups for both Windows and Mac, for FREE! and with the option of enterprise-level support. It won't be long before this takes off.

I would describe Zentyal as an intuitive and friendly front-end interface for managing a Linux server. It is specifically designed to run atop the Ubuntu Server Linux distribution, which happens to be my favorite small and medium-sized business server platform. Samba 4 is a major rewrite that enables Samba to be an Active Directory Primary Domain Controller, participating fully in a Windows Active Directory Domain. Read more about Zentyal here and Samba 4 here.

The first part of this series covers using Zentyal's directory services to store and manage users/groups in a centralized database so users can log in to both Windows and Mac machines with the same password. Getting Windows machines to authenticate to Zentyal is easy enough and well documented, but making it work with the Mac can be tricky. I am using Mac OSX 10.8 "Mountain Lion" in this example.

  1. The first thing you might want to do is install Zentyal. This article assumes you are able to install the Users & Groups module, configure LDAP server, and add a user.

At first I tried to get the Mac to work with the Samba 4 service using OSX's Active Directory plugin. It immediately failed with an error message, and I spent all of two minutes on it before I stopped and realized something: Zentyal keeps its users in an OpenLDAP database alongside Samba 4. Why go through all the trouble of using the Active Directory plugin and mucking up my system with Microsoft tech? LDAP authentication is included and supported by Mac OSX!

One thing to note before going forward: Samba 4 takes over LDAP's default port (tcp/389), so Zentyal runs OpenLDAP on the non-standard port tcp/390.
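
If you want to confirm this from the Zentyal server itself, a quick look at the listening ports will show it (a sketch; expect Samba on tcp/389 and slapd on tcp/390, though tool output varies by release):

```shell
# On the Zentyal server: list listeners on the two LDAP ports.
# Samba 4 should own tcp/389 and OpenLDAP (slapd) tcp/390.
sudo netstat -tlnp | egrep ':(389|390) '
```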

  2. Zentyal's firewall blocks LDAP by default. Navigate to this URL and change the rule to Allow: https://zentyalserver/Firewall/View/InternalToEBoxRuleTable


  3. Let's get OSX to be happy with simple binding to the LDAP server by disabling SASL. This is required for OSX 10.7 and 10.8. More information about this step here and here.
    Search your LDAP server’s RootDSE for the advertised SASL methods:

    $ ldapsearch -x -p 390 -LLL -h zentyalserver -b "" -s base "(objectclass=*)" supportedSASLMechanisms
     supportedSASLMechanisms: DIGEST-MD5
     supportedSASLMechanisms: CRAM-MD5
     supportedSASLMechanisms: NTLM
  4. Now we make it so LDAP authentication requests from the Mac client will not attempt to use the aforementioned SASL mechanisms. The following three commands fix this by modifying /Library/Preferences/OpenDirectory/Configurations/LDAPv3/yourldapserver.plist. Make sure you edit these commands to match the name of the .plist on your system.
    sudo /usr/libexec/PlistBuddy -c "add ':module options:ldap:Denied SASL Methods:' string DIGEST-MD5" /Library/Preferences/OpenDirectory/Configurations/LDAPv3/zentyalserver.plist
    sudo /usr/libexec/PlistBuddy -c "add ':module options:ldap:Denied SASL Methods:' string CRAM-MD5" /Library/Preferences/OpenDirectory/Configurations/LDAPv3/zentyalserver.plist
    sudo /usr/libexec/PlistBuddy -c "add ':module options:ldap:Denied SASL Methods:' string NTLM" /Library/Preferences/OpenDirectory/Configurations/LDAPv3/zentyalserver.plist
  5. Now we configure OSX for LDAP. Open "System Preferences -> Users & Groups -> Login Options -> Edit… -> Open Directory Utility." Double-click "LDAPv3" and click "New". Enter your server's name and click "Manual".
    Use custom port "390".
  6. Choose "RFC2307" from "Access this LDAPv3 Server using." Find your Search Base Suffix in /etc/ldap.conf on the Zentyal server and enter it into the box.
    $ egrep '^base' /etc/ldap.conf
    base dc=zentyal,dc=mycompany,dc=com
  7. Click through "Record Types and Attributes" -> Users -> NFSHomeDirectory. Change it to map to: #/Users/$uid$. This tells OSX to create a local home directory.
  8. Click the "Security" tab and check "Use authentication when connecting." Enter the Distinguished Name and Password that you find in /etc/ldap.conf.
    $ egrep '^(binddn|bindpw)' /etc/ldap.conf
    binddn cn=zentyalro,dc=zentyalserver,dc=mycompany,dc=com
    bindpw wOTaarFLbZSbW0@q9RvH


  9. Click OK and you should now be able to connect to the directory and list users from its database.
    $ dscl /LDAPv3/zentyalserver -list /Users
  10. Try to "su" as a user that you added with the Zentyal Users & Groups module.
    osxclient:~ localuser$ su - joe
    bz101:~ joe$
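
To double-check your work, you can read back the Denied SASL Methods you wrote earlier and pull a single record through Open Directory (the .plist name and the user "joe" are examples; match them to your system):

```shell
# Read back the Denied SASL Methods array written with PlistBuddy above
sudo /usr/libexec/PlistBuddy -c "Print ':module options:ldap:Denied SASL Methods:'" \
  /Library/Preferences/OpenDirectory/Configurations/LDAPv3/zentyalserver.plist

# Read one user record through the LDAPv3 node
dscl /LDAPv3/zentyalserver -read /Users/joe
```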


Don't panic if that last step doesn't work. I found that, at least in OSX 10.8, although you set "Custom Port 390," the setting is ignored and the client continues to use the default LDAP port 389. I filed a bug report with Apple Enterprise Support. They confirmed it and provided me a workaround.

  1. Delete the dynamic data plist file for the server:
    sudo rm -i /Library/Preferences/OpenDirectory/DynamicData/LDAPv3/ldap.example.com.plist  # change the filename to match your system
  2. Restart opendirectoryd:
    sudo killall opendirectoryd
  3. Double-check that Custom Port 390 is still set. If not, just enter it again.
  4. It should now work.

My All-in-One Datacenter In A Box

For the past two years I had two lab servers at home. One was a VMware ESXi host, the other was a Solaris-based fileserver. Then I learned about an interesting way to combine the two into one when I came across the “All-in-One Server,” a concept written up by the author of Napp-It.

I would describe it as this: You install ESXi on a server like you normally would and create a virtual guest. You use this guest to install your favorite fileserver OS (I chose OpenIndiana). This virtual fileserver is given direct access to the hard drives so you can configure software RAID such as mdadm, ZFS, or any choice of your own. This is thanks to Intel VT-d or AMD-Vi, which gives your virtual guest direct access to the PCI disk controller (host bus adapter) that your disks are attached to. Your virtual fileserver then presents this storage back to ESXi for you to install all of your other virtual guests on. And there you have it: your virtual machines now reside on a virtual NAS (NFS) or SAN (iSCSI) with virtual 10GbE connectivity to ESXi.
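
To make the loop-back concrete, here is a sketch of the two halves (the dataset name, subnet, and IP are hypothetical, and the esxcli syntax shown is from ESXi 5.x, so adjust for your version):

```shell
# On the OpenIndiana guest: create a dataset and share it over NFS
zfs create tank/vmstore
zfs set sharenfs='rw=@192.168.1.0/24,root=@192.168.1.0/24' tank/vmstore

# On the ESXi host: mount the guest's export as an NFS datastore
esxcli storage nfs add -H 192.168.1.10 -s /tank/vmstore -v vmstore
```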

Of course there are some downsides. For one, it is absolutely not a viable solution for hosting a production system. I'd venture to say the concept is pretty radical and no vendor would ever support it. Naysayers may complain that combining hypervisor and storage creates a single point of failure, or that if the virtual fileserver crashes then all the guests crash with it.

However, for my home lab it still seemed almost perfect. Replacing two boxes with one would save space, clean up a mess of cables, save on power consumption, and reduce heat and noise. I could now increase the speed from hypervisor to storage ten-fold, from 1GbE (physical) to 10GbE (virtual). Virtualizing the fileserver gives me flexibility I never thought I would have. And the best part: no need for a big, expensive hardware RAID controller anymore. We all know how picky ESXi is about that.

So, I decided to do it. Here is my build sheet. It is based almost completely off of Gea's manual (from the napp-it project I mentioned earlier). Also many thanks to [H] forums and its members for their wealth of information regarding ESXi, VT-d and PCI-passthrough, the IBM M1015 SAS HBA, ZFS, and pretty much anything else surrounding this project.

Qty  Item             Description
1    CPU              Intel Xeon E3-1230 3.2GHz LGA 1155 Quad-Core
1    Motherboard      SUPERMICRO MBD-X9SCM-F-O LGA 1155 Intel C204
4    RAM              4x 4GB (16GB) ECC Unbuffered DDR3 1333
1    Chassis          Fractal Design Define Mini Black Micro ATX
1    Power Supply
1    RAID Unit        Orico 229rsh 2x 2.5" SATA with RAID
2    2.5" SATA disks  7200RPM SATA 2.5"
1    Hotswap Bay      StarTech 5.25" to 4x 2.5" SAS Hot Swap Bays
2    HBA              IBM M1015 8-port flashed with IT firmware
4    SAS disks        Seagate Savvio 10K.3 146GB 10K RPM 2.5" SAS
6    SATA disks       Western Digital WD Green WD20EARX 2TB SATA

Here is a breakdown of reasons why I chose these items:

  • CPU- Only server grade processors (Xeon) support VT-d
  • Motherboard- Single socket motherboards are awesome for a low power server. This one has IPMI for remote management over LAN too.
  • RAM- ZFS loves ECC RAM
  • Chassis- I wanted a real server chassis but couldn't find one that was good enough for my home. I wanted something small, sleek, sexy, and quiet with only 120mm fans or bigger. I "power searched" NewEgg for days and couldn't find anything. I chose Fractal Design based on my previously mentioned requirements, plus its small MicroATX form factor, its capacity for 6x 3.5″ drives, and 2x 5.25″ bays for accessories such as the RAID unit and 2.5″ drive bays.
  • RAID Unit- This stores your bootable ESXi installation and the VMFS datastore for your virtualized fileserver. I had options:
    1. Just a single SATA disk. Very simple, but ten years ago I vowed never to trust a single hard drive (or SSD) for any important function.
    2. USB flash drive. ESXi is designed for low writes but the fileserver OS that I chose was not, so a USB flash drive would soon reach its write limit.
    3. Two disks mirrored with the mobo's onboard software RAID. This won't work; ESXi requires a hardware RAID card.
    4. Hardware RAID card. Expensive. Hot. It's just one more device prone to failure, and the BBU requires replacement every 2-3 years.
    5. Dual disk enclosure with built-in RAID. These are software-based RAID controllers with port multiplication. They perform RAID functions upstream from the SATA port they are plugged in to. Rather than being configured at the BIOS or OS level, it is configured via a DIP switch, usually on the rear of the unit. It is completely invisible to the OS, so ESXi will never know that there is a RAID array behind the "SATA device" that it is booting from.
  • SAS/SATA Host Bus Adapter- This device will be configured for PCI-passthrough and presented directly to your virtual fileserver. If you want to use a Solaris-based or Linux OS, then check out Zoraniq’s blog post on this subject. I chose the LSI SAS 2008 card that IBM re-branded as M1015. System pulls can be found on eBay for much less than the LSI branded card. LSI 1068e is also a good choice but has a 2TB drive limit.
  • SAS and SATA disks- These disks will be connected to the SAS/SATA HBA, which makes them visible to your virtual fileserver to configure for RAID (e.g., Linux) or ZFS (e.g., Solaris). I chose ZFS and wanted two storage pools, aka zpools:
    1. Nearline Storage (SATA). This is your somewhat average storage with low I/O requirements for infrequently accessed archive data (movies, music, photos, etc). I bought six of the cheapest 2TB drives I could find and, after downloading Grant Pannell's zpool binary replacement for 4K disks, created a RAID6-like zpool with this command:
      # zpool create tank raidz2 [disk1] [disk2] [disk3] [disk4] [disk5] [disk6] # Change [diskN] to match your system
      # zpool status tank
        pool: tank
       state: ONLINE
        scan: scrub repaired 0 in 16h42m with 0 errors on Sat Sep  8 16:34:09 2012
              NAME                       STATE     READ WRITE CKSUM
              tank                       ONLINE       0     0     0
                raidz2-0                 ONLINE       0     0     0
                  c2t50014EE2B08EB028d0  ONLINE       0     0     0
                  c2t50014EE2B08EBDCDd0  ONLINE       0     0     0
                  c2t50014EE25B9AED71d0  ONLINE       0     0     0
                  c2t50014EE25B38FD69d0  ONLINE       0     0     0
                  c2t50014EE25B390134d0  ONLINE       0     0     0
                  c2t50014EE05826FA2Cd0  ONLINE       0     0     0
    2. Fast Storage (SAS). I wanted storage for I/O intensive data, like the virtual guests, databases, etc. I bought four 10K SAS disks for a RAID10-like zpool that I created with this command:
      # zpool create sastank mirror [disk1] [disk3] mirror [disk2] [disk4] # Change [diskN] to match your system
      # zpool status sastank
        pool: sastank
       state: ONLINE
        scan: scrub repaired 0 in 0h25m with 0 errors on Sat Jul 28 13:06:52 2012
              NAME                       STATE     READ WRITE CKSUM
              sastank                    ONLINE       0     0     0
                mirror-0                 ONLINE       0     0     0
                  c2t5000C50005BDC58Fd0  ONLINE       0     0     0
                  c2t5000C5000AC59CD3d0  ONLINE       0     0     0
                mirror-1                 ONLINE       0     0     0
                  c2t5000C50005BDD7FBd0  ONLINE       0     0     0
                  c2t5000C5003BA6376Bd0  ONLINE       0     0     0
  • 4×2.5″ SAS Hotswap Bay- This is simply an enclosure for the 2.5″ SAS drives mentioned above.
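
Since the whole point of the patched zpool binary was 4K sector alignment, it is worth confirming the pool's ashift after creation (a sketch; zdb output format differs between ZFS releases):

```shell
# ashift=9 -> 512-byte sectors, ashift=12 -> 4K (4096-byte) sectors
zdb -C tank | grep ashift
```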

After it was built I referred to Gea's manual for the process of installing and configuring the software, storage, and virtual networking. It was up and running in one weekend. OpenIndiana exports NFS storage to ESXi over a virtual 10GbE connection. It is fast and stable and I am very happy with it.

There are several more things I would like to add to this post in the future:

  • Photos of the build, and an illustration or infographic of the “all-in-one” concept
  • Benchmark results to prove my claim that it is “fast and stable”
  • The concept of virtualizing networking and virtualizing software-based firewalls like m0n0wall or pfSense
  • vSwitches and VLAN-tagging, and using them to create WAN, LAN, DMZ segments to secure and speed up your network
  • The possibility of using SSD for ZIL and L2ARC cache

Speed up MacOS X file transfers over the network

We had an issue with a group of MacOS X clients on a particular VLAN that were experiencing painfully slow transfers (~100KB/s) to our Solaris fileservers running netatalk (AFP). The problem was solved by tweaking a kernel parameter on the client that made it “more compatible.”

delayed_ack=0 responds after every packet (OFF)
delayed_ack=1 always employs delayed ack, 6 packets can get 1 ack
delayed_ack=2 immediate ack after 2nd packet, 2 packets per ack (Compatibility Mode)
delayed_ack=3 should auto detect when to employ delayed ack, 4 packets per ack. (DEFAULT)

Get the current value of net.inet.tcp.delayed_ack (the default is 3):

$ sysctl net.inet.tcp.delayed_ack
net.inet.tcp.delayed_ack: 3

Let's try changing it to 2; you should immediately notice a difference (this worked for us; you may also try values of 0 and 1):

$ sudo sysctl -w net.inet.tcp.delayed_ack=2
net.inet.tcp.delayed_ack: 3 -> 2

I'm not sure why OSX ships with a default of "3." Our software vendor did not know either, nor did our enterprise Apple support. All I do know is that Google is filled with people experiencing the same problems as far back as ten years ago. This fix is good for all sorts of network weirdness and compatibility issues with Samba, netatalk, FreeBSD, Solaris, Windows, etc.

To make this persistent across reboots you will need to use /etc/sysctl.conf. If this file does not exist, the following command will create it.

$ echo "net.inet.tcp.delayed_ack=2" | sudo tee -a /etc/sysctl.conf
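
You can then sanity-check both the live value and what will be applied at the next boot:

```shell
# Current live value (should print 2 after the change above)
sysctl -n net.inet.tcp.delayed_ack

# What sysctl.conf will apply at boot
grep delayed_ack /etc/sysctl.conf
```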

You can read more here:
TCP Performance problems caused by interaction between Nagle’s Algorithm and Delayed ACK

Slow Linux router performance with VMware VMXNET3 adapter

I have a Linux guest under VMware ESXi 4.1 that runs OpenVPN and therefore acts as a gateway/router into the network. For weeks I was troubled with network slowness and weirdness. I knew OpenVPN wasn’t the problem because it’s fairly easy to configure and I’ve done dozens of similar deployments without any issues. Then I started reading some posts on the VMware forums about people complaining their Linux routers are basically unusable when using VMXNET 3.

I did happen to be using the VMXNET 3 adapter on this guest. It turns out this is a documented issue. The suggested workaround is to set the module parameter "disable_lro=1". I tried to do it using this guide but it didn't seem to help me. For me, the solution was reverting to the more compatible E1000 adapter.

Read more about the issue in the vSphere 4.1 Release Notes:

Poor TCP performance can occur in traffic-forwarding virtual machines with LRO enabled

Some Linux modules cannot handle LRO-generated packets. As a result, having LRO enabled on a VMXNET 2 or VMXNET 3 device in a traffic forwarding virtual machine running a Linux guest operating system can cause poor TCP performance. LRO is enabled by default on these devices.

Workaround: In traffic-forwarding virtual machines running Linux guests, set the module load time parameter for the VMXNET 2 or VMXNET 3 Linux driver to include disable_lro=1.
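
For reference, applying that load-time parameter on a typical Linux guest looks something like this (the modprobe.d file name is my own convention, and as noted above this route did not pan out for me, so treat it as a sketch):

```shell
# Persist the parameter so it applies whenever the driver loads
echo "options vmxnet3 disable_lro=1" | sudo tee /etc/modprobe.d/vmxnet3.conf

# Reload the driver to apply it now (briefly drops that NIC)
sudo modprobe -r vmxnet3 && sudo modprobe vmxnet3
```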

My Solaris ZFS Home Server

I'm gonna start this new blog off right and dedicate my first post to my new Solaris-based NAS/SAN build with the ZFS filesystem. Since this is a home server my goals were the usual: low cost/power/noise/heat/etc. For these reasons I chose a smaller mATX server motherboard and a cost-effective Core i3-530 CPU combo. This CPU, when paired with the Intel 3420 chipset on the motherboard, can make use of ECC memory, which ZFS greatly benefits from.

The 6x 750GB RAID-Z (similar to RAID5) yields 3.3TB of usable space. Attached to the LSI 1068e-based Dell SAS6/iR adapter that I won on eBay for $25, the disks write at ~140MB/s and read at ~320MB/s. This build is fully capable of saturating gigabit ethernet in both directions.

OS             Oracle Solaris 11 Express
Motherboard    Supermicro X8SIL-F
CPU            Intel Core i3-530
RAM            Kingston 4GB (2x 2GB) SDRAM ECC Unbuffered DDR3 1333
Case           Athena Power CA-SWH01BH8
Disk Adapter   Dell SAS 6/iR PCI-Express x8 SAS Controller
zpool (rpool)  2x (mirror) 74GB Western Digital Raptor 10000 RPM
zpool (tank)   6x (RAID-Z) 750GB Seagate Barracuda ES

A quick write test, then a read test, watching zpool iostat:
$ pfexec mkfile 20g /tank/speedtest &
$ zpool iostat tank 30
 capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        2.89T  1.17T     60      7  6.74M  97.4K
tank        2.90T  1.16T      1  1.28K  6.60K   138M
tank        2.90T  1.16T      0  1.33K     17   144M
tank        2.91T  1.16T      0  1.23K  10.8K   133M
tank        2.91T  1.15T      0  1.24K  2.70K   130M
$ pfexec dd if=/tank/speedtest of=/dev/null bs=16k count=1000000 &
$ zpool iostat tank 30
 capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        2.91T  1.15T     60      7  6.74M   127K
tank        2.91T  1.15T  2.23K     25   285M   123K
tank        2.91T  1.15T  2.50K      0   320M      0
tank        2.91T  1.15T  2.30K     31   295M   167K
tank        2.91T  1.15T  2.44K      0   312M      0
tank        2.91T  1.15T  2.39K     30   306M   172K