Sunday, March 15, 2009
Thunderbird's spam & junkmail filtering
SMTP first, IMAP/POP second; content/spam filtering is still to come.
Postfix has sender address verification enabled, along with a few strict smtpd checks, and this works well to filter out the really obvious stuff from many zombies, but as expected with dspam out of the picture I was getting spam coming through to my inbox.
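For context, the kind of smtpd restrictions I'm talking about look roughly like this in main.cf (a sketch only, not my exact config; the particular restriction choices are illustrative):

smtpd_helo_required = yes
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_non_fqdn_sender,
    reject_unknown_sender_domain,
    reject_unauth_destination,
    reject_unverified_sender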
I use Thunderbird on the desktop, and while working up the energy to have a good run at setting up proper spam filtering server-side, I decided to see if Thunderbird's junkmail filtering system actually did anything useful. It was quick to set up and took minimal effort to train, so I figured it wouldn't hurt to give it a shot.
To give some context, I'm not getting a lot of spam hitting my inbox (compared to many), we're only talking about 30-40 per day. Enough to be annoying when you're used to getting only 1 every 2-3 days, as was the case with dspam in charge.
I enabled Thunderbird's junkmail feature and started tagging spam by hand to train it. I configured it to move mail to a /junk folder, but I didn't mark it as read, as I wanted to visually get a handle on how it was doing.
Fairly early on into the training I noticed that around half of my spam had a common trait - it was all to and from my own email address.
Since I only email myself (using the same to/from address pair) when testing something, this immediately lent itself to a very simple and obvious mail filtering rule:
If mail is
from: me@me.com and
to: me@me.com, then
1. Mark it as junk
2. Move it to the junk folder
3. Mark it as read
This very simple rule provided some automatic training data, and cleared my inbox of obvious junk without requiring any intelligence on the part of the junk-mail engine.
I've been using Thunderbird's filtering for about a week now, and so far I've only had 1 false positive, and maybe 5 false negatives.
All things considered (and the very small sample of messages and short training period are big factors), I'm very impressed with the performance. So much so that I'm wondering just how much work it's worth putting into a serverside anti-spam system.
Good job Mozilla!
If, like me, you assumed that a client-side junkmail filter is likely something of a toy, I encourage you to actually give it a shot. I'm a convert!
roundcube webmail
I'd come across roundcube in my surfing once before, but it was a recent positive mention on one of the blogs that I follow that prompted me to check it out properly.
Roundcube can be found at www.roundcube.net and it comes in at a teeny 1.6MB tar.gz compared to horde's 26MB. OK yes, horde does a lot more than just webmail for that 26MB, but webmail was after all the only thing I actually wanted....
With a working apache and mysql instance already set up, adding another vhost and database for roundcube was really simple: a 10 minute job, tops.
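For the record, the database side is just a couple of statements; something like the following (the roundcubemail names match what I used below, and the password is obviously a placeholder):

mysql> create database roundcubemail;
mysql> grant all privileges on roundcubemail.* to 'roundcubemail'@'localhost' identified by 'passwordhere';
mysql> flush privileges;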
The initial configuration & "installation" is done entirely through the web interface by accessing the /installer directory.
This takes you through a nice systems requirement check screen, followed by an easy to use 3 step wizard which creates the rather complex config files for you.
You can either download them using the provided link, or just copy/paste the contents across from the text box onscreen, which is what I opted for. I was already sshed into my webserver anyway, so pasting into vim was quickest.
You also have to create log and temp directories that your webserver/roundcube instance has write access to.
It really was very painless to get through all that.
At the end of the wizard you can even test the configuration: smtp, imap, and database access. This is one seriously polished product - top marks guys!
I did run into database issues again when trying to load the front page. The roundcube error log (/logs/errors) plainly pointed to the cause though: the database tables didn't exist.
Just like last time I seem to have managed to start off with an empty database. I still don't know what is causing this, but the fix was the same as with horde:
mysql -u roundcubemail -ppasswordhere roundcubemail <>
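(For anyone following along, the full command is something along these lines, assuming roundcube's bundled mysql schema file and wherever you unpacked the tarball - both of those are assumptions on my part:)

mysql -u roundcubemail -ppasswordhere roundcubemail < /srv/www/vhosts/roundcube/SQL/mysql.initial.sql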
With the database initialised, roundcube was ready for action.
And that's it!
I've been playing with it for 15 minutes so far and really love it :)
Do check it out :)
Wednesday, March 11, 2009
Horde webmail
My previous webmail software was squirrelmail. It worked very well, but it was also very basic. I'd seen horde, and it did look pretty darn sexy so I started there this time around.
I downloaded horde groupware webmail edition, as this version is meant to be prepackaged and easier to roll out for specifically webmail related use.... which is what I needed.
Reading through the docs/INSTALL was a bit daunting, and it really suggested that horde requires a proper database backend such as mysql... which was really a bit more configuration than I wanted for a simple webmail platform. Oh well.
I installed apache2, php5, and mysql5 using yast.
Upon starting mysql for the first time it helpfully warned me about the default mysql security, and that the user accounts needed tidying up, which I did.
Setting up the apache vhost next was dead easy! I just can't believe how easy it was compared to previous attempts.
Suse provides a very well documented template in /etc/apache2/vhosts.d/vhost.template, and it worked first try.
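If you haven't seen one of these before, the stripped-down result is something like this (the hostname and log names are just examples; the docroot matches where I unpacked horde):

<VirtualHost *:80>
    ServerName webmail.example.com
    DocumentRoot /srv/www/vhosts/horde
    <Directory /srv/www/vhosts/horde>
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
    ErrorLog /var/log/apache2/horde-error_log
    CustomLog /var/log/apache2/horde-access_log combined
</VirtualHost>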
I next ran the horde configure script from ./scripts/setup.php.
For some reason I ran into issues creating the initial mysql database using the script, so I had to manually pump the mysql setup script into mysql using
mysql -u root -p******* < /srv/www/vhosts/horde/scripts/sql/create.mysql.sql
With that done I was able to run the setup script again, to simply create the tables (rather than the initial DB too). This worked fine.
The default horde mysql access is using a preset username and password pair, so using the mysql command line I went into mysql and created a user account specifically for horde to use, with a different password.
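Creating the dedicated account is a one-liner; something like the following (the user/database names and password here are just examples):

mysql> grant all privileges on horde.* to 'horde'@'localhost' identified by 'newpasswordhere';
mysql> flush privileges;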
Finally I reran the configure script again to use the new username.
The login worked correctly. As it happens my imap server is on localhost and presumably that's where horde looks for it, which is fortunate as I wasn't asked about that at any time. It's probably in an easy to change file somewhere though.
And that's horde. Day to day use and configuration is something I'm sure you can work out on your own :)
Saturday, March 7, 2009
Need new mail server. Right now.
The old one was poked: bad file system corruption, and it just wasn't worth trying to fix. It was very old and had been dying for a while anyway. All of the actual maildir mail was on my solaris nfs server, so nothing was lost in that sense, but the job of building a new server to serve that mail up again has been something that I've been dreading.
I've been putting off rebuilding/migrating to a new mail server infrastructure because I still clearly remembered just how long it took to build the first time around. The first time around I built pretty much everything by hand, and taught myself ldap along the way.
With mail queuing up around the world, and not just for me but also for friends and family too for whom I host mail, it was time to build a new mail server.
My initial immediate reaction was that I needed to fire up a new Solaris zone and to roll some sleeves up.
Before embarking on the arduous quest, I took a break for food and hopefully some near-divine inspiration....which didn't come.
What did come however was some perspective on my planned masochism.
[Open]Solaris doesn't come with courier-imap, postfix, amavisd, spam assassin... in fact it pretty much doesn't come with anything that I actually need to build a modern day open source mail server platform.
I thought this was somewhat ironic. Solaris has always been a proud server OS, which is why many people complain that it's not yet ready for the desktop (there is so much desktop software still missing), and yet here I am realising that the mail server lineup isn't looking flash either. In fact it's worse than not flash: it actually ships with sendmail... *shudder*
Sun does have the monstrous Sun Java System Messaging Server, but that was way bigger and more complex than I probably needed, and while in this case it wouldn't have been a problem for me, it's not free/oss software.
So, on to building all this software by hand again, in a non-GNU environment. Fun!
No, this is 2009, a year where more and more people are proclaiming that the OS doesn't matter any more... and you know what, they're right.
I think the measure of a good SA is using the right tool for the job, not just stubbornly and religiously sticking to a belief in a near-holy, faultless OS of choice.
There are a number of FOSS operating systems out there, and while I love Solaris, I'm big enough to not only admit when it's not the answer, but to actually proclaim when it's not the right tool for the job - and this is that job.
For shame Solaris.... FOR SHAME!
No, the answer of course is to use Linux.
I've been using Ubuntu quite a bit for various tasks and installs lately so in the interests of learning something new, I decided to take another look at opensuse, this time at version 11.1.
Jumping on their website I was able to quickly search for the software that I wanted to use, and was pleasantly surprised to see that everything I wanted to use was there, and with very recent versions of said software.
I downloaded the iso via bit torrent, and installed into a VM called Athena - the goddess of heroic endeavor. (She is going to be doing battle with the evil doers of the 21st century after all. The spammers!)
Installing the distro was dead easy, and adding the software was very simple too. I won't bother outlining it.
I installed courier-authdaemon, courier-imap, amavisd, clamav and went about configuring everything.
I wanted to use nfs4, and I hit some problems that eventually went away with a reboot. I'm still not quite sure what went wrong, but I think it was something to do with the required kernel module not being loaded.
With the user names matched up and the appropriate firewall rules configured (a big plus in favour of nfs4: one static port for all nfs traffic!), the old mail store was mounted and accessible.
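For reference, the mount itself is a one-liner, and the firewall only needs TCP 2049 open towards the nfs server (the server name and paths here are just examples, not my real layout):

athena:~ # mount -t nfs4 nfsserver.example.com:/export/mail /srv/mail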
Setting up courier-authdaemon took quite a few tries to get right; my authentication and mail information is all stored in an ldap directory.
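The trial and error was mostly in the ldap lookup settings; the key authldaprc directives end up being something along these lines (directive names come from courier-authlib and may vary slightly between versions, the file location varies by distro, and the values are purely illustrative):

# authldaprc (illustrative values)
LDAP_URI        ldap://ldap.example.net
LDAP_BASEDN     o=hosting,dc=example,dc=net
LDAP_BINDDN     cn=manager,dc=example,dc=net
LDAP_BINDPW     secret
LDAP_MAIL       mail
LDAP_HOMEDIR    homeDirectory
LDAP_MAILDIR    mailbox

# and in authdaemonrc, make sure authldap is in the module list:
authmodulelist="authldap"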
Once that was done, I moved on to postfix.
Postfix is pretty easy to work with, but there are a lot of lines to change in the config file, and the version that came with opensuse 11.1 (2.5.5) was a couple of major versions newer than the 2.3.x that I'd been using previously, so I took the chance to familiarise myself with some of the newer config options.
Pretty soon mail was flowing through nicely, and I left it there for the moment. Webmail and content filtering to come next.
Saturday, February 28, 2009
Openldap migration notes
Overview:
In much the same vein as my previous post, I will be migrating an openldap directory from one server to another (zone to zone in actual fact, not that the implementation is any different).
Unlike the previous mysql migration however, the source zone is running snv_95, and at that time Sun wasn't bundling openldap at all; the snv_95 zone is running a blastwave packaged copy of openldap, and it's at version 2.3.39.
The destination zone is running snv_105, which now comes with a native SUNWopenldap(r|u) 2.4.11 implementation, so this should be a nice upgrade along the way.
The source server was a dedicated zone named ldap. The destination will be my database zone (db) that we created earlier. I plan on eventually moving away from using openldap altogether, but for tonight I just simply needed to move it between servers so I could remove the last of the zones from what is actually primarily my file server.
The migration
On the source server, I dumped the entire directory using:
[root@ldap ~]# ldapsearch -b dc=griffous,dc=net -h ldap.griffous.net objectClass=* > ldapdump.ldif
This works much in the same way as mysqldump as it happens.
Next I manually edited across my changes from the defaults in the openldap /etc config files:
/opt/csw/etc/openldap/ldap.conf to the new /etc/openldap/ldap.conf
/opt/csw/etc/openldap/slapd.conf to the new /etc/openldap/slapd.conf (the main file)
I'd just like to highlight again that this openldap directory exists solely to serve my existing mail infrastructure, so I was able to configure it in an identical fashion, including all schemas and copying across password hashes.
Were I starting again with this, I would probably look at setting up full SSL and better security. Tonight we're just moving it, warts and all.
I also had to scp across authldap.schema, since that's a non-default schema.
With the config files modified to suit, start the server with
root@db:/etc/openldap# svcadm enable ldap/server
I ran into issues at this point, and the smf and dmesg logs were no help at all. I ran the command that smf was using to start the server, with -d 5 (debug), to print out some extra information.
This printed out:
....
<= ldap_dn2bv(cn=Manager,dc=griffous,dc=net)=0
/etc/openldap/slapd.conf: line 63:
slapd destroy: freeing system resources.
slapd stopped.
My initial suspicion was that it was something to do with capitalisation, but this turned out not to be the case. Line 63 was the root password (rootpw) line, which also threw me for a bit.
After head scratching and looking at the rest of the debug log I spotted the problem further up the log. In the openldap config file I had entered
database bdb
suffix "dc=griffous.net,dc=net"
rootdn "cn=manager,dc=griffous,dc=net"
(The suffix line is wrong, and should have read "dc=griffous,dc=net"). Whoops, that one was clearly a pebkac mistake.
I fixed that, but still the server wouldn't start! The tail end of the debug this time printed:
unable to open pid file "/var/run/openldap/slapd.pid": 2 (No such file or directory)
slapd destroy: freeing system resources.
slapd stopped.
Sure enough, that directory doesn't exist... Given that this was a new install using default paths, this felt like a fairly systemic issue, so I quickly searched the opensolaris bug repository and found this bug, which is exactly my problem. OKayyyyyy, so I'm on to an OS level problem rather than one of my own now!
I manually created the directory, and ran the command listed in the bug to check on the permissions.
As expected that directory needed to be openldap:openldap, which I quickly changed.
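In other words, something like:

root@db:~# mkdir -p /var/run/openldap
root@db:~# chown openldap:openldap /var/run/openldap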
Once more with feeling!
I svcadm cleared the service once more, and finally the server started properly!
The data import
Next I loaded in my data with ldapadd:
root@db:~# ldapadd -h db.griffous.net -D cn=manager,dc=griffous,dc=net -w passwordhere -f ldapdump.ldif
adding new entry dc=griffous,dc=net
adding new entry cn=Manager,dc=griffous,dc=net
adding new entry o=hosting,dc=griffous,dc=net
adding new entry ....
...
Excellent, that bit was trouble free.
In theory this new directory should be ready for production use, but given that rather worrisome bug, I wanted to test that it survived a reboot.
It didn't.
Under Solaris, /var/run is actually mounted on swap, which means that unless openldap is creating the /var/run/openldap directory when it starts, then it's going to be lost across reboots (as anything in /var/run should be).
It would seem then that openldap is not creating this directory for one reason or another when it starts each time.
The simplest answer to me seemed to be to tell openldap to store these 2 files in the /var/openldap directory that's already used by the BDB backend for directory data.
That way I don't need to mess about with changing openldap startup scripts or playing with the permissions in /var/run, as I would have needed to do if I simply changed the path to /var/run/* rather than /var/run/openldap/*.
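The relevant slapd.conf lines end up looking something like this (assuming the two files in question are the pid and args files):

pidfile   /var/openldap/slapd.pid
argsfile  /var/openldap/slapd.args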
This solved the problem, and openldap now works across reboots.
Testing
I updated the CNAME for ldap.griffous.net to point to the db zone, and carefully watched the ldap traffic while sending myself a few test emails and logging into the server with imap.
Everything seemed fine, so I shut down the original zone.
If everything stays working for another couple of days, I'll delete the old zone, formally completing the migration.
And that's how you move an openldap directory between hosts!
Friday, February 27, 2009
Quick mysql notes
I was doing a mysql migration from one zone to another machine, in the process upgrading from mysql 5.0.45 (bundled with snv_95) to mysql 5.0.67 (bundled with snv_105), and I'm just making some quick notes for reference.
First off, the migration of the database data is best done with mysqldump. Mysqldump can dump all of the tables live, and it can also be piped into mysql to push the data to the remote system rather than scping the dump across if you prefer.
Obviously you'll want to make sure that your client isn't making changes to the data at the moment of cutover.
Example command:
/usr/mysql/5.0/bin/mysqldump -u root -pshhhhhhh -B dspam | /usr/mysql/5.0/bin/mysql -u test -p -h 192.168.10.5 dspam
Out of the box, mysql is a bit wide open.
mysql> select user,password,host from mysql.user;
+------+----------+-----------+
| user | password | host      |
+------+----------+-----------+
| root |          | 127.0.0.1 |
|      |          | db        |
| root |          | db        |
|      |          | localhost |
| root |          | localhost |
+------+----------+-----------+
This tells us that root can log into mysql from 127.0.0.1, db (hostname), or localhost, and that there are no passwords needed....yikes! There are also two 'anonymous' (no user needed) accounts setup.
At least this only represents a local vulnerability....
First, let's drop all the extra accounts.
mysql> drop user ''@'localhost';
mysql> drop user ''@'db';
mysql> drop user 'root'@'127.0.0.1'; (doesn't work anyway, should be localhost)
mysql> drop user 'root'@'db'; (doesn't work anyway, should be db.hostname.domain)
That clears out the obvious ones, next update the root@localhost password.
mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('passwordhere');
And recreate the root@fqdn account.
mysql> grant all privileges on *.* to 'root'@'db.griffous.net' identified by 'passwordhere';
At this point I recommend backgrounding your existing mysql session (ctrl-Z), or using a second ssh session into the server, to attempt another parallel mysql login. If you can't get in, the foregrounded mysql connection is your last remaining chance to fix the problem, so it's worth testing before ending that session and locking yourself out completely!
As far as server configuration goes, the my-small.cnf configuration file is used by default, which is designed for systems with less than 64MB of RAM. I would change this to at least my-medium.cnf, which uses up to 128MB. (my-large.cnf uses 512MB btw)
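Switching is just a matter of copying the sample over your my.cnf and restarting mysql; something like the following (the source path is a guess based on where the bundled binaries live, so check your own install):

cp /usr/mysql/5.0/share/mysql/my-medium.cnf /etc/my.cnf
# then restart the mysql service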
Handy commands
Print all user accounts set up on the server:
select User,Password,Host from mysql.user
Show databases:
show databases;
use database; to open it.
show tables; to view tables in that db
\G at the end of all commands prints listings vertically, which can be easier to read in some circumstances.
mysqldump syntax at its simplest is:
/usr/mysql/5.0/bin/mysqldump -u root -ppassherewithnospace dbnamehere > outputfilename
I recommend using -B to specify the database name(s) even if you are just dumping 1 single DB, as it will add lines to the script to create the database on the destination if it doesn't already exist.
scp the output file across, and pump it back in again with a simple
mysql -u root <>
Adding users and access takes the following format:
mysql> grant all privileges on dspam.* to 'dspam'@'taraxen1.griffous.net' identified by 'password here';
To grant access from any host, use the % symbol as the host, i.e. to 'user'@'%'
To grant access to the entire DB, use on dbname.*. Next is the username, and where they are connecting from. It seems that fqdns are favoured.
Removing users is pretty straight forward with the drop statement:
mysql> drop user 'dspam'@'taraxen1';
Show table type (remember that mysql uses different backends per table)
mysql> show table status
Show table structure:
mysql> describe dspam_stats;
Monitor the innodb engine:
mysql> show engine innodb status \G
Show currently connected users:
mysql> show processlist;
+-----+------+-----------+------+---------+------+-------+------------------+
| Id  | User | Host      | db   | Command | Time | State | Info             |
+-----+------+-----------+------+---------+------+-------+------------------+
| 501 | root | localhost | NULL | Query   |    0 | NULL  |                  |
+-----+------+-----------+------+---------+------+-------+------------------+
Monday, February 2, 2009
Solaris ldap naming - Part 3 Client configuration
With your DS now configured correctly, configuring a client to use [non-ssl] ldap is actually pretty straight forward. So much so in fact, I'm going to spend the bulk of this post explaining the bigger picture of how solaris clients interact in your network environment subsequent to making the change from a DNS client to an LDAP client.
The biggest distinction that I had trouble getting my head around was that unlike a windows domain, solaris naming & authentication services are very much distinct concepts, and for that matter identity and authentication can be separated out too.
While we could just proceed on with the implementation, I really feel that it's worth pausing for a moment first to clarify some concepts. I can't have been the only one to be baffled by the differences in implementation, so for the benefit of others I'd like to quickly run through the architecture.
For those that aren't interested, feel free to pagedown a few times!
Naming & resolution architecture
In the windows world, AD takes care of authentication (and identity), policy, profiles, home drives, and other useful information, but not host name resolution. Hosts *are* stored in AD, but this is actually only for computer authentication as part of AD's kerberos implementation; it's not for hostname *resolution*.
Host lookups are done by DNS completely independently of AD.
Now I realise that it is very common for windows DNS zones to be AD integrated, but this is just used as backend for the DNS server's database information.
From a host perspective, AD doesn't tell you what the IP address of a host is; windows clients instead rely on DNS for that information.
In the Solaris world, LDAP also takes care of identity, policy, profile, home drive information etc, and optionally it can also be used for authentication too.
The significant difference between the platforms is that ldap can also be used for host lookup information, in place of DNS. If you're trying to find a host named serverA.domain.com, and your ldap infrastructure is set up in this manner, you can do an ldap lookup for serverA.domain.com and the ldap server will tell you its IP address based on information in the directory. DNS isn't needed.
Solaris LDAP clients are configured by default to use ldap for host lookups, and DNS isn't used, making ldap your "one stop shop" for all centralised services.
The history for all this predates me in a big way, but I believe it's done this way because that's the way NIS did it, back when NIS was THE centralised system, before DNS was the cornerstone that it is today. NIS was early 80s remember, and back then networks were small; DNS, while more scalable, was more complex to implement and required maintaining an entirely separate system - hence naming functionality was wrapped into NIS.
I expect that in the interests of easing the transition from NIS (or NIS+) to LDAP, solaris LDAP continues to include/provide this information in place of DNS so it can be a direct drop-in replacement, but with DNS being so ubiquitous I expect that, as with this guide, most shops prefer to use native DNS.
Active Directory services summary:
User Identity/Authentication: YES
User settings/profiles: YES
SSO: YES
Host lookups: NO
Solaris LDAP services summary:
User Identity/Authentication: YES (Authentication is optional, kerberos can be used)
User settings/profiles: YES
SSO: No (Kerberos provides SSO functionality)
Host lookups (like DNS): YES (Though I don't think it's commonly setup this way).
It's that host resolution difference that I expect will catch most people.
To stress the point once more: by default, when a client is reconfigured to use ldap, it stops using any dns configuration that may already have been set up on the host. Following an ldapclient init, despite /etc/resolv.conf still being configured correctly, it is no longer used, because the updated /etc/nsswitch.conf (now the nsswitch.ldap template) no longer attempts to use dns for 'hosts'. For this reason, it's common practice to modify the nsswitch.ldap template to use dns rather than ldap for host/ipnodes resolution, while leaving ldap for everything else.
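Concretely, the change is just the hosts and ipnodes entries; after editing, those two lines should read something like this (the exact defaults in the template may differ slightly):

hosts:      files dns
ipnodes:    files dns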
With the warnings and background out of the way, let's proceed with actually setting up an ldap client.
Configuring a host to be an ldap client:
Here we will use the ldapclient command to reconfigure the host for ldap usage. It does a lot of the heavy lifting for you, though of course you can always modify all the necessary files by hand if you're mad enough. This command reconfigures the host to use ldap for naming and lookup information. If you wish to use dns for host resolution (and you probably do!), edit /etc/nsswitch.conf and change ldap to dns on the hosts: and ipnodes: lines.
Better yet, change the nsswitch.ldap template beforehand; that way you can jump back and forth while testing this without having to correct the file each time.
The command is run on an ordinary host that uses DNS for naming, and it doesn't have any ldap SSL preconfiguration setup, which means that the client needs to be able to contact the DS using a cleartext non-encrypted connection. (This obviously isn't desirable, but getting SSL up and running is a whole post in itself, coming later).
jwheeler@test:~# ldapclient init -a proxyDN=cn=proxyagent,ou=profile,dc=solnet,dc=com -a domainName=solnet.com -a profileName=default -a proxyPassword=proxypasswordhere 192.168.10.4
System successfully configured
jwheeler@test:~#
ldapclient does NOT register user/computer accounts in the directory in the way that an AD join would. Please note that this is not the solaris equivalent of a windows "domain join" operation. In fact no information is loaded into the directory at all; this is an entirely read-only operation that simply configures the client host to look to the directory for information, based on the default profile which is contained within the ldap server itself.
Great, my first client is configured. Now what, how do I test it?
For most people following this guide, this will be your first time deploying ldap in your environment, in which case your newly deployed directory will be empty and you'll need to prepopulate it with some useful information to test it with. For starters, at the very least you'll want to load some user accounts into ldap, and ideally some automounter information.
ldapaddent is the command used to take local "databases" and load them into ldap. Databases is the official term to describe data sources such as your /etc/passwd and /etc/shadow files.
To load in user accounts:
ldapaddent -D "cn=directory manager" -f /etc/passwd passwd
Note that this command only loads in the user accounts, and not the passwords, which are in your shadow file. You must also add the shadow file after doing the initial passwd import:
ldapaddent -D "cn=directory manager" -f /etc/shadow shadow
This pair loads your entire passwd database, including all system accounts and root, which you probably won't want. There isn't an option to add only specific accounts, so at this point I would suggest using the DSCC web interface to browse your ldap data and then, using your mouse and the ctrl key, doing a group select to delete all but the user accounts that you're wanting to work with.
To test that your client is correctly connecting to the ldap server and pulling information, we can use the getent tool. This command queries the host's databases using whichever naming service(s) is configured, so it can also be used locally.
jwheeler@ldapclient:/home# getent passwd | grep jwheeler
jwheeler:x:101:1::/export/home/jwheeler:/bin/bash
jwheeler:x:101:1::/export/home/jwheeler:/bin/bash
2 answers were returned because the host is configured to try ldap first, but jwheeler is still contained locally (in /etc/passwd) too. Unfortunately this tool isn't too clear about where the data is coming from, but it should be fairly easy to deduce what's happening with a bit of intuition.
With a positive result to that test, I next deleted the jwheeler entries from the local passwd and shadow files (after taking a backup first of course!), and attempted to log in to the ldap client host remotely via SSH.
jwheeler@angelous:~$ ssh 192.168.10.10
Password:
Last login: Sun Feb 1 23:59:27 2009 from 192.168.10.60
Sun Microsystems Inc. SunOS 5.11 snv_105 November 2008
jwheeler@test:~$ id
uid=101(jwheeler) gid=1(other)
jwheeler@test:~$ grep jwheeler /etc/passwd
jwheeler@test:~$
Excellent, a successful login via LDAP! You'll note that it wasn't single sign on (I was prompted for my password, despite already being logged into angelous as jwheeler), as ldap isn't an SSO system. To get SSH working without a password prompt, you need to either use kerberos, or ensure that the user's public ssh key is already on the remote host in the user's home directory.
This is where the nfs automounter comes to the rescue.
On this ldap test host, the jwheeler home directory /export/home/jwheeler already existed, as it had been created and set up for a local user until moments ago, and I didn't get as far as physically deleting the directory from /export/home.
I also have a user called testuser that exists only in ldap, and hasn't ever existed on the test ldap client host, which I'll use here to illustrate the more typical result:
jwheeler@angelous:~$ ssh testuser@192.168.10.10
Password:
Last login: Sun Feb 1 23:24:29 2009
Could not chdir to home directory /export/home/testuser: No such file or directory
Sun Microsystems Inc. SunOS 5.11 snv_105 November 2008
-bash-3.2$ pwd
/
So if the user is a valid ldap user and nothing else, you will still at least log in; however, since your home directory doesn't exist, it isn't magically created. Enter the Automounter service.... With the automounter service correctly configured, it will use magic (information stored in ldap) to mount an nfs home directory for your user at login, which is exactly what we'll set up next.
The Automounter Service
The Solaris convention is to use /home for nfs mounted home directories, and to use /export/home for locally mounted home directories. The automounter service dynamically takes care of managing these nfs mounts for all active users on a system, so nfs mounts for active users are mounted transparently at login, and automatically unmounted after use.
Setting up the remote server for the nfs home directories is beyond the scope of this article; I'm sure you know how to create a directory on a server and share(1M) it out via NFS. A full run down of the automounter service and its maps is also quite a big topic, so I'll only cover the bare essentials in the interests of brevity.
In its simplest form though, configuring the automounter is dead easy.
The only key file that you need to modify is /etc/auto_home.
In this file add a single catch-all configuration line to the bottom of the file.
The simplest possible configuration is:
* fqdnnfsservername:/shared/homedirectoryroot/&
The * matches all users, the nfs mount is pretty self explanatory, and the ampersand at the end is substituted for the user's actual username.
For example, mine reads:
* supernova.griffous.net:/Z/nfshome/&
Start the automounter service (svc:/system/filesystem/autofs:default) if it isn't already, and that's all you have to do on the automounter side.
In ldap, you will need to update the home directory path for your user account to "/home/usernamehere", instead of "/export/home/usernamehere". I should point out that /home/usernamehere is the default behaviour, so you'll only have to update this for users that you've manually configured for local logins pre-ldap!
The updated /etc/auto_home will work nicely now for *this* host, but wouldn't it be even better if you didn't have to manually add that line to this file for all future clients that you convert over to ldap? Fortunately, this can be done!
In the /etc/auto_home file, comment out the +auto_home line, and in the /etc/auto_master file, comment out the +auto_master line. These are just used for NIS, which isn't used in this guide.
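After those edits, the two files end up looking something like this (the /home line is the stock Solaris auto_master entry, and the nfs path is the example from above):

# /etc/auto_master
#+auto_master
/home   auto_home   -nobrowse

# /etc/auto_home
#+auto_home
*       supernova.griffous.net:/Z/nfshome/&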
Next we load this information into ldap using ldapaddent again:
ldapaddent -D "cn=directory manager" -f /etc/auto_master auto_master
ldapaddent -D "cn=directory manager" -f /etc/auto_home auto_home
With this information present in ldap, the host will merge the information from the ldap versions of auto_* and the local /etc/auto_*. By default the local /etc/auto_* files are empty, so the ldap information is used. Ldap hosts are now setup for nfs automounts with zero configuration!
Next up, SSL configuration
Handy commands for ldap clients:
ldapaddent is the command that you use to populate the directory with information from the local machine. Typical examples would be local user accounts, which is to say the contents of your /etc/[passwd|shadow] files.
I.E: ldapaddent -D "cn=directory manager" -f /etc/passwd passwd
Note that this doesn't load up the passwords; you'll need to also add the shadow file after doing the initial passwd import:
ldapaddent -D "cn=directory manager" -f /etc/shadow shadow
I got caught by that one!
getent - This is used for querying your naming service for an entry in a database, for instance a user in the passwd database
$ getent passwd user1
user1:x:1002:10::/export/home/user1:/bin/bash
ldaplist - By itself, it will print all DNs in the base, so typically you'd narrow it down to the 'database' that you're interested in. In this case the term database just means a table such as the passwd table. Any entry that appears in your /etc/nsswitch.conf is called a database. To view the user list (passwd equivalent), do ldaplist -l passwd, which gives an entry like this:
dn: uid=jwheeler,ou=people,dc=solnet,dc=com
cn: jwheeler
uidNumber: 100
gidNumber: 1
homeDirectory: /home/jwheeler
loginShell: /bin/bash
objectClass: posixAccount
objectClass: shadowAccount
objectClass: account
objectClass: top
uid: jwheeler
userPassword: {crypt}**************
shadowLastChange: 14276
shadowFlag: 0
ldapclient list will print the current ldap client configuration
jwheeler@test:~# ldapclient list
NS_LDAP_FILE_VERSION= 2.0
NS_LDAP_BINDDN= cn=proxyagent,ou=profile,dc=solnet,dc=com
NS_LDAP_BINDPASSWD= {NS1}*************
NS_LDAP_SERVERS= 192.168.10.4
NS_LDAP_SEARCH_BASEDN= dc=solnet,dc=com
NS_LDAP_AUTH= tls:simple;simple
NS_LDAP_SEARCH_REF= TRUE
NS_LDAP_SEARCH_SCOPE= one
NS_LDAP_SEARCH_TIME= 30
NS_LDAP_CACHETTL= 43200
NS_LDAP_PROFILE= default
NS_LDAP_CREDENTIAL_LEVEL= proxy
NS_LDAP_BIND_TIME= 10