I just completed migrating the LegRoom.net web server to a new hardware platform. It's now running on a dedicated host at a commercial hosting provider. The site should be much faster and more responsive now, as it's no longer running on a poky 33.6 KBps upstream DSL connection. It should also be up more reliably now that it's no longer running under a virtual Xen instance on my desktop. :-)
In addition to the new hardware, I also made quite a few under-the-hood changes. I upgraded Drupal, changed up the Apache configuration used for the website, restructured some of the website directories on the fileserver, and even changed the name of the database. LegRoom.net's been running for close to 6 consecutive years now, through various OS upgrades and hardware migrations, and it's built up a lot of cruft over the years. Since this was already a major change from what I had going before, I also took the time to reorganize things to make maintenance easier going forward.
Of course, I also tried my hardest to make sure everything looked exactly the same to end-users visiting my site, so unless you're reading this post you hopefully won't notice that anything has changed at all (aside from, of course, the speed). If you do find something that doesn't work - missing page, broken download link, lost credentials, etc. - please let me know ASAP.
Enjoy the upgraded site.
I came across a new (to me) Linux-related website a couple months ago that rather impressed me (which is something that doesn't happen all that often). The name of the site is Linux Kernel Newbies, and it's located at http://kernelnewbies.org/.
I stumbled across the site while looking for a good kernel changelog. Most changelogs that I've been able to find discuss the changes in one of three formats:
None of these really provided the information that I was looking for. Documenting changes for each release candidate is fine if you're actually using/testing -rc kernels, but it's a pain when looking for changes from version to version because it requires looking through multiple posts or documents. The commit list approach is also fine for the gritty details, but unfortunately the summaries of each change are rather cryptic and often don't mean a lot to people not actively involved in the development. The new feature and major change approach is nice in that it's easily digestible and hits the highlights, but unfortunately it usually doesn't cover enough detail for me.
While searching for a decent changelog that was something in between the detailed commit list and a high-level summary, I found the LinuxChanges page on the Linux Kernel Newbies wiki. This is almost exactly what I've been looking for. They do a great job of describing all of the new/important features of the given kernel release, including providing links to the actual commit records if you really want the full details. They also provide a list of all individual commits,
logically grouped and sorted, which makes it much easier to understand what was changed. Finally, they even cover the highlights of new/upcoming patches that are actively being developed for succeeding kernel releases.
While the changelog is what keeps me coming back every couple of months, Linux Kernel Newbies also offers a few other useful resources that may be of interest to Linux users, such as the KernelGlossary, FAQ, and Forum. The homepage also provides links to other content on the site.
I don't have any affiliation with the site, and to be honest I haven't spent much time on the site outside of the changelog pages, but even so I found it so useful that I wanted to mention it here. Hopefully some others can benefit from this site as well.
This website gets a lot of account registrations. Most of these are obvious spam accounts using fake e-mail addresses. Since e-mail confirmation is required on all new accounts, these accounts are never confirmed and therefore are never fully activated. I routinely run through the account list and delete these accounts to help keep things manageable. I also run through all of the new confirmed accounts every couple of weeks looking for obvious spam accounts that actually use a real e-mail address and remove these as well.
Since this website went live, that's pretty much been the only account maintenance that I've done. At this point, 1 year and 4 months later, there are nearly 400 active accounts. This is after the obvious spam accounts were removed as described above. Unfortunately, the vast majority of these accounts have not been used at all since their initial creation. A lot of these accounts are spam accounts that use legitimate sounding names and e-mail addresses. A lot of them are also one-time use accounts: the user signs up, does whatever he wanted to do once, and then never returns (or at least never uses the account again).
Due to the large volume of inactive accounts, I decided to start cleaning up these old, unused accounts as well. For the first batch, I'm deleting all accounts that meet these two criteria:
So, basically, any user that signed up for a LegRoom.net account more than six months ago and has not used the account since then has had his account deleted. After the first batch of deletions, the total account count dropped from nearly 400 to about 130. Much better. :-)
If you've been affected by this, the easiest solution would be to simply recreate your account, and login periodically to keep it active. I'll probably continue to do this on a semi-regular basis going forward. If you have any questions or complaints, feel free to either leave a comment or send me an e-mail.
I had an issue with free disk space (or, more appropriately, a lack thereof) on my server a while back. After some investigation, I discovered that my MySQL databases had ballooned in size to nearly 10 GB. Actually, figuring out that the /var/lib/mysql directory was taking up so much space wasn't that hard, but understanding why and what to do about it took a while (yes, I'm sometimes slow about such things).
It turns out I had two issues. The first is that MySQL configuration, by default, maintains binary logs. These logs "contain all statements that update data or potentially could have updated it (for example, a DELETE which matched no rows). Statements are stored in the form of 'events' that describe the modifications. The binary log also contains information about how long each statement took that updated data." This is fine and all, but (again by default) these log files are never deleted. There is a (configurable) max file size for each log, but MySQL simply rolls over to a new log when it's reached. Additionally, MySQL rolls over to a new log file on every (re)start. After a few months of operation, it's easy to see how this can take up a lot of space, and my server had been running for nearly four years.
Complicating matters somewhat was the fact that the default name of the binary logs changed at some point (and, according to the current docs, now appears to have changed back). As a result, I have several gigabytes worth of logs using the old naming convention, as well as several gigabytes worth using the newer convention. Yay.
Like I said, recognizing that MySQL was taking up a lot of space is not hard, but I'm paranoid about my data and didn't want to risk losing anything. So, I kept putting it off until I was literally running out of space on a near daily basis. At that point I began doing research and figured out all of the above information. I also found a quick and easy way to fix the problem.
Note: This is meant for a standalone MySQL server. I'm not sure how it may affect replication, so please do not follow these instructions on a replicated server without additional research.
First of all, the binary logs typically reside in /var/lib/mysql/. You can check to see how much space they're currently taking up with this one-liner:
du -hcs /var/lib/mysql/*bin.* | tail -n 1
If it's more than a few hundred megabytes, you may want to continue on.
Next, check to see if you were affected by the name switch like I was. This is unlikely unless you've been running the server for at least a year or so, but it definitely doesn't hurt to check. Look at all
*bin.* files. If they're all named the same, such as mysqld-bin.000001, then you're fine. If you see some with a different name, such as both mysqld-bin.000001 and hostname-bin.000001, then you have an outdated set of logs doing nothing but taking up space. Look at the timestamps of the .index file for each set. One should be very recent (such as today), the other not. Once you've identified the older set, go ahead and delete all of them; they're no longer being used.
Finally, for the current set, login to MySQL as an admin user (eg.,
mysql -u root -p). You'll want to run the following two commands:
mysql> FLUSH LOGS;
mysql> RESET MASTER;
That's it. Depending on the size and number of your logs, those two commands may take a while to run, but the end result is that any unsaved transactions will be flushed to the database, all older logs will be dropped, and the log index will be reset to 1. In my case, these two steps dropped my binary logs from 9.6 GB down to about 5 MB. Good stuff.
Of course, this is simply a workaround to the problem, not a proper solution. What I'd really like to do is either automate this process so that I don't have to worry about the logs getting out of control, or even better configure MySQL to automatically flush its own logs after some period of time or it reaches a certain total file size. I haven't found any way to do this just yet, though I admittedly haven't looked too hard. I'd appreciate any recommendations, though.
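If MySQL's documentation is to be believed, the expire_logs_days option should handle the automatic cleanup I'm after, and max_binlog_size caps the size of each individual log file before rollover. I haven't tried this myself, but a my.cnf sketch would look something like this (section name and values are examples; file location varies by distribution):

```ini
# my.cnf sketch -- untested on my setup
[mysqld]
# Automatically purge binary logs older than 7 days
expire_logs_days = 7
# Roll over to a new log file once the current one hits 100 MB
max_binlog_size  = 100M
```

If anyone has actually run with these settings for a while, I'd be curious to hear how well they work in practice.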
In my last post I mentioned that I recently had a hardware failure that took down my server. I needed to get it back up and running again ASAP, but due to a large number of complications I was unable to get the original hardware up and running again, nor could I get any of the three other systems I had at my disposal to work properly. Seriously, it was like Murphy himself had taken up residence here. In the end, rather desperate and out of options, I turned to Xen (for those unfamiliar with it, it's similar to VMware or VirtualBox, but highly geared towards servers). I'd recently had quite a bit of experience getting Xen running on another system, so I felt it'd be a workable, albeit temporary, solution to my problem.
Unfortunately, the only working system I had suitable for this was my desktop, and while the process of installing and migrating the server to a Xen guest was successful (this site is currently running on that Xen instance), it was not without its drawbacks. For one thing, there's an obvious performance hit on my desktop while running Xen concurrently with my server guest, though fortunately my desktop is powerful enough that this mostly isn't an issue (except when the guest accesses my external USB drive to back up files; for some reason that consumes all available CPU for about 2 minutes and kills performance on the host). There were a few other minor issues, but by far the biggest problem was that the binary nVidia drivers would not install under Xen. Yes, the open source 'nv' driver would work, but that had a number of problems/limitations:
In fairness, issues 1 and 2 are a direct result of nVidia not providing adequate specifications for proper driver development. Nonetheless, I want my hardware to actually work, so the performance was not acceptable. Issue 3 was a major problem as well, as I have two monitors and use both heavily while working. I can only assume that this is due to a bug in the nv driver for the video card I'm using (a GeForce 8800 GTS), as dual monitors should be supported by this driver. It simply wouldn't work, though. Issue 4 wasn't that significant, but it did require quite a bit of time to rework it, which was ultimately pointless anyway due to issue 3.
So, with all that said, I began my quest to get the binary nVidia drivers working under Xen. Some basic searches showed that this was possible, but in every case the referenced material was written for much older versions of Xen, the Linux kernel, and/or the nVidia driver. I tried several different suggestions and patches, but none would work. I actually gave up, but then a few days later I got so fed up with performance that I started looking into it again and trying various different combinations of suggestions. It took a while, but I finally managed to hit on the special sequence of commands necessary to get the driver to compile AND load AND run under X. Sadly, the end result is actually quite easy to do once you know what needs to be done, but figuring it out sure was a bitch. So, I wanted to post the details here to hopefully save some other people a lot of time and pain should they be in a similar situation.
This guide was written with the following system specs in mind:
Version differences shouldn't be too much of an issue; however, a lot of this is Gentoo-specific. If you're running a different distribution, you may be able to adapt this technique to suit your needs, but I haven't tested it myself (if you do try and have any success, please leave a comment to let others know what you did). The non-Xen kernel should typically be left over from before you installed Xen on your host; if you don't have anything else installed, however, you can do a simple
emerge gentoo-sources to install it. You don't need to run it, just build against it.
Once everything is in place, and you're running the Xen-enabled (xen-sources) kernel, I suggest uninstalling any existing binary nVidia drivers with
emerge -C nvidia-drivers. I had a version conflict when trying to start X at one point as the result of some old libraries not being properly updated, so this is just to make sure that the system's in a clean state. Also, while you can do most of this in X while using the nv driver, I suggest logging out of X entirely before proceeding with the steps below.
Here's the step-by-step guide:
uname -r to verify the version of your currently running Xen-enabled kernel; eg., mine's 2.6.21-xen
cd /usr/src/ && ls -l
ln -sfn linux-2.6.24-gentoo-r8 linux
emerge -av nvidia-drivers
emerge -f nvidia-drivers (look for the NVIDIA-Linux-* line)
bash /usr/portage/distfiles/NVIDIA-Linux-x86_64-173.14.09-pkg2.run -a -x
IGNORE_XEN_PRESENCE=y make SYSSRC=/lib/modules/`uname -r`/build module
mkdir /lib/modules/`uname -r`/video
cp -i nvidia.ko /lib/modules/`uname -r`/video/
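One thing the steps above don't cover: if you previously switched xorg.conf over to the nv driver, remember to switch the Driver line back to "nvidia" before starting X. A minimal Device section looks like the sketch below (the Identifier is arbitrary, and any BusID or extra options depend on your particular setup):

```
Section "Device"
    Identifier "nvidia0"
    Driver     "nvidia"
EndSection
```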
Assuming all went well, you should now have a fully functional and accelerated desktop environment, even under a Xen dom0 host. W00t. If not, feel free to post a comment and I'll try to help if I can. You should also hit up the Gentoo Forums, where you can get help from people far smarter than I.
I really hope this helps some people out. It was a royal pain in the rear to get this working, but believe me, it makes a world of difference when using the system.
In case anyone was wondering why legroom.net was offline for the past few days, it's because of a hardware failure that occurred Thursday morning. The server hosting legroom.net failed due to a yet unknown hardware issue. As a result, all web/mail/database/file/etc. services have been unavailable since then. As of early Sunday morning, I just finished migrating all of the data from the old server (well, I hope I got all of it) to a new OS install on a different, though temporary, system. As far as I can tell, all internet functionality should be restored at this point, including e-mail.
Needless to say, this was a major inconvenience, but I'm reasonably certain I didn't lose any data. However, if you happen find any errors on the site, or any missing content, please let me know ASAP. Also, if you sent me an e-mail at any point from Thursday morning to Sunday night, please resend it as I may not have received it.
This site will probably go down a couple more times, albeit very briefly, over the next couple of days as I continue working on the migration. I still have a few remaining issues to take care of. I'm hoping that by Tuesday the site will be up again for good (at least until the next migration to a permanent home...).
I finally ordered my hardware. I also finalized the list of components below and provided more details, rationale, and commentary about their selection. I also still need to finish some of the details of this post, which I've put off way too long (and I'm going to put off a bit longer), but that will eventually happen. Also, once I start building the NAS I'm going to document the pros/cons and any gotchas involved in setting up the system. Stay tuned.
For the past couple of months I've been researching various home NAS (network attached storage) solutions. Currently, all file-serving and backups are handled by a desktop system in my living room - which also handles my website and e-mail, multimedia functionality (hence running it in my living room), and a whole lot more. As I'm getting rather tired of the noise and recent instability, I want to migrate all functionality off of this system onto other systems/servers better suited to the tasks. My first step is setting up a NAS system for my house.
As I said above, I've been researching this for quite some time now, mostly because I'm having trouble deciding which direction I want to take. The two main choices are:
A commercial appliance would be the much simpler route, and is mostly what I'd been researching, but for various reasons I'm actually leaning towards building a dedicated system now. The primary reason, to be completely honest, is flexibility. If I build my own hardware and install a "real" OS on it, even though it may be specifically designed to function as a NAS device, I still have the ability to do anything else with it that I may want or need. With an appliance, I'm much more limited in what I can do (if it's even possible at all). Some appliances do allow remote console access, but every one I've seen is very vague on the details of what can be done once you've logged in. Without being able to test it out myself, I have to assume that it won't meet my requirements.
So, why is building a dedicated system such a hard decision? This breaks down into two categories:
Let me address the efficiency issue first as it's more straightforward. This box will be running 24x7, and I want something that's going to be as quiet and energy efficient as possible. The majority of the appliances I've looked at were designed with this in mind, and while some are much better than others, all are more efficient than a typical desktop system. I want to stick this thing in a corner and not ever see it or hear it; just have it run reliably and not make a significant dent in my power bill.
The functionality issue is a bit more complicated. I stated previously that flexibility was the primary reason I wanted to build my own system, which may seem to contradict this current statement, but they apply to different scopes. The former is about OS-level functionality; the latter is more about hardware functionality. Eg., two features I'd really like from my NAS are hardware RAID 5 using four disks and hot-swappable drives, both of which are fairly common among higher-end home NAS appliances. Hardware RAID is easy enough to do on a custom built system, but hot-swappable drives are a completely separate issue; short of a rack mount server or tower-style case (both of which are ruled out by the noise/efficiency requirements), options are extremely limited.
With all that said, here are the components that I'm currently looking at. Any and all feedback, especially regarding personal experience, is most welcome.
Unlike most custom built systems where the case is a fairly insignificant component, the choice of components in this system was almost entirely dictated by the case, as I've only been able to find one that meets both the noise/efficiency and NAS functionality requirements described above. As a result, my requirements (and personal preferences) are:
The case is the defining feature of this system. Whereas the system case can usually be considered an afterthought in most computer systems, for this project it was absolutely key. I wanted something small, quiet, and yet flexible enough to do whatever I needed. The hot-swappable drive bays in particular were a huge selling point. That, combined with the looks and form factor, is what ultimately convinced me that I could make a custom built system fit my needs rather than going with an appliance solution.
As a result, the case largely dictated much of the component selection below. It has a very unique form factor, which imposes several requirements on component choices. If anyone building a NAS decides to go with this case, be sure to do a lot of research on what can/can't be used. It looks like a slick little case, though, and if things work out as expected it should definitely be worth it.
Motherboard/processor selection was a really, really tough choice. I looked at many different options, covering the range from full dual-core Intel / AMD desktop procs to ultra low-voltage systems such as the AMD Geode and integrated VIA processors. I ended up choosing the EPIA SN10000EG motherboard based on a number of factors, including:
Note that the above list is not all "pros" for this board; it's simply a list of all factors that I considered. For example, I would've much preferred a dual-core 64-bit system, but the desktop choices were more power hungry than I wanted and the lower voltage solutions ("mobile on desktop") were of limited availability and maturity and prohibitively expensive. Most Mini-ITX systems are for some reason considered "industrial", which is just a way of saying "expensive". $250 for a Mini-ITX motherboard and processor w/ integrated video is, sadly, pretty cheap compared to some of the other choices I explored.
Once I had decided on a VIA board, I looked at both the SN10000EG and SN18000G really hard; both are essentially the same board, though one uses a 1.0 GHz processor w/ passive cooling while the other uses a 1.8 GHz processor w/ active cooling. I actually had to fight my natural instinct to go with the faster board here. If I were not including a high end hardware RAID controller in the mix, then I definitely would've gone with the SN18000G. However, I am including the RAID controller, which will take care of pretty much all of the disk processing. As a result, the more limited 1.0 GHz processor should be plenty to drive the gigabit NIC and handle any other overhead. Plus, I like the fanless design and the fact that it draws literally half the power of the 1.8 GHz proc. Like I said in my long introduction, power and noise are primary factors.
It's also important to mention that people building their own NAS systems in the future will actually have more/better options here than are available today. In particular, both the Intel Atom and VIA Nano processors appear to be very well suited for this type of system, packing plenty of power in a compact, low-power design. I'd especially love to use a Nano-based system for this, but it'll be several months before such systems are available.
RAM was basically an arbitrary selection - the above appears to be good RAM for a good price. However, RAM height matters here. From what I've read, if a CD-ROM drive is included in the case, then the drive will sit right on top of the RAM. Tall RAM will not fit. The SN10000EG motherboard I selected also poses a problem - since it includes a built-in compact flash card reader on the underside of the board, the board sits up a bit higher than usual. As a result, RAM height is even more of an issue.
I'm hoping that this DIMM will fit. I'll certainly report on it when I start building the system. From what I can tell now, it appears that if you're only using one DIMM, and a CD-ROM drive, you should be ok. If you plan on using two DIMMs, the CD-ROM will sit directly over the second DIMM, and as a result you'll need to get low profile RAM to fit it in. Alternatively, of course, the CD-ROM drive can be omitted.
The RAID card is another cause for concern. The special form factor case means that there isn't any standard method to secure add-on cards in place (eg., as you would screw a PCI card into the rear of a typical case). Chenbro makes a special PCI riser card and face plate for this, but it's limited to 32-bit PCI only, and only one recommended motherboard (hint: it's not the one I chose). As a result, I'm gambling a bit on the RAID card. I should be able to make it fit in the case with the above PCI-e riser card, but I'm not sure how/if I'll be able to secure it. Again, I'll report on this in more detail once I begin building the system.
As for the choice of RAID controller, I went with a high end 3ware card simply because I know it'll work and work well. Some things are worth paying for, and this is one of them.
Why terabyte drives? Why not? In all honesty, while they still carry a pretty decent price premium at this point, they're also among the best performing and quietest drives due to their high areal density. Plus, they're huge. I'm building this system to act as the central file server for my entire home network, and quite frankly, I don't want to ever have to worry about running out of space on this system. With 2 TB of usable disk space (3-1 for the RAID 5), I'll have plenty, plenty of space to work with for the foreseeable future, and in the very unlikely event I manage to fill that up I can still slap one more TB disk in there at any time.
I chose the SpinPoint F1 drives not because they are the fastest, or quietest, or most efficient. Rather, I chose them because they are probably the best combination of all three factors. I spent a lot of time looking at various hard drives, evaluating them on these three criteria, and the SpinPoint F1 looks to be the leader of the pack at this point. Here are some detailed statistics if you're interested, courtesy of Storage Review.
The SN10000EG motherboard that I chose included a built-in compact flash card slot that's supposedly treated as an IDE device. As a result, I decided to go with a CF card as my main system drive. This will let me keep my OS install separate from the data drive RAID, and will generate less heat and noise than a normal hard drive as there are no moving parts. Of course, performance and lifecycle are concerns when using CF cards like this, but I plan on running my system mostly off of a ramdisk (hence the need for 2 GB). I'll discuss this more in my follow-up.
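The ramdisk approach deserves its own write-up, but the general idea is to keep the write-heavy paths on tmpfs so the CF card sees as few writes as possible. A hypothetical fstab sketch (mount points and sizes are just examples, not my final configuration):

```
# /etc/fstab sketch -- keep frequently written paths in RAM
tmpfs   /tmp       tmpfs   size=256m,noatime   0 0
tmpfs   /var/tmp   tmpfs   size=64m,noatime    0 0
tmpfs   /var/log   tmpfs   size=64m,noatime    0 0
```

The obvious trade-off is that anything on tmpfs (logs in particular) is lost on reboot, so some periodic sync back to persistent storage would be needed.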
This was another arbitrary choice, but I've had decent luck with Samsung drives in the past and I liked the specs on this one. I should mention, though, that for this type of system a built-in CD-ROM drive is largely optional. It's needed to install the OS and nothing else. If you have a USB CD-ROM drive (or even a key drive) available, you could easily connect that temporarily to install the OS. In this case, I opted to include the drive literally in case of any emergency situations - eg., my system dies for some reason and I need to boot off a rescue CD to recover/repair ASAP.
Network Interface Card
Currently I'm leaning toward OpenFiler as I have a heavy Linux bias, but FreeNAS seems to enjoy a stronger community following. I plan on investigating both. If both distros support my hardware properly, then all things being equal I'll stick with OpenFiler. However, I'm certainly not ruling out FreeNAS just yet.
The total cost of this system ended up being significantly higher than I first expected: $1425.29, plus tax and shipping. This is extremely pricey for a home NAS server, even a custom built one. In my case, I had certain requirements that I wanted to meet at all (reasonable) costs, and I feel the price was worthwhile. For anyone else, though, there are a few places where scaling back would save a lot of money.
RAID Controller - This was the single most expensive component. Find a motherboard that supports 4 SATA ports and set up either software RAID or on-board (fake hardware) RAID. It's definitely not as efficient or reliable, but it may work well enough depending on the requirements.
Hard Drives - I used three terabyte drives, which still carry a hefty premium. If you don't need an insane amount of storage room, consider smaller drives. For example, the 500 GB Western Digital Caviar SE16 (WD5000AAKS) can be picked up on Newegg for only $80, less than half the price of the terabyte drives I used. You can buy 4 of these for only $320, which in a RAID 5 array will still give you 1.5 TB worth of usable disk space. Also, you can skip the separate system drive (in my case, a compact flash card) and just install the OS directly on the RAID.
Case - There's no way around it - this is an expensive case. Unfortunately, if you want a compact, attractive, custom-built system w/ hot-swappable drives, this is pretty much your only choice at this point. However, if aesthetics and hot-swappability are not primary concerns, there's no reason you should need to spend more than $100 on a case, at the high end.
Motherboard / Processor - Going with a different case will also likely free up some additional motherboard options. Eg., going with a standard ATX motherboard, or even a micro-ATX motherboard, will be far cheaper than a mini-ITX system.
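As a quick sanity check on the capacity numbers above, RAID 5 usable space is (number of drives - 1) times the per-drive size, since one drive's worth of space goes to parity:

```shell
# RAID 5 usable capacity: (N - 1) * per-drive size
echo "$(( (4 - 1) * 500 )) GB"    # four 500 GB drives  -> 1500 GB usable
echo "$(( (3 - 1) * 1000 )) GB"   # three 1 TB drives   -> 2000 GB usable
```

This matches the figures I quoted: 1.5 TB usable from four 500 GB drives, and 2 TB usable from my three terabyte drives.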
Starting from scratch, and taking the above points into consideration, you should be able to knock at least half the price off of my system configuration, and even much more than that if focusing on budget rather than performance/features. Be sure to keep this in mind when spec'ing out your own system; don't let yourself be surprised by the total bill like I was. ;-)
For reference, here are the best appliance options I found:
to complete - QNAP, Storango, Infrant, Thecus?