
Saturday 8 December 2012

Experiences in building a home NAS - Part 1

Some months back I had this email conversation with a friend about the best strategy for storing movies/music/photos at home. I was running out of space, again. One of my external 1TB drives had crashed.

Hard drives fail all the time. To back up one hard drive, you need another - multiply that by the number of drives you have. The bigger the drive that crashes, the bigger the psychic shock. And so on. The buy-another-1TB-drive strategy was not working.

My friend had done some research on this topic, and he suggested building a NAS box out of commodity hardware. We exchanged mails back and forth and did a lot of research over a month. There's a huge internet community doing the same thing, so finding information was not a problem. One option that seemed attractive was combining a small server from HP with my own storage and RAM: the HP ProLiant MicroServer N36L/N40L.

The hardware

I finally went with this to avoid having to choose the non-storage hardware myself, since I'm no expert at that. The specs for the N36L model were decent enough:
  • AMD Athlon II Neo (dual core) 1.8 GHz
  • 1 GB included RAM (max 8 GB)
  • Seagate 160 GB
  • Gigabit ethernet
The RAM and HDD were, of course, not enough, but the rest of it was. It came with 4 drive bays to put my own hard disks in.

There were other factors in choosing this:
  • I did not want a hardware RAID controller - it ties you down to that particular controller's hardware.
  • I did not care for more storage bays.
  • It has 7 USB ports, with one internal which can host a flash drive to boot from, thus saving the other drive bays for storage!
  • To top it all, it was good looking.

The software

That's the hardware. What about the OS and NAS software? Like I said, my friend had already done some research, and I got the pointers from him. FreeNAS is a superb option if you're building a home NAS. It's based on FreeBSD and supports the ZFS file system. ZFS was conceived and implemented at the erstwhile Sun Microsystems. For the virtues of this filesystem I'll just point you to the documentation - resilvering (rebuilding data onto a replacement disk) and copy-on-write are two of them.

Software RAID

ZFS supports various software RAID options, including RAID-Z. RAID-Z1 (the first level) essentially gives you the ability to survive one hard disk crash if you have at least three disks.

With RAID-Z1, if you have 3 x 1 TB drives, you will use 2 x 1 TB for data and 1 x 1 TB for parity information. Even if one of the hard disks crashes, you can replace it with a new one and the data is rebuilt. You pay 1 TB for recoverability. There are other configurations if you want more redundancy, like being able to recover from losing more than one drive at the same time - these are more expensive as they need more disks.
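To make the arithmetic concrete, here is a tiny sketch (not from my actual setup - the function and its inputs are just illustrative) of how usable versus parity capacity works out for a single RAID-Z vdev, ignoring filesystem overhead:

def raidz_capacity(disk_count, disk_size_tb, raidz_level=1):
    # RAID-Z1/Z2/Z3 reserve the equivalent of 1/2/3 disks for parity.
    if raidz_level not in (1, 2, 3):
        raise ValueError("RAID-Z level must be 1, 2 or 3")
    if disk_count <= raidz_level:
        raise ValueError("need more disks than parity disks")
    usable_tb = (disk_count - raidz_level) * disk_size_tb
    parity_tb = raidz_level * disk_size_tb
    return usable_tb, parity_tb

# The configuration in this post: 3 x 1 TB in RAID-Z1.
usable, parity = raidz_capacity(disk_count=3, disk_size_tb=1)
print("usable: %d TB, parity: %d TB" % (usable, parity))  # usable: 2 TB, parity: 1 TB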

Buying the hardware

I bought the ProLiant from a local dealer after scouring the Hyderabad classified pages. The HDD (3 x 1 TB) and 4 GB of RAM I bought online, from Flipkart. Here are two bits of learning if you're doing the same:
  • Buy ECC RAM if you want surefire data integrity.
  • Buy more hard disk space than you need now - you won't regret it. It will cost a bit more, but it's better than the alternative, which would be to recreate all your ZFS volumes on new (bigger) disks. With my configuration, I'll have to buy 3 x (whatever GB) to upgrade my setup. Of course, the third option is that you don't go this HP microserver way at all, and choose a bigger box (custom built or otherwise) which supports more drives.
It's a NAS, so I needed a network switch, which I bought from eBay - it had good reviews, 8 ports and Gigabit ethernet support. Plus some CAT6 cables.

Putting it all together

The HP came with 4 drive bays, one holding the small (160 GB) HDD, where I installed FreeNAS. The other three went for storage (and parity). The RAM worked without a hitch, but that was because I had ensured it was compatible (see these hardware compatibility lists).


(All those cables were to connect my desktop monitor, keyboard and DVD drive to the HP server).




Internal view - the 4 drive bays are placed vertically in front.


The FreeNAS interface is easy to use, so setting up ZFS volumes was a breeze.



It also comes with a Cacti-like interface which lets you view OS metrics.




To complete the setup, I connected the 8 port switch to my ISP's router, and plugged both my desktop and the HP into the switch. FreeNAS lets you create NFS shares for the data stored on the NAS, which I can then access as a network-mounted volume.

There really are a lot of resources about setting up a home NAS in this configuration. Some of the ones that I found helpful are

This does not complete the NAS setup - performance testing and backups still remain. I'll write about those in the next post.
     

Monday 19 November 2012

Nagle's algorithm and delayed acks...

...don't work well together. I finally understood why from W. Richard Stevens' UNP (Unix Network Programming) book.

In a nutshell,

S(ender) sends a packet, and - because of Nagle's algorithm - cannot send a second packet smaller than the MSS (maximum segment size) until the first is acknowledged. R(eceiver) receives the first packet, but - because of delayed ACKs - does not acknowledge it until the receiving application tries to send data on which the ACK can piggyback. S waits, R waits - until the delayed ACK timer on R times out.
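If the latency from this interaction hurts your application, one option is to disable Nagle's algorithm on the sending socket with the TCP_NODELAY option. Here's a minimal Python sketch - the host and port are placeholders, not anything from the book or a real service:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("example.com", 9000))  # placeholder host/port

# Disable Nagle's algorithm: small segments are sent immediately instead of
# being held back until the previous segment is acknowledged.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

sock.sendall(b"first small write")
sock.sendall(b"second small write")  # no longer waits for the first write's ACK
sock.close()

The other approach is simply to coalesce the small writes into a single send, so the interaction never arises in the first place.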


Thursday 15 November 2012

No more PermGen in the JDK

PermGen space is familiar to anybody who has debugged memory issues in large JEE applications. Starting with JDK 8, it is being removed. The JDK Enhancement Proposal says "remove", but it's more like it's being moved to native memory outside the Java heap.

The goal as mentioned in the JEP is "to remove the need to tune the size of the permanent generation."

The beginnings of the change are already in JDK7, with interned strings being no longer part of the PermGen, but of the main Java heap - http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6962931

There's a summary of changes in the OpenJDK mailing list
http://mail.openjdk.java.net/pipermail/hotspot-dev/2012-September/006679.html

What does this mean for Java developers? Well, at least the following
- There will no longer be any need to set the size of the PermGen space with JVM startup parameters.
- The class metadata previously stored in the PermGen will now be stored in a native memory space called "Metaspace".
- The default maximum size of the metaspace will be limited only by the available memory on the machine. However, it can be limited by using the MaxMetaspaceSize option.
- It's not entirely clear yet as to how debugging tools like jmap and jhat will change, if at all, as a result of this, or if it will bring in more complications.

And of course, if you have classloader-leaks-based OutOfMemory problems in your application, they are not magically going away. They will just be transferred to another space with more memory than is allocated to your JVM now.

I could not find any reference to the origin of the term "metaspaces" except in this paper (PDF) - http://www.ssw.uni-linz.ac.at/General/Staff/TS/optimized_class_metadata_memory_management_in_a_jvm.draft.pdf
Understanding it requires familiarity with the JVM's memory management strategies - which I don't have currently - so it's on my to-read list for now.

Saturday 20 October 2012

Lessons learned while managing technical operations

 for a cloud-based SaaS product, which might be useful to you if you're doing the same.

  • There is no substitute for knowing your fundamentals. Whatever you’re managing - your own datacenter or a suite of apps on a public cloud - you have to know your Operating Systems, your Computer Networking, your Linux, your VMs.
  • Know your tools. Find out what tools you need to monitor, maintain and debug your systems. Know how they work, keep up with updates and play with them often. It will save you time when the crisis hits.
  • Know one editor and know it well - vim, emacs or another. Know all the common shortcuts, the complicated copy-paste routines, the tips and tricks - in times of crisis, every second counts.
  • Learn a little every day. Share what you learn even if you think nobody’s listening. Soon you’ll find like minded people you can share ideas with.
  • Visibility to other teams of what you’re doing is very important. Graph it, present it, blog and talk about it.
  • Try to fill your team with the right people. The best people in technical operations have an eye for detail but do not lose sight of the big picture. They are good split-second decision makers and are experts at prioritizing in times of crisis. And of course, they know their stuff or are smart enough to figure it out if they don’t.
  • Know your industry. Study what others are doing, and why.
  • Keep up to date. Know what is new in your field - subscribe to the best newsletters, RSS feeds, podcasts and conferences. There is a lot of noise, so take the time to sift out the useful parts, adopt what is good for your operations and forget the rest.
  • Keep an open mind. Fads will come and go, and old ideas will be repackaged and sold with a new coating every few years. Whatever the case, keep up with trends - they always have something to teach.
  • Know your organization’s business. Interface and build relationships with all teams. If you cut away the trappings of the DevOps movement, the most important point that remains is collaboration. How you achieve it depends on you.

Friday 12 October 2012

DevOps Resources

I have been following the DevOps "movement" since its inception. Like any cultural meme that has value, it has led to thousands of blogs, podcasts and now books on topics related to it, directly and indirectly.

I've added a page on my blog linking to some podcasts that I have been following (on and off for some of them), which might be of interest to somebody working in technical operations, infrastructure, system administration, or managing and architecting a cloud-hosted product. It's linked from the top of my blog header - and also from here.

Wednesday 10 October 2012

Multiuser SFTP server setup - the solution

I had to set up an SFTP server on an EC2 instance recently, with multiple users chroot-ed into their own directories (with access to only those directories), and a different set of ssh-enabled users, with key based authentication for both sftp and ssh.

My first instinct was to do a Google search. Many links came up, none of which solved the complete problem. Some of them did not work (different Linux distro/version) and some ended up disabling ssh when I got sftp working.

I finally found this blog post -

http://blog.famzah.net/2011/02/03/secure-chroot-remote-file-access-via-sftp-and-ssh/

It's the only set of instructions that actually worked, with all the constraints mentioned above.
For the record, the OS was Ubuntu 12.04 LTS. An additional step you need to take on this OS is to disable AppArmor, or ssh stops working after a reboot. I am not a Linux wizard, so I don't know yet why this happens.

On a related note, it turns out that a common mistake many people make is confusing FTP over SSL/TLS with SFTP. FTP over SSL is just FTP over a secure connection, while SFTP is a completely different protocol, with the file transfer happening over an ssh connection.
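To make the distinction concrete, here is a minimal SFTP client sketch in Python using the paramiko library (nothing from my actual setup - the host, user and key path are placeholders). The point to notice is that the client first authenticates over SSH and then opens the SFTP subsystem on that same connection:

import paramiko

HOST = "sftp.example.com"          # placeholder
USER = "uploaduser"                # placeholder
KEY_PATH = "/home/me/.ssh/id_rsa"  # placeholder

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Authenticate over SSH with key based auth...
client.connect(HOST, username=USER, key_filename=KEY_PATH)

# ...then open the SFTP subsystem on the same SSH connection.
sftp = client.open_sftp()
sftp.put("report.csv", "incoming/report.csv")  # upload into the chroot-ed directory
sftp.close()
client.close()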

Tuesday 22 May 2012

Visualizing EC2

I hacked up a small application over the weekend to visualize EC2 instances at one place. Python/boto/web.py does the work of retrieving the data and JS (GraphDracula) displays it.

The need for this arose at work - where we use EC2 to host our infrastructure. The original intention behind it was to have an easier way to view things like
  • Which availability zone has which instances
  • Open ports between security groups
  • Which security group has which instances
  • and so on
Not all of this has been done yet, but it will be. The kind of data retrieval involved is sketched below.
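For a rough idea of the data-gathering side, here's a minimal sketch using boto, the library the app is built on. The region name is a placeholder, and the grouping is simplified compared to what ec2viz actually does:

import boto.ec2
from collections import defaultdict

# Region is a placeholder; credentials come from the environment or ~/.boto.
conn = boto.ec2.connect_to_region("us-east-1")

by_zone = defaultdict(list)    # availability zone -> instance ids
by_group = defaultdict(list)   # security group name -> instance ids

for reservation in conn.get_all_instances():
    for instance in reservation.instances:
        by_zone[instance.placement].append(instance.id)
        for group in instance.groups:
            by_group[group.name].append(instance.id)

for zone, ids in sorted(by_zone.items()):
    print("%s: %s" % (zone, ", ".join(ids)))
for name, ids in sorted(by_group.items()):
    print("sg %s: %s" % (name, ", ".join(ids)))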
Here it is on Github - https://github.com/talonx/ec2viz