[labporn] A fine mist forms in the lab...
A mist seems to be condensing in my apartment. Could it be? A cloud? Yes?! A cloud is forming!

I've been working for a few weeks now trying to set up an OpenStack cloud, to prove the concept that we can run a virtual lab for people to play with. It's coming along nicely, but it's nowhere near finished yet.

The pictures: http://imgur.com/a/4i4xQ

Be warned, this is a long write up!

The idea of creating an OpenStack deployment for club use has been kicking around for a while. My initial plan for the lab was a virtual environment that's easy to manage and use and can be accessed from anywhere on campus via the campus network. I also have plans for a much less virtualized lab that people can access in person.

But in order to go through with these plans I really need a good understanding of how to implement them effectively and safely. That's where my second datacenter comes into play.

About a month ago I got really lucky and picked up a used hard drive array for a SAN system. This was lucky for a few reasons: I had been trying to implement a SAN just for the sake of learning it, I wanted a SAN as the back-end storage for the OpenStack deployment, and it came with a small rack that now houses the entirety of my OpenStack lab.


So the first hurdle I jumped over was understanding how SANs work and how to make equipment from different vendors play nicely together. Getting the disk array up and running was a snap; getting the storage network running correctly through my storage switches was not.

If you don't know anything about SANs: SAN stands for Storage Area Network. A SAN is much like a normal Ethernet network, but it links storage systems together so they can grow quickly and efficiently. SANs are similar in technology to NAS (network attached storage), but there's a key difference: a NAS provides file sharing and remote file storage, while a SAN provides entire volumes (think whole hard drives) to remote systems.
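To make the difference concrete, here's what each looks like from a Linux client (the hostnames and paths are made up):

    # NAS: the server owns the filesystem; you mount a shared directory
    mount -t nfs nas01:/export/share /mnt/share

    # SAN: the array presents a raw block device that shows up like a local disk
    lsblk                    # the new LUN appears as something like /dev/sdb
    mkfs.xfs /dev/sdb        # you create and manage the filesystem yourself
    mount /dev/sdb /mnt/volume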

My initial plan was to set up a somewhat large SAN that could provide entire hard drives to each of my cloud nodes as well as a large volume to the storage node. Setting that up on the disk controller was as easy as configuring a RAID array and chopping smaller volumes out of it. I spent a lot of time researching how to configure the SAN switches, though. They use what are called "zones" to direct traffic to specific hosts; it's almost like routing. Figuring out how to set up the zones from scratch was a serious challenge. Eventually I discovered that you have to add each device's ID to the zone before they can communicate.
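For reference, zoning on a Brocade-style Fibre Channel switch looks roughly like this. This is just a sketch with made-up zone names and WWNs, and other switch vendors use different syntax:

    zonecreate "node1_storage", "10:00:00:00:c9:aa:bb:cc; 50:06:01:60:dd:ee:ff:00"
    cfgcreate "lab_cfg", "node1_storage"     (add the zone to a zoning configuration)
    cfgenable "lab_cfg"                      (activate the configuration)
    cfgsave                                  (persist it across reboots)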

After getting the SAN stuff straightened out, I moved on to getting the physical nodes set up.

OpenStack works by leveraging large amounts of resources spread across a network, with each machine running a specific task. Each computer or server in the stack is called a node and generally has only one specific function, but the cool thing is that some functions can easily run together on the same node.

I started into this mess by trying to find some good books and articles on how to actually get this thing set up. What I found is that OpenStack generally uses two networks: a private network for managing the nodes and services, and a public network used by the services hosted on OpenStack. Normally this is not a problem at all, but with limited hardware it proved to be a slight challenge.

Most of my nodes have multiple network interfaces, which is great, but my networking gear is limited: I have only one router to handle my entire house, plus two old Cisco switches I can dedicate to the stack. With limited hardware, my best option was VLANs. This was actually great because I had been wanting to get more familiar with VLANs but never had the time or the need. I had all the hardware I needed (my router is VLAN capable and so are the Cisco switches), so the only trick was getting everything configured properly.

The configuration on the switches was really easy: first you create the VLAN IDs to define the networks, then you assign ports to those VLANs. Add a trunk line back to the router and I was done, or so I thought. Figuring things out on the router end was another small task. I use a Ubiquiti EdgeRouter X SFP and I love it, and there's a lot of community support for them, but for some reason the VLAN thing just wasn't clicking for me. I had all of the router's ports configured in a virtual switch so they acted like one switch, and on EdgeRouters you have to select an interface for each VLAN to be configured on. It became a mess after that. I tried it on the vSwitch and it worked for a while, except my traffic wasn't being segmented like I wanted. Then I removed one specific interface from the vSwitch and it all went to hell.
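For anyone wanting to follow along, the switch side looks roughly like this in Cisco IOS (the port numbers are just examples, and older Catalysts may also want "switchport trunk encapsulation dot1q" before trunk mode):

    vlan 10
     name MGMT
    vlan 11
     name PUBLIC
    vlan 12
     name SAN
    !
    ! an access port on the management VLAN
    interface FastEthernet0/5
     switchport mode access
     switchport access vlan 10
    !
    ! the trunk line back to the router
    interface FastEthernet0/1
     switchport mode trunk
     switchport trunk allowed vlan 1,10,11,12

And on the EdgeRouter, the vlan-aware vSwitch approach looks something like this (a rough sketch with made-up addresses, not my exact config):

    set interfaces switch switch0 switch-port vlan-aware enable
    set interfaces switch switch0 switch-port interface eth1 vlan vid 10
    set interfaces switch switch0 vif 10 address 192.168.10.1/24
    commit ; save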

Eventually I figured out that the VLAN configuration on one of my Cisco switches was not what I intended. After correcting that, I finally got to a point where I could work with what I had. So the moral of that story is: always double-check your configurations before you go beating your head on the table.
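On the Cisco side, two commands would have saved me most of the head-beating if I had run them sooner:

    show vlan brief          (shows which ports are assigned to which VLAN)
    show interfaces trunk    (shows which trunks are up and what VLANs they carry)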

So with all that crap sorted out, I ended up with four VLANs: VLAN 1 for standard traffic as usual, VLAN 10 for management traffic, VLAN 11 for public traffic, and VLAN 12 for SAN traffic. I'm still working on the firewall rules to get things locked down correctly, though.
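To give an idea of where I'm headed with that: on EdgeOS, firewall rulesets attach to each VLAN interface. A minimal sketch for the management VLAN (the names and rule numbers are mine, and this is nowhere near a finished ruleset):

    set firewall name MGMT_IN default-action drop
    set firewall name MGMT_IN rule 10 action accept
    set firewall name MGMT_IN rule 10 state established enable
    set firewall name MGMT_IN rule 10 state related enable
    set interfaces switch switch0 vif 10 firewall in name MGMT_IN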

With the network figured out, I started loading CentOS 7 Minimal on all my nodes. This was pretty painless except for one node that had video issues: the console would run off the screen so I couldn't see what was going on. It was an easy fix once I realized it was a problem with the console resolution. I changed the GRUB configuration to start with the right resolution, rebuilt the boot configuration, and all was well.
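For anyone who hits the same thing: on CentOS 7 the change goes in /etc/default/grub, then you regenerate the config. Roughly what that looks like (the resolution value is whatever your monitor actually supports):

    # /etc/default/grub
    GRUB_GFXMODE=1024x768
    GRUB_GFXPAYLOAD_LINUX=keep

    # regenerate the config so the change takes effect (BIOS systems)
    grub2-mkconfig -o /boot/grub2/grub.cfg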

Pretty quickly I had five systems with CentOS 7 installed. However, one of them has an issue with its only processor and has become unreliable because of it.

Now that each node had CentOS, I could finally start loading the OpenStack software stack. To make this go much, much easier I turned to Ansible. Ansible is a really cool orchestration tool that runs tasks in parallel across as many machines as you need, which means I can run one command from one of my nodes and the same command runs on all the others. It works really well because it only has to be installed on the system that initiates the commands; the systems actually running them need no agent, so you never have to worry about keeping clients updated. Ansible uses SSH for the remote commands, which is quick, easy, and secure with SSH keys.
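To give a taste of it, here's a minimal inventory and a couple of ad-hoc commands (the hostnames are placeholders for my nodes):

    # /etc/ansible/hosts -- the inventory file
    [openstack]
    controller1
    compute1
    compute2
    storage1

    # check SSH connectivity to every node in the group
    ansible openstack -m ping

    # run the same task on all of them at once (-b for sudo)
    ansible openstack -b -m yum -a "name=ntp state=present"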

There's even a really simple way to exchange SSH keys between Linux systems. Once a key is generated, you simply run "ssh-copy-id user@hostname", enter a valid password for that user, and the key is copied over. Once that's done you can use SSH without a password, easily and securely.
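The whole exchange is just a couple of commands (user and hostname are placeholders):

    ssh-keygen -t rsa -b 4096    # generate a key pair if you don't have one yet
    ssh-copy-id user@node1       # enter the password once; the public key gets installed
    ssh user@node1               # from here on out, key-based login with no password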

So with Ansible set up and talking to all of my nodes, I began the basic setup for OpenStack. I'm currently still in the middle of installing all the needed services, though.

At the moment I'm stuck on an error during one service's installation. OpenStack uses domain names instead of IPs to find its services, which is a problem because I don't have a DNS server set up on any of the cloud networks I'm running. That will be my next hurdle to jump in this process.
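One possible stopgap (just a sketch, with a made-up hostname and address) would be pushing /etc/hosts entries to every node with Ansible instead of standing up a real DNS server right away:

    ansible openstack -b -m lineinfile -a "dest=/etc/hosts line='192.168.10.11 controller1'"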