IT UPDATES @ HIVE13:
We now have a team of people working on IT issues!
Jon Neal, Ian Wilson and I have emerged as the group that has been working on the network and IT infrastructure on a regular basis. If you’d like to be more involved, talk to one of us; there’s almost always something for you to work on if you’re so inclined. There are now Google documents with all network information shared among the three of us. As time permits, configuration, network maps, etc. will migrate to the wiki, but server usernames and passwords will remain privately shared among the “IT Team.” Should you require help with any resource you don’t have access to, ANY of the three of us should have all the information needed to help you.
We have a stable firewall, finally!
9:26PM up 11 days, 7:06, 2 users, load averages: 0.00, 0.00, 0.00
After figuring out that the bge driver included in pfSense has an issue that causes kernel panics, I switched the NICs to a pair of EEPro100s, which work wonderfully. The new firewall is an HP ProLiant DL360 G3 with 4 GB of memory, 2x 18 GB disks (RAID1), and redundant power supplies. It’s running a pretty vanilla pfSense 2.0.1 install at this point. More will be done with this later.
The network numbering has changed.
If you didn’t notice, the hive is now using the 172.16.2.x subnet with a 255.255.254.0 netmask (512 addresses). This was done to increase the number of available IPs in the dynamic address pool as well as to move away from the over-used 192.168.1.x subnet. 172.16.2.50 through 172.16.3.200 (407 IPs) are in the DHCP pool, which leaves about 103 usable addresses (one of them being the firewall’s) for statically configured machines. The printer is now 172.16.3.240, in case you were printing directly to its JetDirect interface previously. If you have any other IP numbering questions, feel free to ask me.
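For the curious, the address math works out like this; a quick sanity check using nothing but Python’s stdlib ipaddress module:

```python
import ipaddress

# The hive's new subnet: 172.16.2.0/23 (netmask 255.255.254.0).
net = ipaddress.ip_network("172.16.2.0/23")
print(net.num_addresses)  # 512 addresses total

# The DHCP pool runs from 172.16.2.50 through 172.16.3.200, inclusive.
pool_start = ipaddress.ip_address("172.16.2.50")
pool_end = ipaddress.ip_address("172.16.3.200")
pool_size = int(pool_end) - int(pool_start) + 1
print(pool_size)  # 407 addresses in the pool

# Usable hosts = total minus the network and broadcast addresses.
usable = net.num_addresses - 2
print(usable - pool_size)  # 103 addresses left outside the pool
```

That 103 includes the firewall’s own IP, so the count free for statically configured member machines is one or two lower in practice.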
The core network is much more sane.
Permanent infrastructure wiring has been moved on top of the FabLab - a 48-port patch panel, SW2 (gig), and SW3 (PoE + 10/100) are all in a small wiring rack. Theoretically, these connections shouldn’t need to be messed with much. Making use of the vertical space reduces clutter IMHO and makes it harder for people to trip on important wires accidentally.
We now have a gigabit backbone comprised of two gigabit switches (one in server rack, one in wiring rack) with multiple GigE connections between switches. All five switches permanently installed in the Hive support VLANs, although this functionality is not yet being used.
Servers have moved.
The server rack on wheels has been relocated to the corner of the hive by the CNC machine against the outside wall. It’s more out of the way there and the noise from the machines is less audible in the meeting area for those that care. (cough Jon cough)
We now have reliable UPSes with good batteries.
All core infrastructure (cable modem, switches 1-3, WAP, firewall, and the other servers in the server cabinet) is on battery backup units. The APC Back-UPS 750 previously used for HiveStor now covers switches 2 and 3, the cable modem, and the WAP (which gets PoE from switch 3). There are two line-isolating 900VA UPSes by the server rack to protect the servers and SW1; each has 8 cells’ worth of battery hooked up. I let both of them run unplugged for 5 minutes under normal load and neither dropped below 93% reported battery life remaining. Automatic UPS monitoring and server shutdown still needs to be implemented.
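On the “still needs to be implemented” front, here’s a minimal sketch of what that monitoring could look like, assuming we run apcupsd and poll `apcaccess status`. The sample output and the 20% shutdown threshold are illustrative assumptions, not readings from our actual units:

```python
# Parse `apcaccess status` style output (KEY : VALUE lines) and decide
# whether it's time to shut servers down. SAMPLE is made-up example data.

SAMPLE = """\
STATUS   : ONLINE
BCHARGE  : 93.0 Percent
TIMELEFT : 28.0 Minutes
"""

def parse_status(text):
    """Turn apcaccess-style output into a dict of field -> value."""
    fields = {}
    for line in text.splitlines():
        key, _, value = line.partition(":")
        if value:
            fields[key.strip()] = value.strip()
    return fields

def should_shut_down(fields, min_charge=20.0):
    """Shut down only if we're off mains AND charge has fallen too far."""
    on_battery = fields.get("STATUS") != "ONLINE"
    charge = float(fields.get("BCHARGE", "0").split()[0])
    return on_battery and charge < min_charge

fields = parse_status(SAMPLE)
print(should_shut_down(fields))  # False: on mains power and at 93%
```

A cron job (or apcupsd’s own hooks) wrapping something like this, plus a call out to `shutdown`, would cover the gap until we do it properly.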
There are more places to use the network where we need them.
Additional jacks and drops were installed to facilitate moving the server closet and for the new WAP. Each of the pallet racks now has a dedicated HP 10/100 network switch installed in it. The rack closest to the meeting area has its switch closest to the pillar in the center of the space. The rack closest to the dirty room has the switch installed closest to the exterior wall. Drops were added to facilitate this. There are also a few GigE drops labelled on the pillar by the meeting space - they have bright yellow cords dangling for your use. More GigE drops can easily be configured, as we’re currently at <50% port utilization on the GigE switches. We’re running low on CAT5 cable for running more drops, but more can be acquired; if you have an area of the space that you feel needs another jack, contact me.
We are using a new WAP (and it’s beefcake)
We are now using an HP Proliant WAP in place of the hodgepodge collection of WAPs previously in use. It supports 802.11a and 802.11g with considerably more powerful radios than the prior WAPs. This WAP supports VLAN tagging for wireless networks. Both hive13int and hivenet networks are served off this WAP (it’s spiffy!), and hivenet can only talk to the MAC address of the default gateway. If you need to access resources on the wired network from wireless, use hive13int!
We have upgraded servers that are available for use.
In addition to the new firewall (which is overkill and a half for what it is doing) we now also have two other new servers:
-hubuntu is a ProLiant DL360 G4 with dual 3.6 GHz dual-core Xeons, 8 GB of RAM, 2x 146 GB SCSI drives (RAID1), and redundant power supplies. It lacks the VT instructions needed to run a modern version of VMware properly, so it is running Ubuntu 12.04 LTS. This machine has been designated to become an LDAP server / domain controller / shell server (more about this later; see “projects” below). If you want an account on it, ask. It’s currently accessible via SSH from the outside world. If you want to run screen and idle on IRC or something, that’s fine. I’m eventually going to set up quota support and limit people to ~2 GB of storage each because of the box’s limited disk space, but for the time being you’re just going to have to behave yourselves.
-schizo is a ProLiant DL360 G5 with dual 2.66 GHz quad-core Xeons, 16 GB of RAM, 4x 73 GB SAS drives (RAID1), and redundant power supplies. It is running the freebie VMware Hypervisor 5.1. I’m currently using it for demoing a security project I’ve been working on at 2600 meetings, but there is PLENTY of capacity for running additional servers. I’m going to teach a VMware basics class sometime during Q1 2013, so you’ll have an opportunity to learn more about this virtualization platform if you so desire. Craig Smith has already expressed interest in using it, along with the snapshot features it supports, for pen testing. If you have something you would like to have running, contact me when you have your OVF ready - it’s there to be used.
-HiveStor now has a gigabit NIC for increased performance. Previously, it was running on an onboard RTL8139, which is among the shittiest 10/100 cards available.
New network cameras
There is a new aimable-zoomable-fancypants IP camera mounted by the wiring rack. We’re working on making more use of all of the cameras and integrating them with the website. Your input on this is desired.
Remote access (!@!)
hive13.blundar.com is a dynamic DNS entry that points to the IP of our firewall at the space. I haven’t had a chance to figure out the DNS for hive13.org, and our firewall supports ZoneEdit (which I use personally), so I set this up as an interim measure. At this point, the only thing available to members is SSH access to hubuntu (port 22). Nag one of us to make you an account if you’re interested in using this. Eventually, VPN access and remote access to the IP cameras may be available if there is interest. Further commentary and suggestions are welcome here.
PROJECTS AND FUTURE DIRECTION
Update network documentation on Wiki.
Obviously, we’re not going to plaster root passwords for every machine in the place but we are going to do our best to update the Wiki to reflect the current state of network wiring, servers, services and direction. Apologies for not having this done yet - at this point it’s done mostly through private documents shared among the IT Team.
Implement LDAP / Kerberos / Domain logons on Hubuntu.
Hubuntu is slated to become a Linux domain controller (via Samba), Kerberos server, and LDAP server. I want to move towards a single-sign-on model for authentication on all machines at the hive, Windows or *nix. Some machines may allow guest logons, but machines attached to valuable equipment (laser, CNC, etc.) that requires training will have mandatory authentication in order to ensure the people using them have been adequately trained. This will also serve as a secondary auditing scheme for pay services such as the laser.
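To make the per-machine authorization idea a little more concrete, here’s one way the directory could be laid out: a user entry per member, plus a group per trained-access machine. This is a pure-Python sketch; the base DN "dc=hive13,dc=org" and the "-operators" group naming convention are assumptions, not the final design:

```python
# Hypothetical DN layout for the planned LDAP tree. Nothing here talks
# to a real directory; it just models the naming scheme.

BASE_DN = "dc=hive13,dc=org"  # assumed base DN, not finalized

def user_dn(username):
    """DN a client would bind as for single sign-on."""
    return f"uid={username},ou=people,{BASE_DN}"

def machine_group_dn(machine):
    """Group whose members are authorized (i.e. trained) on a machine."""
    return f"cn={machine}-operators,ou=groups,{BASE_DN}"

def authorized(username, machine, group_members):
    """Check a (machine group -> member DNs) mapping for this user."""
    return user_dn(username) in group_members.get(machine_group_dn(machine), set())

# Toy data standing in for directory contents:
groups = {machine_group_dn("laser"): {user_dn("alice")}}
print(authorized("alice", "laser", groups))  # True
print(authorized("bob", "laser", groups))    # False: not trained yet
```

The real version would do this lookup via PAM/nss on the machine itself, but the shape of the check - “is this user’s DN in this machine’s trained group?” - is the same, and the group membership doubles as the audit trail for pay services.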
Deploy more useful machines in the space.
Jon and I started on putting a machine together for the electronics testing area. Many of the pieces of equipment we have in that area have digital interfaces that are not currently being used. We want to get the software and drivers necessary to use them installed and configured. Once it is more operational, I’ll be donating a PC based logic analyzer and oscilloscope to further enhance the capabilities of the space.
This is merely one example of what I would like to see more of: pre-configured machines with software and hardware tools already set up and ready to go do some productive stuff. If you see a need for such machines in the space, please bring it to my attention.
Revive and rebuild the "dirty network."
We have a VLAN capable network architecture at this point. We have a VMWare server where virtual networks can be created that will interact with physical networks following similar rules, including honoring VLAN tags for virtual machines. I want to have a “dirty” network that is isolated from the main network where experimentation with penetration testing, exploits, virii/malware reversing, etc. can happen in a controlled environment.
Continue to eliminate useless shit and make more effective use of what we have.
I do not want to let the hive become a dumping ground for useless shit. To this end, I’m trying to implement useful IT solutions at the hive and get rid of useless and outdated shit that is taking up space. From here forwards, any IT related equipment that we will be getting rid of will be announced to the list with a [CTO] PURGE subject preface. If you want any such equipment, you will have two weeks from the time it is announced to remove it from the space, no questions asked. After 2 weeks, any equipment named in a PURGE announcement may be sold, scrapped, recycled or thrown out with no further notice.
Also, the “minimum acceptable hardware” standards will be debated, determined and documented on the wiki. These standards will define the minimum configuration for hardware to be kept around the space. (Exceptions can be made for machines that are running specific pieces of hardware, i.e. glass block, CNC, etc.) The idea here is to have a clear standard to use for evaluating what to do with donated hardware and avoid having a sea of Pentium II-class hardware taking up space without doing anything useful.
As CTO, I’d really like to try to implement technology that helps the hive be a better place to get stuff done. If you think of something that would help, please contact me and let me know. I’m here to make technology at the hive better for everyone, not just me and my ideas.
I’d also really like to thank Ian Wilson, Jon Neal and anyone else that has been helping out keeping the technology at the Hive running. Without your help, things wouldn’t be half as awesome as they are.
Although I am rarely there on Tuesdays, I’m there in spirit!