Minutes of October 11, 2007 ITAC-NI Meeting
Link to ACTION ITEMS from meeting

AGENDA:

CALL TO ORDER:
This meeting was moved from its previous location in Dauer Hall in order to accommodate, on a trial basis, a proposal to videoconference, stream, and record our meetings for future playback. It was held in CSE E507 on Thursday, October 11th, at 1 PM. The meeting was run by our new ITAC-NI chairman, Dan Miller, Network Coordinator of CNS Network Services, and called to order just a bit late in order to give everyone time to arrive.

ATTENDEES:
Fourteen people attended this meeting locally. Possibly due to the late arrangements, there were no attendees via Polycom videoconference, though one member had mentioned intending to try that route.

Ten members were present: Dan Cromer, Erik Deumens, Tim Fitzpatrick, Craig Gorme, Shawn Lander, Steve Lasley, Chris Leopold, Tom Livoti, Dan Miller and Handsford (Ty) Tyler.

Three members were absent: John Sabin, Stephen Kostewicz and Clint Collins.

Four visitors were present as well: Charles Benjamin, Chris Griffin, Marcus Morgan and Steve Pritz.

Viewing the recording
You may view the recording via the web at http://128.227.156.84:7734. You will need to click on the "Top-level folder" link, then the "watch" link next to the "ITAC Meeting 10/11/07" item. Cross-platform access may not be available; on the Windows platform you will have to install the Codian codec.

Approve prior minutes
No corrections were offered, and the minutes were approved without comment.

Request for secretary to take meeting minutes
Steve Lasley volunteered to perform this role and was thanked by Dan Miller.

Videoconferencing the ITAC-NI meetings
The committee discussed our use of videoconferencing. There was interest in continuing our trial at least one more month so that access could be better advertised. The next question concerned exactly how widely we wished to do that. Craig asked if this meeting fell under the Sunshine Law.
Ty replied that, since this committee did not regulate policy but rather held only an advisory role, rules regarding meeting announcements did not apply. However, Ty did believe it was understood in our charter and in prior discussions that anyone may attend these meetings, though only the official members may vote. Tim asked Dan Miller how the current meeting had been advertised. Dan replied that the announcement went only to the ITAC-NI-L list and apologized for the short notice.

Steve Lasley mentioned there were two avenues for live access: the use of a remote Polycom system would allow direct interaction, while the videoconference could be viewed non-interactively via the web. It was suggested that the former might be reserved for members and those specifically invited by members, while non-interactive access could be more broadly advertised. Steve Lasley stated he had requested the videoconferencing trial not so much to allow committee members to participate remotely (though it was hoped that this might prove useful for that purpose); rather, Steve wanted to broaden access to a wider audience at UF and around the state. He mentioned that IFAS has many IT people across the state who may have interest and input into the matters we discuss here. Craig added that the HSC staff is also widely dispersed; while many might not be interested, Craig saw value in attempting to attract wider interest and in encouraging others to become more involved.

After some discussion, it was decided that for our next meeting interactive access would be advertised via ITAC-NI-L only, while a link for viewing the meeting via the web would be published more broadly to the Net-Managers-L list. We will also look into the possibility of password-protecting interactive participation.

Steve Pritz, who chairs the ITAC-DI committee, mentioned that Marc Hoit has been encouraging interaction among the various ITAC subcommittees.
He suggested that the chairs of other such committees might be provided the password for interactive access as well, should that become available. Steve noted that the various lists originally had been set up so that subcommittee chairs were included on the lists of the other subcommittees as well; that is how he had heard of today's meeting. Dan Miller mentioned that he was not yet on those other lists. Steve said he would check with all the various chairs to ensure they were getting e-mail from each other's lists. He would then coordinate with Ann Goodson to find out who is in charge of each of those lists and get things corrected.

Reclamation of underutilized IPv4 space
Marcus Morgan provided a "State of the IP Address Space" address.

Current IP address status
UF utilizes three main blocks of space:

The 128.227 /16 space: as of yesterday, this address space had roughly 28,000 (27,734) active hosts within it. That means this space is about 43% utilized, and the potential for reclaiming a considerable number of addresses is great.

The 158.178 /16 space: this range, used primarily within the Health Science Center and Shands, has roughly 20,000 active hosts. Space here is about 31% utilized. Reclamation for broader UF usage from this space would require higher-level coordination and is outside the scope of the current project.

The 10 /8 space: our range of private numbers has roughly 70,000 active hosts. Capacity here greatly exceeds anticipated demands.

Marcus explained that these numbers were obtained from NetFlow data, which is made available at the routers and forwarded to a collection point for monitoring and analysis. A NetFlow flow record consists of a pair of network addresses, a common protocol and an amount of data transferred. Since a router adjacent to a particular subnet should contain ARP entries for all devices on that subnet, dumps of those can be, and have been, used to validate the accuracy of our NetFlow counts.
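The utilization percentages above follow directly from the block sizes. Here is a quick sketch using Python's standard ipaddress module; the host counts are the figures from the minutes, and the /16 prefixes are those discussed during the meeting:

```python
import ipaddress

# Active-host counts reported from the NetFlow analysis (October 2007).
active_hosts = {
    "128.227.0.0/16": 27_734,  # main campus block
    "158.178.0.0/16": 20_000,  # HSC/Shands block (approximate)
}

for prefix, active in active_hosts.items():
    total = ipaddress.ip_network(prefix).num_addresses  # 65,536 for a /16
    print(f"{prefix}: {active:,}/{total:,} active ({active / total:.1%} utilized)")
```

(27,734 of 65,536 works out to about 42.3%, matching the roughly 43% figure above to within rounding; 20,000 of 65,536 is about 30.5%, matching the 31% figure.)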
In the past we might have run something like NMap to make these determinations, but that currently provides less useful information due to the existence of firewalls; also, we would be likely to alarm some people if we simply scanned their subnets.

The NAT Pool
Many hosts, particularly within the 128.227 /16 space, have migrated to private IP on the 10 /8 subnet. This is an appropriate thing to do, but the consequence is that we have a very large NAT pool. Network Address Translation is what allows our private addresses to communicate with the outside world. Currently we have 64 /24 subnets allocating roughly 16,000 addresses to our NAT pool. During the day, utilization of that pool runs quite high and is monitored very closely so we can respond should a critical threshold be exceeded.

Marcus also maintains a pool of "free space" which is mostly used to increase the NAT pool as necessary, typically at the start of the Fall term. Last year we added about 12 of these /24 subnets (roughly 3,000 addresses), and so far this year we have added about 10 more. Consequently, the NAT pool has grown rather rapidly over the last couple of years, and we currently have only about 7 of the /24 subnets left in the free pool. This indicates that we have enough addresses to make it through this coming Spring, but we certainly won't make it through next Fall unless some addresses are recovered for our free pool. The free pool is used to provide addresses for new departments as well as to increase the NAT pool.

Address Recovery
Marcus thanked IFAS for the recent voluntary reorganization of their space, moving most addresses from the public to the private subnet. This freed around 2,000 addresses, which have already been applied to the NAT pool. Marcus is currently looking at every subnet allocation, particularly within the 128.227 /16 space, and trying to identify addresses which may be recovered for the free pool.
This may be handled either by moving addresses to a separate, smaller subnet before recovering the old one, or by shrinking a subnet and recovering addresses off the top. Whatever methods are used, there will be renumbering involved, and shrinking the allocations has DNS and netmask implications. Consequently, a lot of coordination and effort will be required. The good news is that our current utilization tells us there are a lot of unused addresses available. Marcus will soon begin working with departments in order to start recovering those.

The Long-Term
Steve Pritz asked if our current address space represented a real limit, and Marcus responded that it did. We will not be able to request another /16 chunk under IPv4 and must be frugal with our current allocation until IPv6 is mature enough to carry us into the future.
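The NAT-pool figures quoted above are straightforward subnet arithmetic. A minimal sketch, with all counts taken from the minutes:

```python
# Each /24 subnet contributes 2^(32-24) = 256 addresses.
per_24 = 2 ** (32 - 24)

nat_pool  = 64 * per_24  # 64 /24s -> 16,384 ("roughly 16,000") NAT addresses
last_year = 12 * per_24  # ~12 /24s added last year ("roughly 3,000" addresses)
this_year = 10 * per_24  # ~10 more /24s added so far this year
free_pool =  7 * per_24  # ~7 /24s remaining in the free pool

print(nat_pool, last_year, this_year, free_pool)  # 16384 3072 2560 1792
```

At that growth rate (roughly ten /24s consumed per Fall term), the seven remaining /24s are indeed about one year of runway, which is the conclusion Marcus drew.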
Charles Benjamin asked what progress UF has made with regards to IPv6. Dan Miller responded that there hasn't been a great deal of progress in recent years. The core network has been upgraded so it can support IPv6 natively, but there is still the "what does that buy you?" issue. We have discussed internally the need to restart those design discussions in earnest.
Dan Miller then raised the question to Marcus, "How long can we continue under the current model?" Marcus believed we very likely can sustain things 3-4 years. He thought one pressure we might feel before long, however, is a need to communicate IPv6 natively with outside groups who may be moving along more rapidly than we are. Chris Griffin related that FLR, one of UF's providers, has applied to ARIN for an IPv6 /32 network. While this may sound pretty small in IPv4 terms, it is actually extremely large. FLR will in turn allocate space to each of its core members, UFL included. That will become UF's official IPv6 space--or at least one of them. Chris Griffin mentioned that one of the big issues with IPv6 is that there are several techniques which people are advocating for multi-homing--one of which is that you would get addresses from each of your providers and your systems would actually have multiple addresses. Consequently, it is unclear at this time how many IPv6 ranges UF will have.
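To illustrate why a /32, which sounds tiny in IPv4 terms, is enormous in IPv6: an IPv4 /32 is a single address, while an IPv6 /32 leaves 96 bits of address space. A quick sketch of the arithmetic:

```python
# An IPv4 /32 identifies exactly one address (no host bits remain).
assert 2 ** (32 - 32) == 1

# An IPv6 /32 leaves 96 host bits: about 7.9e28 addresses, or
# 65,536 /48 end-site allocations, each with 65,536 /64 subnets.
addresses = 2 ** (128 - 32)
sites_48 = 2 ** (48 - 32)
subnets_64_per_site = 2 ** (64 - 48)
print(addresses, sites_48, subnets_64_per_site)
```

So a single /32 allocation could give every FLR member institution thousands of /48 site prefixes, each itself far larger than UF's entire current IPv4 holding.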
In terms of IPv6 preparedness, our core is ready. As soon as we get the allocation from FLR, we can IPv6-enable our backbone. Chris then assumes we will roll out IPv6 as a production service. Outside of the Pacific Rim and Africa, however, there is not all that much IPv6 out there yet. Charles Benjamin mentioned that our federal government has mandated that all their backbones be IPv6-capable by 2008. Chris Griffin said that the Navy has been leading that effort. This does not mean that IPv4 will go away for them, however, as they have provisions for legacy support. Chris Griffin said that IPv6 will be very challenging. DNS becomes extraordinarily important in an IPv6 environment; the addressing is not very "human friendly". Craig asked about Cox peering, since their network is not IPv6-capable. Chris Griffin responded that there are a number of methods for co-existence of IPv4 and IPv6. Hosts may be dual-homed, having both IPv4 and IPv6 addresses. You can have what is called IPv4-compatible addressing, where your devices utilize IPv6 and some intelligent edge equipment does translation to allow you to talk to IPv4--similar to NAT. There are other methods which allow IPv6 machines to understand when some connecting host is IPv4 and to handle it in a way the remote host can understand. There are many techniques, but this does not just relate to Cox; we will have to deal with those for all of our peers. Chris Griffin admitted that there is not a real clear view currently of what our IPv6/4 enterprise network would look like. The IPv6 standards body has actually just finished its work, so the base for IPv6 is now in place. Now they have to address all the shortcomings of the standard. Chris believes that we will see more rapid development now that the base standard is closed, with small working groups solving specific issues. Hopefully, some reasonable technique will be presented for the multi-homing issue.
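One of the coexistence mechanisms mentioned above embeds an IPv4 address inside an IPv6 one. As a small illustration using Python's ipaddress module (the address shown is simply the recording server's IP from these minutes, reused as an example):

```python
import ipaddress

# An IPv4-mapped IPv6 address (::ffff:a.b.c.d) carries a v4 address in its
# low 32 bits, letting dual-stack software represent IPv4 peers uniformly.
mapped = ipaddress.IPv6Address("::ffff:128.227.156.84")
print(mapped.ipv4_mapped)  # 128.227.156.84
```

This is only one of the transition techniques Chris alluded to; translation at "intelligent edge equipment" (NAT-like gateways) and dual-homed hosts are separate mechanisms.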
Chris Griffin mentioned that MS Vista currently has IPv6 tunneling capabilities, and there may be a little IPv6 traffic going on right now via that method. Dan Miller attempted to summarize by saying that the topic of IPv6 is heating up and that it will likely be a topic of further discussion at future meetings. Chris Griffin said that it has certainly played a role in their purchasing decisions, making sure that all their core equipment can handle IPv6 (in hardware and not just via software). They have done some pretty good testing of that, though there are some supplementary systems which they haven't tested out for IPv6 yet--the firewall is one. Marcus mentioned that his recovery project certainly cannot wait on IPv6. He also noted that there are many details to be worked out on IPv6, including routing and aggregation issues; if the aggregation issue is not resolved, the size of routing tables will grow out of hand.

Committee position statement
Dan Cromer proposed that our committee draft a support statement for the IP address recovery project, backing the efficient use of our IPv4 address space on campus. Charles Benjamin mentioned having been in contact with Ryan Vaughn about moving to private IP from public. Chris Griffin mentioned that conversion to private IP is also being addressed by the wallplate project as new buildings are added under its support. Steve Pritz wanted to reconcile the 3-4 years of breathing room we may have under our current IPv4 model with the concern that moving to IPv6 is going to take quite some time. Dan Miller replied that it is our hope that those two would meet somewhere a few years down the road. Our situation is not unique, and there are other institutions having similar problems. Chris Griffin assured us that our network would be ready to work with whatever solutions were developed on the broader stage. The real issue is the deployment of content via IPv6.
We can implement a network to utilize IPv6, but there has to be somewhere to go with it. Chris Leopold asked if we could buy any time by reducing the NAT lease time, but Chris Griffin replied that this is already at 10 minutes, which is razor thin, especially for UDP traffic. Craig asked Marcus if he has talked with Shands regarding what they are doing with their 158.178 /16 address space. Marcus replied that somebody (certainly not him) may eventually have to look into that, but he is currently focusing only on the 128.227 /16 space for his recovery efforts. Chris Leopold relayed a rumor that Shands might be moving away from UF's network altogether. Dan Miller believed that this rumor may have resulted from confusion about Shands launching a separate Internet service for their patients/public through Cox. Tom stated that there are no plans for Shands to move away from the UF network for the foreseeable future.

Dan Miller returned us to Dan Cromer's proposal that the committee endorse the efficient use of IPv4 space on campus, stating that departments should not hoard address space and should be cooperative when approached regarding address recovery. Tim responded that it would be useful to have a multi-year game plan for reclaiming the space, and that we should draft a short policy/directive/position statement about that and bring it all back here. All agreed. Tom mentioned that it would be nice to have a list of what is considered approved networking hardware with regard to IPv6 as well. He hadn't been aware of any significant steps toward implementing IPv6 previously, and would like to be sure they stay in step with any plans which UF net-services might make.

Network Edge Protection and 802.1x
Dan Miller introduced this topic by saying that CNS, as part of managing the edge networks, has been promoting, researching and field-testing new technologies for increasing network availability.
For our purposes, he has grouped those technologies here under the term "Network Edge Protection". 802.1x is a method of network authentication which is related to these topics. Even though these things have been deployed in a number of buildings, the project is still in its preliminary stages. Its purpose is to maintain the availability of building and core networks and to identify problems which otherwise might go unnoticed for a time. They have built this into their design standards, and the deployment began in wallplate buildings in the Fall of 2006. Last month it was decided that, while there are still some issues outstanding, we now have a large enough base, and further deployment has been paused. Dan then handed off to Chris Griffin for the technical overview.

Chris Griffin explained that network edge protection methods fall into several different categories: Port Security, Storm Control, DHCP Snooping and 802.1x.

Port Security -- to prevent loops and abuse
Port security is basically a means for limiting the number of MAC addresses which an individual switch port will learn. If the number of allowed addresses is exceeded, the port goes through a shut-down state which will block that port. The current design sets this number at 16 MAC addresses, and the port is shut down for 5 minutes. The number 16 was an evolutionary step from an original setting of 3; 3 is necessary with VoIP phones because they use a separate MAC when registering (thus using two), and a computer connected via the phone would account for the third. The increase was made in order to avoid false positives and their concomitant pain. Originally, the plan was to block the connection of any device which could function as a switch--as well as to protect the local network from any loops not detectable via the Spanning-Tree Protocol.
With the advent of inexpensive workgroup switches which do not handle that protocol, it is very easy for one of those devices to be configured in a way which would cause a loop that cannot be detected--allowing something as simple as plugging in a cable incorrectly to wipe out an entire local network. This is especially concerning now that we have VoIP phone service which is dependent on those same networks.

As a side effect, which is both a good and a bad thing, port security also does MAC-level anti-spoofing. This means that if you attach a device whose MAC address is already known by the switch, it will block the port. Chris believes this feature is likely due to the fact that port security was originally designed as a method of preventing spoofing of the router, or of any other significant device, at the MAC level. The feature, however, has proved problematic in two regards.

When asked about potential problems with servers running multiple virtual machines (which in bridged mode would mean multiple MACs per port), Chris Griffin pointed out that these measures are mainly targeted at workstations and would not generally be activated on ports which connect to servers. They are not too interested in doing a whole lot of enforcement on server ports, on the assumption that those ports are connected to well-maintained, well-managed systems. This is for the edge networks, where there are lots of people coming and going and trying to do many interesting things, and is an attempt to provide some stability for that environment which otherwise wouldn't be there.

Someone asked if server room switches and cards were segregated from workgroup switches and cards. Chris responded that they typically try to identify switches that mainly contain servers, but still want to be able to provide port-by-port control. Identifying which ports serve which roles, and maintaining that over time, is a remaining challenge.
We need to find those ports delegated to servers, document those, and provide an easy method for subnet managers to change that--preferably in an automated fashion.
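On Cisco gear of that era, the port-security behavior described above (a 16-MAC limit with a 5-minute recovery) would be expressed with configuration along these lines. This is only an illustrative IOS-style sketch with an assumed interface name, not CNS's actual configuration:

```
! Illustrative sketch only -- not the deployed CNS configuration.
interface FastEthernet0/1
 switchport mode access
 switchport port-security
 switchport port-security maximum 16
 switchport port-security violation shutdown
!
! Re-enable ports shut down by a violation after 300 seconds (5 minutes).
errdisable recovery cause psecure-violation
errdisable recovery interval 300
```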
The question was asked whether this 16 MAC addresses/port setting could be tuned in the future. Chris responded that, while it can be tuned, the 16 MACs/port setting is intended only to catch a severe situation where some external device was cabled incorrectly so as to cause a loop. We do monitor the network closely and log port accesses, however; data mining of MAC usage per port can be used to address potential AUP issues regarding "Network Infrastructure/Routing". Perhaps an e-mail could be sent out to the subnet manager explaining that there are too many MAC addresses on a port and asking them to please look at resolving the issue.

Chris Griffin later stated that, along with Port Security, there are a whole host of features which they have implemented on the network and with which they have had zero issues. These include BPDU Guard and Loop Guard. BPDU Guard looks for bridge protocol data units (BPDUs), which you should never have on ports connected to a single host, and shuts down the port if those are detected. This guards against someone adding a switch and extending the network. Loop Guard protects the switch so that, should it get so overwhelmed as to stop sending Spanning-Tree messages, it will block ports that might potentially unblock otherwise.

Storm Control -- to protect against faulty equipment and compromised hosts
Storm control is another method by which we can prevent loops, due to the fact that those generally create packet storms. Its primary focus, however, is to protect networks against faulty equipment and compromised hosts which may flood the network. This is mainly focused on preserving a building's local network. We currently limit broadcasts to 5,000 packets per second (pps). Multicasts and unicasts are limited to 75,000 pps. This may seem high, but storm control works by taking snapshots of the pps every second. Because of the way packets are scheduled into the switch, there can be some skew in how high the pps ratings actually are.
We have seen instances where the systems don't generate rates higher than 50,000 pps, but for one second the switch thinks the rate is around 90,000 pps. Unfortunately, there is currently no way to tune the interval used; we would like to extend that out to 5 or even 10 seconds, because we are concerned with sustained bursts, especially in the broadcasts. As a potential problem, we have found that the multicast limits are regularly triggered by Ghost or other imaging (and likewise for unicast, if you happen to use that for imaging). Yet another issue involves live VM migration: migrating the system memory also emits a very high number of pps, which can look like a storm. Chris emphasized that, as with Port Security, Storm Control would be focused on edge devices rather than servers. When limits are exceeded, a 15-minute port shut-down is triggered. The shutdown duration can be tuned, but only on a switch-wide basis--not per port.

DHCP Snooping -- to protect against rogue DHCP servers
DHCP snooping is a general port security mechanism which is not technologically related to Cisco Port Security. Currently, as a default in all wallplate buildings, we talk to the administrators to find out which ports may have DHCP servers and set those to "trusted"; all other ports are "non-trusted". This means that if somebody improperly attaches an inexpensive non-managed router or access point to the network and turns on its DHCP server, that port will be blocked and it won't be able to affect the rest of the network by handing out incorrect addresses. This is generally a problem these days primarily in the fraternities and sororities. Since we have the fraternities and sororities segregated to their own Bluesocket boxes, the scope is limited; however, an incident would still affect all Greek houses, and the exact location of the problem can be very difficult to trace. DHCP Snooping will be highly effective.
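The storm-control thresholds and the DHCP-snooping trust model described above might look roughly like the following in IOS-style syntax. This is an illustrative sketch with assumed interface and VLAN numbers, not the actual configuration, and pps-based thresholds in particular vary by platform and software version:

```
! Illustrative sketch only; interface and VLAN numbers are assumed.
interface FastEthernet0/1
 storm-control broadcast level pps 5k
 storm-control multicast level pps 75k
 storm-control action shutdown
!
ip dhcp snooping
ip dhcp snooping vlan 100
!
interface GigabitEthernet0/48
 ! Only explicitly trusted ports may answer DHCP requests.
 ip dhcp snooping trust
```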
The only problem with this which we have run into is when someone has moved a DHCP server and not told them. So there is a bit of a challenge in maintaining the documentation for which ports are DHCP server ports and making sure the subnet admin notifies us if servers are moved.

802.1x -- to provide authentication for network connection
CNS has been testing 802.1x for about a year now. 802.1x is a technology by which your system authenticates itself to the network. We currently have not implemented this, but our implementation would be based on Gatorlink. Before your system would be allowed to access the network, you would have to authenticate via your credentials. There are several advantages to this. It would be supported under either wireless or wired connections. Also, if you were to do native 802.1x authentication under wireless, you would bypass the Bluesocket device--which we hope might increase the longevity of our Bluesocket devices.
Craig raised an issue they have at HSC with their 802.1x wireless network. Unless UFAD-joined Windows machines have third-party 802.1x supplicant software, the workstation can't connect back to the domain until after the 802.1x network connection is formed. This could potentially be a very difficult and/or expensive problem to solve. Chris Griffin responded that there is a solution for avoiding a third-party supplicant, which he has been successful with on wired connections but has yet to try via wireless (though he understands that it is supported). You can use what is called a guest VLAN. If the switch does not hear a supplicant interrogation within a certain period of time, it can switch you onto a guest VLAN, which would basically be the Bluesocket. So if you plug into what is supposed to be an authenticated port, but your system does not interrogate the network as an 802.1x-capable device, it will get the Bluesocket signon screen. Unfortunately, that does not solve the problem of performing 802.1x authentication prior to logon for Windows machines joined to UFAD. What is needed is something along the lines of the Windows Cisco VPN client's "Start VPN before logon". Chris Griffin responded that they are still working on that and other authentication issues. One thing they have looked at is the ability to perform automated pushes of patches to systems which may not be authenticated to the network. They will finish up their lab tests and then probably bring in a test group with live machines to identify and hopefully solve the various issues which arise. Craig asked if they are looking at providing vulnerability mitigation for connecting clients. This would entail a step beyond vulnerability assessment and could potentially include either self- or auto-remediation methods. Chris responded that he was aware the security group was interested in certain aspects of that, but that this was not a part of their basic 802.1x planning.
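The guest-VLAN fallback Chris described would, on IOS of that era, look something like the following sketch. The RADIUS setup, interface, and VLAN numbers are all assumed for illustration only:

```
! Illustrative sketch only; names and numbers are assumed.
aaa new-model
aaa authentication dot1x default group radius
dot1x system-auth-control
!
interface FastEthernet0/1
 switchport mode access
 dot1x port-control auto
 ! Hosts that never speak 802.1x fall back to VLAN 200,
 ! which would lead to the Bluesocket signon screen.
 dot1x guest-vlan 200
```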
Network Protection Summary
Dan Miller stated that this was just a preliminary overview of emerging technologies and that we would be revisiting these issues at future meetings--possibly in a couple of months or more. CNS is still learning about these and other solutions, and vendors continue to develop them and provide fixes for various issues. Dan Miller polled the rest of the committee to see what experience they might have with such technologies. Tom mentioned that HealthNet does BPDU blocking and does not allow "hublets"--they confiscate them when they find them. They are also using Loop Guard, but not 802.1x. Most of what HealthNet does in this regard involves scanning and talking to individuals. Tom said that, while one can try to protect the network, there is really no way to do that completely; it has to involve education as well. He imagined that this was a much simpler job for HealthNet than it would be for UF Net Services across over 700 buildings. Charles Benjamin shared the fact that Housing is using 802.1x on both wired and wireless. He described that as "an adventure".

Preview of UF branding project
Christine Schoaff had provided Dan Miller a handout on a proposed UFL.EDU-to-UF.EDU migration project. It was decided at a very high level that we ought to move from UFL.EDU to UF.EDU as a branding issue. The project outline is being pushed out to all the technical groups on campus for feedback on exactly what is required to do this. Marcus noted that this is a "perilous path" and that we should approach the matter with all due caution. He mentioned that he intends to respond to Christine that even much smaller domain changes than this can cause a lot of trouble. The proposed project would be extremely difficult to manage due to the span of time over which UFL.EDU has become embedded in the fabric of UF's business processes.
Marcus recognized that this is a PR and branding issue, and he felt that there might be some specific actions which could be done to assist things in this regard. He felt, however, that attempting an overall migration from UFL.EDU to UF.EDU would be "suicidal". Erik suggested that we might add a virtual layer around portions of our network presence so that outside people might reach various resources via the UF.EDU specification. He felt that making a fundamental change at the UF level would not be feasible, however. Tim pointed out that it is not Christine that is proposing this project; rather, she has been directed to come up with an evaluation of how to get from here to there. Tim believed that people are thinking of having the two addresses coexist for many years. Tim also thought that the initial question was whether it could be arranged that if someone referred to UF.EDU (via e-mail, web, or whatever) that there would be a translation method that would take that to UFL.EDU. Tim mentioned that he was told much of that could be done very easily and that some of it couldn't; it would be helpful to identify the tough cases and come up with some approach for them. Craig and Shawn both agreed that simple distributed DNS changes could handle much of that via adding whatever.uf.edu as another alias to each of the various DNS authorities for the various domains across campus. That portion is easy, but Craig felt it important to make sure changes stay that easy. Note: Christine had shared with Steve Lasley after the meeting via e-mail that she would put it on their list to add this project to the http://www.webadmin.ufl.edu/projects page. She mentioned hoping that there will be many folks who contribute to the document and end up as authors. She also stated that this project will need input from and collaboration among many, many groups at UF. 
Finally, she wanted people to know that if there's a more central web location where people would like to see this project tracked, that would be fine with her.

Action Items

Next Meeting
The next regular meeting is tentatively scheduled for Thursday, November 8th.
last edited 18 October 2007 by Steve Lasley