

Minutes of October 8th, 2009 ITAC-NI Meeting:



    Link to ACTION ITEMS from meeting

    AGENDA:

    1. CNS Wall-Plate update
    2. Approve prior minutes
    3. ITAC-NI committee membership, status and governance
    4. New Data Center
    5. IPv6 update
    6. CNS wireless update

    CALL TO ORDER:

    This meeting was scheduled in CSE E507 at 1:30 pm on Thursday, October 8th and was made available via videoconference with live-streaming and recording for future playback. Prior announcement via the Net-Managers-L list was overlooked this month. The meeting was called to order by ITAC-NI chairman, Dan Miller, Network Coordinator of CNS Network Services.

    ATTENDEES: Thirteen people attended this meeting locally. There were no attendees via Polycom videoconference and no record of how many may have listened to the stream in a web browser via the web interface.

    Ten members were present: Charles Benjamin, Dan Cromer, Erik Deumens, Margaret Fields, Tim Fitzpatrick, Shawn Lander, Steve Lasley, Tom Livoti, Dan Miller, and Handsford (Ty) Tyler.

    Four members were absent: Clint Collins, Craig Gorme, Stephen Kostewicz, and Chris Leopold.

    Three visitors participated as well: Jeff Capehart, Todd Hester, and John Madey.


    Viewing the recording

    You may view the recording via the web at http://128.227.156.84:7734. Currently, you will need to click on the "Top-level folder" link, then the "watch" link next to the "ITAC-NI Meeting_08Oct09_13.15" item. This will likely be moved into the ITAC-NI folder shortly. Cross-platform access may not be available; on the Windows platform you will have to install the Codian codec.

    Audio archive

    An archive of audio from the meeting is available.


    1) CNS Wall-Plate update

    Our last Wall-Plate update was at our May meeting. Todd Hester was on hand to provide another update but had to leave early; consequently, this item was moved to the top of the agenda.

    1-1) Wall-Plate status report

    Status at June 30, 2009

    At the end of the last fiscal year, data port counts were at 21,933 and VoIP telephone counts were at 4,226. 433 problem tickets were handled along with 1,140 requests for changes. Additionally, 226 emergency speaker phones were added to the system.

    July 1st to October 1st

    Data port counts are now at 23,403 and VoIP telephone counts are at 4,800. Norman Hall was just completed; that building included 1,425 data ports and 349 VoIP phones. Physics, New Engineering and Turlington were all completed, with each including over 1,000 ports.

    Active projects

    There are 26 active projects currently covering 8 buildings. Four of those are in the final closeout phase, six are in the deployment phase, seven are in the procurement phase, and nine are in the initial design phase.

    1-2) Observations on progress

    CNS is basically on target

    Todd noted that with 23,000+ active ports CNS is just a tad shy of the original 25,000 port estimate upon which their recurring central funding was based. They are projecting a final data port count of between 30,000 and 35,000 at completion. Projections for the end of this fiscal year are 30,291 data ports and over 6,500 VoIP phones. CNS has managed to figure out ways to do more with less and consequently their budget situation continues to look good.

    Rollout will continue a bit beyond this fiscal year

    Tim Fitzpatrick commented that we began this centrally funded project with approximately 10,000 ports in the old pay-per-port scheme. What was originally envisioned as a three-year project is now expected to spill over somewhat into the next fiscal year. The majority of the next fiscal year will be dedicated to beginning life-cycle replacements of the electronics, however.

    Some units have deferred to later

    CNS had expected to begin refresh replacements of our oldest deployments on July 1st, 2010, but the schedule overrun can be attributed in part to the fact that a number of places deferred to a later date when approached about opting into Wall-Plate. John Madey stated that some locations, like Weil Hall, simply were not ready when first approached, but nobody has flat-out declined. It generally gets more challenging when dealing with buildings which house multiple departments.

    Situation at Fifield

    John Madey mentioned that they had just met with the departments in Fifield Hall. Dan Cromer said that the only question there is who will pay for the work; he hadn't heard back from Dr. Joyce on that yet. Tim responded that CNS is willing to work on the cost aspect and suggested that a three-way split between the departments, IFAS administration and CNS might be negotiated. CNS is very interested in doing what they can to make this work.

    1-3) Questions

    Has there been progress with unified communications and integration with Microsoft Office Communications Server? (Dan Cromer)

    CallManager upgrade must come first

    Dan Cromer mentioned that he would appreciate an update on this matter if any information was available. John Madey responded that they had met with the UF Exchange group a couple of times, but that any real movement on this will probably have to await their planned CallManager Version 7.0 upgrade, which is set for January of 2010. They do plan to run a pilot test beforehand, however.

    Other cost and technical challenges await

    John said the challenge with utilizing OCS with unified communications at remote sites would be the requirement for local mediation servers at each site. The other way to accomplish this is to use a Cisco "plug-in" called Cisco UC Integration for Microsoft Communicator; that way all the call handling is done the way it is currently.

    What is the status of routing inter-campus VoIP traffic over the network rather than phone lines? (Erik Deumens)

    Not for calls between different state universities

    Erik Deumens asked if further progress had been made on this. Tim Fitzpatrick responded that there has been no progress at all on that; it is not something which they anticipate doing in any foreseeable timeframe.

    Possibly for calls within UF but across the state

    John Madey elaborated that some IFAS units are utilizing CallManager Express (the Ft. Lauderdale REC and the Citrus REC in Lake Alfred among them), and CNS is looking into doing that for those sites. That traffic would stay within UF's own system, however, which makes things considerably easier.

    This isn't cost effective for HealthNet

    Ty mentioned that Health Sciences looked into integrating VoIP with campus using a gatekeeper. It turned out that the huge majority of their calls are within their own complex, and they would not see cost savings that would justify further investigation.

    The IFAS situation is unique

    Dan Cromer said that the situation within IFAS is different, and IFAS continues to look at ways to save on long distance charges among its various campuses across the state. Charles Benjamin stated that this can be handled by adding a Cisco product called CUBE (Cisco Unified Border Element) to the routers; it allows a voice gateway to connect to SIP trunks.

    CNS will continue to investigate

    Tom Livoti mentioned that CallManager Express can do this as well, but Dan Miller responded that CNS was more comfortable looking at the higher-end Integrated Services Routers (along the lines of what Charles had mentioned) because of issues of scale. That said, a couple of the IFAS sites do have CallManager Express, and CNS will continue to look into the options.


    2) Approve prior minutes

    No corrections or additions were offered and the minutes were approved without further comment.


    3) ITAC-NI committee membership, status and governance

    3-1) ITAC-NI committee membership change

    Dan Miller noted that Bernard Mair has left the committee and is being replaced by Margaret Fields whose responsibilities include CLASnet.

    3-2) ITAC-NI-L listserv subscription changes

    Dan Miller also mentioned that Sumi Helal had unsubscribed from the list. Dr. Helal does extensive research in wireless and we are sorry to see him go.

    Steve Smittle of the UAA also requested to be removed from our list as he is now focusing more closely on security.

    3-3) Governance

    Dan Miller reported that to-date there has been nothing definite handed down regarding the disposition of this committee. There are discussions going on for what would be our new parent committee, IT Infrastructure. We will have to wait and see how that all plays out, but for now Dan assumes we should proceed as usual with whatever topics we come up with. At some point in the near future we may get a charge from a new committee.


    4) New Data Center

    Dan Miller then turned things over to Tim Fitzpatrick for discussion on the plans for a new data center.

    4-1) Data Center no longer tied to new office building

    Tim stated that, as many of us are already aware, a new office building is under construction on the Eastside Campus (where Bridges is located today) and is scheduled for completion next May. Originally the Data Center was planned to piggy-back on this new office building. For a variety of reasons, those two projects were split, mostly so that the office building could remain on schedule and on budget.

    They are calling the new data center the "ECDC" (East Campus Data Center) to go along with the SSRB and CSE on-campus data centers.

    4-2) Data Center to fulfill three-part purpose

    The new data center has a three-part purpose, one being to provide additional space for enterprise systems such as Student Records, Human Resources, Finance, eLearning, etc. This is not merely additional space, however; it is a second site for providing fail-over and/or disaster recovery (DR). Improving our DR/fail-over capability was probably the primary rationale in making this move.

    The second purpose was to create additional space for High Performance Computing. The third purpose was to continue to encourage college units to remove the numerous distributed servers which have proliferated around campus and to relocate those services into data centers. Relocation is really a step along the path to virtualizing. What CNS really wants to provide is not "park your box in our garage" but rather to transfer the various platforms to a shared platform where cost efficiencies may be enjoyed.

    4-3) Timeline slippage

    Originally the timeline was wishfully tied to the new office building. When the projects were separated, they hoped for an August or October 2010 completion date. They then went through the process of determining what various contractors might have to offer with regard to this project. All came back saying that the timeline and budget were both very tight but that they could work with it. At that point one firm is picked and a contract is negotiated based on terms and conditions. Facilities Planning and Construction has been conducting those negotiations for the last six weeks. Word was just received today that they are zeroing in on March 2011. That is the schedule surprise, and Tim can "hardly wait" to hear the budget surprise.

    Tim believes this new data center is needed by the university and that over time it will deploy well. However, at present it involves quite a scramble both in terms of scheduling and budgeting.

    4-4) Associated technical challenges

    This project also presents some very challenging architectural questions regarding how to distribute our various services across the data centers. Are we going to divide things up in a way which depends on the sites being only 10 miles apart, or are we going to divide things up so that hypothetically we could have a separate site virtually anywhere out there in the cloud? There are mixed opinions about that. The Sakai platform for eLearning, which is coming soon, has been designed so that the mirroring and failover are IP based; that data center could be anywhere. ERP, on the other hand, has already blazed the trail of doing data mirroring via fiber channel.
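
    The distinction between the two approaches can be illustrated with a small sketch. With IP-based mirroring and failover (the Sakai approach), the secondary site only needs to be reachable over the network, so in principle it could be anywhere. The following is a minimal, hypothetical sketch of such an IP-based health check and failover decision; the endpoint URLs are invented for illustration and this is not the actual Sakai or ERP mechanism.

        # Minimal, hypothetical sketch of an IP-based failover decision.
        # The health-check URLs are placeholders, not real CNS endpoints.
        import urllib.request

        SITES = {
            "primary": "https://app-primary.example.edu/health",      # placeholder
            "secondary": "https://app-secondary.example.edu/health",  # placeholder
        }

        def is_healthy(url, timeout=3.0):
            """Return True if the site answers its health check with HTTP 200."""
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.status == 200
            except OSError:
                return False

        def choose_active_site():
            """Prefer the primary site; fall back to the secondary if it is down."""
            if is_healthy(SITES["primary"]):
                return "primary"
            if is_healthy(SITES["secondary"]):
                return "secondary"
            return "none"

        if __name__ == "__main__":
            print("Active site:", choose_active_site())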

    4-5) Hosting services are available right now

    CNS is currently offering hosting. In theory CNS will have more space as some things are moved off-campus, but Tim wanted folks to know there is no need to wait. They can offer hosting services right now.

    4-6) Questions

    Will co-location be done at the new data center? (Handsford Tyler)

    Tim responded that he foresees most of the co-location for colleges and departments will be done on-campus--probably mostly in the CSE Building. Tim feels this would be more convenient for most.

    Will CNS offices be relocated from SSRB? (Charles Benjamin)

    Tim said that early negotiations between Chuck Frazier and Ed Poppell involved an understanding that part of the deal would involve moving the majority of CNS offices out of prime academic space in SSRB. Consequently, only Dan Miller's Network Services group is planning to remain in SSRB after the new data center is complete. CNS does have some people in Yon Hall and what they call the 508 Building, and Academic Technologies in the Hub across the street; so there are a number of CNS presences on campus.

    Will the CSE data center be available much sooner than the ECDC? (Jeff Capehart)

    Tim responded that CNS had previously decided to beef up the capabilities of the CSE data center as an earlier means of addressing DR/failover concerns. The thought was that learning how to distribute services "across the street" might be the first step toward a more robust off-site failover/DR plan down the road. That facility now has a generator and the full complement of AC and UPS, etc. That is now a solid data center and we could locate enterprise systems there. CNS is going to build the new Sakai infrastructure half in SSRB and half in CSE. That will be in production by next summer/fall. The CSE components will later be relocated to the ECDC once that is ready.

    CSE renovations are nearly complete. A brick enclosure for the power generator is being worked on currently, but they expect the project to be completed pretty soon now.

    Has there been any effort expended to make CNS services HIPAA compliant? (Handsford Tyler)

    Tim said that one or two Health Science units have approached CNS about hosting or co-location. Jan Van der Aa and Coleen Ebel have talked to Tim and Dr. Frazier about the security considerations which would be necessary.

    Ty responded that there is a difference between saying that we will support HIPAA requirements for a particular set of hosted or co-located equipment and saying that CNS services will be made HIPAA compliant by default.

    Tim said that he intended the two cases mentioned to serve as his introduction to what meeting HIPAA requirements might mean. CNS is on the path, but only time will tell how that all sorts out.

    DHnet's failover plan involves three locations. Has CNS considered the value of utilizing such a "three-legged" network configuration? (Charles Benjamin)

    Dan Miller responded that he believed Charles was referring to the fact that a third network location is needed for monitoring and controlling failover between two redundant locations. The CSE data center will remain and Centrex offers yet another location, but network systems is still very much in the planning stages on how to handle things down-the-road. CSE and SSRB are going to be kept as essentially a single data center where VLANs are shared. That is for the convenience of deploying services and they think of CSE as almost a "third floor" on the SSRB data center. The ECDC will be separate in order to reduce combined failure domains.

    Tim pointed out that funding considerations come into play as well. When they were planning the new data center they had to look at Tier 3 vs. Tier 1 services, which services required which levels, and how much space would be needed for each. Tier 3 costs are about double those of Tier 1. Connecting the new data center will require at least a 10 Gbps connection, but you can't have only one; you must have some kind of redundancy. So the options are a second dedicated 10 Gbps circuit or an upgrade of the existing 1 Gbps lease to a 10 Gbps lease. Two dedicated 10 Gbps connections would run about $1 million!
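
    As a rough worked example of that trade-off: the only figure quoted in the meeting is the roughly $1 million for two dedicated 10 Gbps circuits; the lease-upgrade figure below is an invented placeholder purely to show how the options might be compared.

        # Rough comparison of redundant connectivity options for the ECDC.
        # Only the ~$1M figure for two dedicated 10 Gbps circuits comes from
        # the meeting; the lease-upgrade cost is an invented placeholder.
        TWO_DEDICATED_10G = 1_000_000              # from the meeting
        ONE_DEDICATED_10G = TWO_DEDICATED_10G / 2  # assume half the pair price
        LEASE_UPGRADE_1G_TO_10G = 300_000          # placeholder assumption

        options = {
            "two dedicated 10 Gbps circuits": TWO_DEDICATED_10G,
            "one dedicated 10 Gbps circuit + upgraded 10 Gbps lease":
                ONE_DEDICATED_10G + LEASE_UPGRADE_1G_TO_10G,
        }

        for name, cost in sorted(options.items(), key=lambda item: item[1]):
            print(f"{name}: ${cost:,.0f}")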

    Ty said that the Health Center's redundancy plans are basically barebones because of that very cost issue. They utilize the NWRDC in Tallahassee, which just happens to have the FLR POP in the same room.

    4-7) Comments about networking considerations for the new data center

    Dan Miller said that they started with duplicating what they have today, which are Cisco 6509 switch/routers with the load balancing server modules and firewall services modules. In a recent conversation with Cisco they discovered that the new Nexus class of switches should actually be cheaper for the density of 10 Gbps ports being considered. Consequently, that may be the direction taken. That would position things well for Fiber Channel over Ethernet (FCoE) and other future technologies which Cisco is developing as well.

    They are also considering whether or not they should move one of the two redundant internet routers out to the ECDC. There are a lot of variables which are still under discussion.


    5) IPv6 update

    This topic was last discussed at our May meeting.

    5-1) Core rollout complete

    Dan Miller reported that the project they started a few months ago is now being declared complete. IPv6 is now running on the core routers. They are looking at further enhancements and are awaiting a new code level for the core which will work with all the modules. That is expected to provide much better performance, but currently performance is not a consideration due to the lack of IPv6 traffic.

    5-2) Focus now moves to DNS and the host side

    The other initiative which needs to begin soon is on the host side. They need to set up some IPv6 test islands and get some people to install servers. Marcus Morgan is working on DNS configurations that will support IPv6 as well as DNS security. On the DNS side, that should be ready in a few months. The next step will entail a meeting between Network Services and Security to talk about monitoring tool requirements. Dan said that he will update the committee as the plans mature.
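
    As a simple illustration of what the host-side and DNS work involves, the sketch below checks whether a given hostname publishes an IPv6 (AAAA) record in DNS. The hostname is a placeholder, not one of the actual CNS test islands.

        # Hypothetical IPv6 readiness check: does a hostname resolve to any
        # IPv6 (AAAA) addresses? The hostname below is a placeholder.
        import socket

        def ipv6_addresses(hostname):
            """Return the IPv6 addresses that DNS publishes for hostname."""
            try:
                infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET6)
            except socket.gaierror:
                return []
            return sorted({info[4][0] for info in infos})

        if __name__ == "__main__":
            host = "www.example.edu"  # placeholder host
            addrs = ipv6_addresses(host)
            print(host, "IPv6:", addrs if addrs else "no AAAA records found")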

    5-3) Partitioning scheme yet to be determined

    Tom Livoti asked about how the address space might be partitioned and managed across the various units. Dan Miller responded that this is yet to be determined but we can likely expect similar segmentations to what we have today. Dan Miller expects to discuss this issue further with the committee in perhaps six months.

    Tom suggested that CNS might need to be proactive in beginning discussions regarding allocation.

    5-4) Offer for hosting web server on IPv6

    Charles Benjamin reiterated his willingness to host a web server on IPv6 at DHnet. He was curious as to who would contact him about doing that and when. Dan Miller responded that either Marcus Morgan or himself would be the ones to do that. They will have to get past the DNS and security concerns prior, however.

    5-5) When will IPv4 space be exhausted?

    Jeff Capehart asked about current projections for when the IPv4 address space would be exhausted. Dan Miller responded that it is indeed coming, though estimates vary. Dan feels our primary concern is our own address space; we seem to be doing okay in that regard with the two Class B's that we have. We do need to begin taking concrete steps to move forward, however.
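
    For a sense of scale, the arithmetic behind that comment is straightforward: a Class B network is a /16 with 65,536 addresses, while even a single standard IPv6 /64 subnet dwarfs the entire IPv4 space. A quick sketch:

        # Back-of-the-envelope arithmetic for the address-space discussion.
        ipv4_class_b = 2 ** 16            # addresses in one Class B (/16) network
        uf_ipv4_total = 2 * ipv4_class_b  # UF's two Class B allocations

        ipv4_total = 2 ** 32              # entire IPv4 address space
        ipv6_per_subnet = 2 ** 64         # host addresses in one standard /64 subnet

        print(f"Two Class B networks:  {uf_ipv4_total:,} addresses")
        print(f"All of IPv4:           {ipv4_total:,} addresses")
        print(f"One IPv6 /64 subnet:   {ipv6_per_subnet:,} addresses")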


    6) CNS wireless update

    Wireless was last discussed at our April meeting in the context of Voice over WiFi and high density WiFi locations.

    6-1) Wireless infrastructure now centralized apart from routing network

    Dan Miller reported that CNS has been spending considerable time and money beefing up our wireless support. They have now installed two dedicated 6509 chassis just for wireless. Because of the complexity and the number of changes going on in the wireless arena, they didn't want that overlaid on top of the routing network. Consequently, they have moved away from WiSM blades in the routers; the two new chassis are housed on a physically separate network within both SSRB and CSE, with redundant core connections. That is also where CNS has their central NAC servers.

    6-2) High-speed connectivity and redundancy implemented with room for growth

    Connections for this are all 10 Gbps. There is room to grow within the chassis and they currently have eight WiSM controllers supporting up to 1200 APs in a redundant configuration. If one whole site goes away they will still have complete service for those 1200 APs. They currently have about 850 APs on the UF side. Most of those have now been converted to the light-weight code and are being controlled by the WiSM modules.

    6-3) Now supporting "B", "G", "A", and "N" radios

    The remaining APs running in stand-alone mode should be converted by the end of the semester. CNS has been installing 802.11n APs for a while now and is just now completing a project to upgrade all the APs that were 802.11b only. They are utilizing both the 2.4 GHz and 5 GHz frequencies, so "B", "G", "A", and "N" radios are all supported. They are going to start a project to buy more "A" antennas to finish out the rest of the old APs for which "A" was not originally installed.
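
    For reference, the nominal characteristics of the four radio types mentioned are summarized in the sketch below; these are standard published 802.11 figures, not CNS measurements.

        # Nominal characteristics of the 802.11 radio types mentioned above.
        # These are standard published figures, not CNS measurements.
        RADIOS = {
            "802.11b": {"band_ghz": (2.4,),     "max_rate_mbps": 11},
            "802.11g": {"band_ghz": (2.4,),     "max_rate_mbps": 54},
            "802.11a": {"band_ghz": (5.0,),     "max_rate_mbps": 54},
            "802.11n": {"band_ghz": (2.4, 5.0), "max_rate_mbps": 600},  # theoretical maximum
        }

        for name, specs in RADIOS.items():
            bands = " and ".join(f"{b} GHz" for b in specs["band_ghz"])
            print(f"{name}: {bands}, up to {specs['max_rate_mbps']} Mbps")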

    6-4) Pilot project in Marston Science Library

    CNS has also undertaken a small pilot project in the Marston Science Library to increase the density of APs there. Usually such problems are a coverage issue rather than a density issue, but such was not the case in Marston. Seven more APs were added at this location, and they believe the students should now be getting a better laptop experience there. They decided not to build out a full wireless VoIP deployment there to Cisco specifications; that would be very costly due to the densities required.

    6-5) HealthNet piloting dense wireless solution from Meru Networks

    Ty mentioned that they have had some density issues in their auditorium-style classrooms in the Communicore, where instructors want everyone to be connected during lectures, with up to 135 simultaneous connections. They are piloting a solution from Meru Networks that involves combining physical APs into a seamless virtual pool. Although there are multiple APs, each laptop sees the pool as a single virtual AP. This removes issues of a device needing to bind to a particular AP; connections are actually handled and distributed from the AP pool outward to the devices. This system also supposedly prevents a "B" connection from degrading other connections by providing each device a metered time-slice. The company also says this will all work with NAC, so they are anxious to see the results.

    In the traditional situation, a device must bind tightly to a single AP; this leads to the frustrating case where you have a connection with a very weak signal to a distant AP even though you are much closer to some other AP. Without that tight binding, however, you would flip-flop when midway between two APs. Meru's solution purports to overcome this problem.
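
    The effect of metering airtime can be illustrated with a toy calculation: when one slow "B" client shares an AP with faster "G" clients, giving every client an equal number of frames lets the slow client consume most of the airtime, while giving every client an equal slice of airtime does not. The model below is a simplification for intuition only and makes no claim about Meru's actual implementation; the data rates are nominal 802.11 figures.

        # Toy model: aggregate throughput when one slow 802.11b client shares
        # an AP with two 802.11g clients, under per-frame fairness versus
        # per-airtime fairness. Simplified for intuition only.
        RATES_MBPS = {"b_client": 11.0, "g_client_1": 54.0, "g_client_2": 54.0}

        def equal_frames_throughput(rates):
            """Each client sends one equal-sized frame per round, so the slow
            client consumes most of the airtime in every round."""
            time_per_round = sum(1.0 / r for r in rates.values())
            return len(rates) / time_per_round

        def equal_airtime_throughput(rates):
            """Each client gets an equal share of airtime, so its contribution
            is proportional to its own data rate."""
            share = 1.0 / len(rates)
            return sum(share * r for r in rates.values())

        print(f"Equal frames:  {equal_frames_throughput(RATES_MBPS):.1f} Mbps aggregate")
        print(f"Equal airtime: {equal_airtime_throughput(RATES_MBPS):.1f} Mbps aggregate")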

    6-6) Wireless update from DHnet

    Phase Two implementation in progress

    Charles reported that they are currently on Phase Two of their wireless deployment. They have three WiSMs; once they enter Phase Three that will be upped to four WiSMs. The buildings which were part of the completed Phase One (Beaty, Broward, Rawlings, Yulee, Reid, Mallory and Hume) involved deployment of Cisco Aironet 1252 APs. Phase Two includes Weaver, Riker, Tolbert, North, East, Graham, Simpson, Trusler and Springs. They expect Phase Two to be completed by the beginning of next year.

    Sticking with 1250 series for availability reasons

    Tom asked if Charles had any plans for moving to the Cisco Aironet 1142 APs. Charles said he had looked at the 1140's but the ship dates were too far out; consequently they went with what they had been using in the past.

    Using an SSID of "DHW" with 802.1x authentication -- but no "A" radios

    Charles was asked about the SSID for those, and he replied that it is DHW rather than UFW. He added that they are using 802.1x with authentication. They do not, however, utilize the 5 GHz radio because they are not supporting "A". That could be added at a later date, as the units will support it.

    6-7) Questions and comments

    Any plans for improving wireless coverage across campus? (Dan Cromer)

    Dan Cromer asked whether there were any plans for filling in holes in the wireless coverage. Dan Miller responded that they intend to address that mainly via refresh cycles. The standards for coverage are generally upgraded over time, and a refresh will bring a building up to current standards. Tim added that if you plot things on a map, the great majority of our wireless coverage is in the upper-right quadrant. They do believe that broader coverage is possible and should be implemented. The current thinking is that this might be a project of interest to students via the technology fee.

    The consumer trend is toward increasing wireless usage (Charles Benjamin)

    Charles mentioned that they support a wired connection to each "pillow". After implementing Phase One of their wireless, three weeks into the new term just as many people had authenticated via wireless as via wired. That just shows the trend in preference from the consumer side.

    Does the campus wireless structure support mobility? (Jeff Capehart)

    Jeff Capehart asked about support for mobility from one AP to another. Dan Miller said that the WiSM controllers have improved that support; they hope and expect mobile connections to work without dropping and requiring reconnection.


    7) Other discussion

    7-1) Telepresence

    Is CNS investigating any proposals regarding telepresence for campus?

    Charles Benjamin apologized for not getting this on the agenda beforehand and suggested that perhaps Dave Pokorney could be here next time to address the matter. He was wondering, however, whether any proposals are currently being investigated with regard to telepresence for campus. Tim responded that Cisco has been knocking on many doors pushing telepresence. He was aware of an approach on the matter to FLR but had not heard any details. Tim suggested that Dave would indeed be the person to speak on such matters.

    Telepresence is expensive

    Dan Cromer mentioned that meeting the technical definition of "telepresence" can be extremely expensive. This room here is the high end of realistic possibility for UF as far as he is concerned.

    Videoconferencing already widely supported via Academic Technologies

    Charles suggested that having CNS supply telepresence as a service would seem to be an extremely useful and attractive proposition down-the-road. Ty responded that UF has many videoconferencing nodes across campus already. Dan Cromer added that IFAS already has many such sites all across the state and he is in the process of negotiating with Fedro Zazueta and Jan Van der Aa for an upgrade of AT's bridging capabilities.


    Action Items

    1. Arrange for Dave Pokorney to speak regarding "telepresence".

     


    Next Meeting

    November 12, 2009


last edited 12 October 2009 by Steve Lasley