

Minutes of February 14, 2008 ITAC-NI Meeting:



    Link to ACTION ITEMS from meeting

    AGENDA:

    1. Approve prior minutes
    2. Review the revised UF IT Risk Assessment Standard from ITAC-ISM
    3. Review of HPC's use of the research and production networks, and review and vote to endorse the new CRN management standard
    4. Final discussion and vote on recommendation to not allow external network devices in Wall-Plate buildings

    CALL TO ORDER:

    This meeting was held in CSE E507 at 1:00 pm on Thursday, February 14th and was made available via videoconference with live-streaming and recording for future playback. Announcement was made via the Net-Managers-L list late in the afternoon of the prior day. The meeting was called to order by ITAC-NI chairman Dan Miller, Network Coordinator of CNS Network Services.

    ATTENDEES: Sixteen people attended this meeting: fifteen locally and one via Polycom videoconference. There is no record of how many may have watched the live stream via a web browser.

    Eleven members were present: Charles Benjamin, Dan Cromer, Erik Deumens, Tim Fitzpatrick, Craig Gorme, Steve Lasley, Chris Leopold, Tom Livoti, Bernard Mair, Dan Miller, and Handsford (Ty) Tyler.

    Three members were absent: Clint Collins, Stephen Kostewicz and Shawn Lander.

    Five visitors were present as well: Stan Anders, Kathy Bergsma, Dennis Brown (via Polycom), Todd Hester and Tim Nance.


    Viewing the recording

    You may view the recording via the web at http://128.227.156.84:7734. You will need to click on the "Top-level folder" link, then the "watch" link next to the "ITAC-NI Meeting 2/14/08" item. Cross-platform access may not be available; on the Windows platform you will have to install the Codian codec.

    Audio archive

    An archive of audio from the meeting is available. Steve continues to have problems remembering to start the recorder on time; so, unfortunately, the audio begins roughly at the start of agenda item 3.


    1) Approve prior minutes

    No corrections or additions were offered and the minutes were approved without further comment.


    2) Review the revised UF IT Risk Assessment Standard from ITAC-ISM

    2-1) Background

    The ITAC-ISM committee had provided a newly revised UF IT Risk Assessment Standard for our review, and Kathy Bergsma was on hand to present a summary of that document and answer any questions we might have. Kathy mentioned that UF has had a risk assessment standard for five or six years and asked whether anyone here had done a risk assessment yet. Dan Cromer responded that IFAS was waiting on the standard. Kathy said that this was exactly what she has been hearing from the auditors.

    Kathy then related that, while this committee has likely reviewed this before, a few changes have been made to the standard now and an application called Achilles has been written to assist units in developing their individual risk assessment.

    2-2) Summary of the standard

    The standard says that the Level 2 Unit Information Security Administrator (ISA) must see to it that risk assessments are performed at least once every five years and that the resulting report is submitted to the UF ISM (Kathy). From those, Kathy must create a UF-wide mitigation strategy. Kathy said that she does not want risk mitigation strategy reports to come from each end unit (that would be too large a task for her to assemble), but rather just from the Level 2 ISAs. In addition, a status report should be submitted yearly summarizing the progress made on the mitigations outlined in the unit's last strategy report.

    2-3) The next steps

    The CIO still needs to approve and implement this standard, which will then be advertised via DDD memo--hopefully in March. Training will follow; Kathy has already begun developing it and has scheduled sessions on Achilles.

    2-4) Achilles and training

    Achilles is a web tool they have written that helps with the risk assessment. It does not write your mitigation strategy for you, but it will help you digest all the vulnerabilities, threats and assets and get the perspective of users, IT workers and administrators in your unit. Training begins on the 17th; Kathy has five classes scheduled over the last two weeks of March. Registration is required, and the classes have been advertised via the Net-Managers-L list.

    Steve asked who would be candidates for taking these training classes, and Kathy clarified that you must be in the Net-Managers contact database in order to use Achilles. Because the assessments could contain sensitive data, access has to be authorized.

    2-5) Questions and answers

    Craig asked if the HSC was doing their own thing and Kathy said yes. HSC still has to submit a mitigation strategy, but they don't have to use Achilles. In fact, no one has to use Achilles as Kathy doesn't care how units go about developing their reports--only that they do so. Achilles was created, however, as a tool to help with doing that.

    Ty said that HSC was compliant via item 2 of the summary. He mentioned that he believed Colleen Ebel is going to include some mention of FISMA (the Federal Information Security Management Act). Kathy had spoken with her about that last week and understands that the HSC concern is that some units work with the Veterans Administration. Colleen believes there may eventually be concerns for anyone who gets NIH funding, so FISMA may trickle down to other campus units at some point--and those are pretty strict standards.

    Charles asked Kathy if she stores the submissions for five years. Kathy said that all previous paper submissions have been kept on file. They plan to have on-line submission of the assessments via a web form, and those will be stored on-line.

    Note: in addition to this one, a number of other security policies are currently under review.


    3) Review of HPC's use of the research and production networks, and review and vote to endorse the new CRN management standard

    Dan Miller introduced Erik Deumens who was prepared to talk about UF's High-Performance Computing Center (HPC Center) and the Campus Research Network (CRN). Dan noted that Chris Griffin could not make it today because he is sick.

    3-1) Introduction

    Erik said that he would talk about two things. One was the HPC Center and the other was the Campus Research Network. These are really two separate things although they are closely related.

    3-2) The HPC Center

    The HPC Center is an effort started by then interim CIO Chuck Frazier, who created what is now called the ITAC-HPC committee (previously the HPC committee) to analyze UF's upcoming HPC needs and come up with a plan to address them. That committee developed a three-phase plan to implement a comprehensive campus-wide strategy for meeting those needs.

    While computing power has become cheaper, more commodity-based and therefore simpler to manage in some respects, the demand for high-performance computing has increased dramatically, so that large numbers of people who have no intrinsic interest in learning the details of HPC want to use it. The higher density of all this equipment means it is no longer feasible to get a grant for $100,000 (a fairly typical size), buy some computer equipment, put it in the back of your office or in some room next door, and expect it to be functional. The density of the equipment is just too high, there are more details involved, and it becomes very expensive to manage the distributed, anarchic structure that has existed at UF since roughly 1985.

    A concerted effort to coordinate things started within the College of Liberal Arts and Sciences. In 2004-2005 Paul Avery put in some money, which was matched by CLAS and by the CIO's office, to develop our phase one cluster. The College of Engineering then contributed another chunk of money, and phase two became operational in 2006. At the end of 2006, more engineering investors appeared and we expanded further, coming on-line in January of 2007. We now have 1600 CPUs in this HPC cluster along with 32 TB of data.

    This past fall, Erik has been working with many people in the Health Center and in IFAS to try to build phase three, which involves the next expansion of the center and its user community. The current cluster is kept 95% busy all the time. It has 250-300 active users running jobs, though that fluctuates. Some researchers run jobs for several months and then slow their activity as they digest the data, and then some other group takes over. This facility now provides a very important service to a community which would be greatly inconvenienced were it to be turned off.

    3-3) The Campus Research Network

    In 2004, an NSF MRI grant was funded. It allowed the creation of a fast network which was originally called the MRI network and is now called the Campus Research Network (CRN). This network connects several computer rooms that house HPC equipment to the machine room in SSRB, where it connects to the Florida Lambdarail (FLR).

    One of the things that has happened, and the reason we have this document today, is that we wanted to connect the labs of the investors in the Health Center to this Campus Research Network. The group in question is ICBR, itself a service organization which provides a lot of computational resources to the Health Center and to IFAS. They have bought a new instrument which needs a massive amount of extra computing, so collaboration with the HPC Center was natural. Consequently, we wanted a more formal document which provided some guidelines and described how the CRN would be expanded in the future. That is what this document does.

    3-3-1) An orthogonal network

    Erik then turned to describing the management standards document for the CRN. One thing Erik wanted to make clear from the very beginning is that this network is orthogonal to the standard network and has been created for the very specific purpose of supporting research needs which cannot be satisfied on the standard network. If you have a research project and it is perfectly possible to use the existing network, then that is what you should do. However, a particular research project may require special network protocols, bandwidth, latency, etc. which our main network cannot easily or safely provide. There may come a time when new protocols are invented that would permit something like a VLAN or tunneling to allow experimental protocols to be safely carried on the standard network; if that time comes, the CRN will no longer be needed for that purpose. Erik wants it clear that they are not trying to create a separate parallel network that will compete with the standard delivery of network services.

    3-3-2) CRN Governance

    Given that, there are specific needs which we want to address, and the HPC committee is basically the "board of directors" for that organization. If you have any needs which you think the HPC Center could assist with, you should contact Erik as the Director of the HPC Center, and he will raise the request with all the committees and organizations whose approval is required, as documented.

    3-3-3) CRN connections

    The last page of the management document provides a specific list of all the machine rooms which are currently connected. It is part of the philosophy that the CRN is for very specific needs. We do not intend to and will not connect general buildings, rooms, classrooms or offices. It is only very specific machine rooms containing specific equipment that will be approved on an individual basis.

    3-3-4) Restricted data

    This raises another issue which is not a problem at this point but may become one in the future: the matter of restricted data. Currently all the data on the CRN is unrestricted, including the ICBR data. Should some research effort that deals with restricted data wish to connect to the CRN in the future, they will have to come up with a proposal covering, first, why they need to be connected and, second, how they intend to deal with their data in this environment. That will then be reviewed by a committee, and access will be granted or denied based on those details.

    3-4) Questions and answers

    After providing this brief overview of the HPC Center and the CRN, Erik solicited questions.

    Is this network physically separate? (Charles Benjamin)

    Charles asked if there was any connection over the network between CNS and the HPC Center, or whether it was all physically separate. Erik responded that there is a separate set of fiber and switches for those connections. The only place they converge is at a single switch in the SSRB where the commodity campus network, the Florida Lambdarail, the Cox Communications connection and the CRN all come together.

    Is the project on schedule? (Dan Cromer)

    Dan Cromer asked if the project was on schedule to complete connections in the timeframes listed within items 8 and 9 of the last page of the document. Erik said that they have been working on connecting Weil Hall all summer and that the fiber has been pulled and the connections are in place ready for use. In the case of ICBR they have been working since December and that will be ready soon.

    How is network security and isolation implemented? (Charles Benjamin)

    Charles asked if he could assume that ACLs on the central switch protect the rest of our network from any interference by the CRN. Erik responded by describing the standard design using as an example Paul Avery's project in the Physics building with the Open Science Grid.

    That project collaborates with the CERN Large Hadron Collider; as part of that commitment they must provide access for CMS jobs. There is a world-wide distributed file system, so some of their nodes have to be on public IP addresses. One cluster is completely open, and it has its own security measures which they use to make sure they don't get hacked.

    Then there are other centers, like the HPC Center, that have only two hosts with public IPs: the web server and the main submit node. Everything else is on private IP addresses. This 10.13 private network can access all of campus and all of campus can access it. When one of these nodes wants to access the outside world, NAT is performed at the boundary. From outside of UF, however, you cannot get to any of those private nodes.

    All ACLs are reviewed to see that they meet the needs of the research being done. The HPC Center is used by researchers on campus, and the only people with actual physical access to the network are system administrators. Although a large number of people use the facility, it is not like the Wall-Plate, where you have to take into account that some faculty member might plug something in accidentally or in some unauthorized manner. There are no plugs accessible to the general UF public; only system administrators in machine rooms can make a connection. There are, of course, a large number of graduate students and others who use the CRN without realizing it when they access the HPC Center and move their data in and out.
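
    Note: the reachability rules Erik described can be sketched in a few lines of Python. This is purely an illustration of the logic, not a description of CNS's actual ACLs; only the 10.13 private range comes from the discussion above, while the campus prefix, function name and sample addresses are assumptions made for the example.

        from ipaddress import ip_address, ip_network

        HPC_PRIVATE = ip_network("10.13.0.0/16")  # HPC nodes on private addresses (from the discussion above)
        CAMPUS = ip_network("128.227.0.0/16")     # assumed campus prefix, for illustration only

        def inbound_allowed(src: str, dst: str) -> bool:
            """Return True if traffic from src may reach dst under the described design."""
            src_ip, dst_ip = ip_address(src), ip_address(dst)
            if dst_ip in HPC_PRIVATE:
                # Campus hosts (and other HPC nodes) may reach the private HPC
                # network; hosts outside UF may not.
                return src_ip in CAMPUS or src_ip in HPC_PRIVATE
            # The two public hosts (web server and submit node) are reachable normally.
            return True

        # Outbound traffic from a private node is translated (NAT) at the boundary,
        # so the 10.13 addresses are never exposed directly to the outside world.
        print(inbound_allowed("128.227.1.10", "10.13.5.5"))  # campus -> HPC private: True
        print(inbound_allowed("8.8.8.8", "10.13.5.5"))       # Internet -> HPC private: False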

    Was there ever any off-campus component that was referred to as MRI? (Dan Miller)

    Dan Miller received confirmation that MRI has now been renamed CRN and asked if there was ever any off-campus component that was referred to as MRI. Erik replied no. We are working with FSU and FIU, and today Erik had a meeting with somebody from UCF who is interested. They are all part of the Florida Lambdarail and want to work on building a high-performance computational grid in the state of Florida. That would, of course, be good for all the efforts at these campuses and also for the Florida Lambdarail, because it would be the main carrier of that grid. So, Florida Lambdarail is being used as the public name for the CRN when talking about a state-wide effort.

    What criteria did ICBR meet that required use of the CRN? (Steve Lasley)

    Steve asked Erik what criteria ICBR met that required use of the CRN. Erik responded by explaining some of the details so we could get a good idea of the type of project we are talking about there.

    ICBR has a machine that performs genome analysis and creates a large data file as a result. Currently, those files are several gigabytes in size. They just bought a new machine whose typical run lasts about ten days and will produce a terabyte of data during that time. The sequences contained in the data have to be matched against existing databases using BLAST. They currently have a 24-processor cluster, and in order to analyze this terabyte of data within a reasonable time period (less than a day and hopefully within a number of hours) they are investing in the HPC Center, buying nodes.

    In order to avoid having to copy this terabyte of data across the network, they are buying a small cluster together with their storage that will have a parallel file system on it, and they will use this CRN connection to mount that file system on our nodes. They will submit the job and the nodes will read the data straight from their server. That is a typical application where it would simply be too complicated to go through the HealthNet firewall; even if it could be accomplished, it could definitely impact the performance of people within the Health Center or on campus trying to do their normal work. That is why we have this special network.
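
    Note: the following minimal sketch illustrates the idea of reading the dataset in place from a parallel file system mounted over the CRN rather than staging a terabyte-sized copy on the cluster first. The mount path and chunk size are hypothetical and are not ICBR's actual configuration.

        from pathlib import Path

        # Hypothetical mount point exposed to the compute nodes over the CRN.
        MOUNTED_DATASET = Path("/crn/icbr/run_data/reads.fasta")
        CHUNK = 64 * 1024 * 1024  # read in 64 MiB pieces to keep the file system streaming

        def stream_dataset(path: Path):
            """Yield the dataset in large sequential chunks straight off the mount,
            so compute nodes never stage a local copy of the full run."""
            with path.open("rb") as f:
                while True:
                    block = f.read(CHUNK)
                    if not block:
                        break
                    yield block

        if __name__ == "__main__":
            total = sum(len(block) for block in stream_dataset(MOUNTED_DATASET))
            print(f"processed {total} bytes without copying the file across the network")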

    3-5) Sectional review of the management standards document (version 2)

    Dan Miller asked if we should review, in the broadest of terms, the various sections of the CRN Management document.

    3-5-1) Section 1.

    Erik proceeded with that, saying that the first section basically explains the general scope of the CRN, to make sure there is no misunderstanding that this is like a competition between BellSouth and Cox Cable, each trying to provide the same kind of service from a different source. We are really trying to be orthogonal, providing something which the campus network cannot.

    3-5-2) Section 2.

    Section two presents the technical requirements of what the CRN is supposed to accomplish.

    3-5-3) Section 3.

    Section three contains a number of items which describe the architecture of how we are trying to accomplish our goals, making sure that we meet the needs of the CRN and the research projects that are funded and at the same time have reasonable security and manageability.

    3-5-4) Section 4.

    Section four gives the technical details of exactly how we accomplish this architecture.

    3-5-5) Section 5.

    Finally, at the very end, section five gives an explicit list of the various machine rooms involved and what they are authorized to do. That is essentially a high-level description of the various ACLs that will be placed on switches to accomplish this.

    3-5-6) A dynamic standard

    This document will have to evolve due to the nature of the CRN network, rather than being a strict and static policy. When new requests come, we will have to look at those and solve whatever new issues arise. As an example, we don't currently have a solution on how to deal with restricted data. We feel that it would be impossible at this point to come up with something that would cover all restricted data; this is because encryption and other requirements would work against performance goals you might need. The philosophy will be that, if a particular type of project arises and they have a particular kind of data, then we will analyze all of it and make sure we have a workable solution. Once that is done, it will be written into this document and authorized if it is acceptable.

    Erik noted that this document is available on-line at the HPC Center site. There is a link there to About HPC@UF which lists all the standards and documents relating to HPC. For instance, the Contract, the Sustainability Plan and this management document as well are all available there. Right now you are looking at version 2, but since the committee met last week there will be some corrections and the updated version will be available on-line.

    3-6) ITAC-NI endorsement

    The committee then voted unanimously to endorse the CRN Management document as presented.


    4) Final discussion and vote on recommendation to not allow external network devices in Wall-Plate buildings

    4-1) Previous discussion

    This committee has addressed "Network Edge Protection" several times previously. Background on the technical details of plan development was first presented at our October 2007 meeting. Most of our December 2007 meeting was spent reviewing the CNS Network Edge Protection plan and discussing what had now evolved into a CNS recommendation to disallow external network devices in Wall-Plate networks. CNS sought the endorsement of ITAC-NI for their recommendation, but due to the lack of a quorum a vote was tabled. By our last (January 2008) meeting, the policy proposal had been formalized into written form. With the endorsement of this committee, CNS hoped to propose this to administration as official UF policy. Ensuing discussion suggested that the proposal needed to be better codified as both policy and standard documents prior to ITAC-NI voting on the issue.

    4-2) "Requirements for Connecting to the Wall-Plate Data Network" standards document

    A newly written document was provided via e-mail less than two hours prior to today's meeting and as a handout at the meeting itself. That document is reproduced in-line below.

    Requirements for Connecting to the Wall-Plate Data Network

    February 13, 2008

    The data network has become a critical component of the campus infrastructure, serving almost every aspect of the UF mission. For example: WebCT is used in teaching and learning. High Performance Computing is used in Research. MyUFL systems are used in HR and Finance administration, and Student services. And GatorLink e-mail and Directory Services are used by just about everyone. All ride on the campus data network.

    In order to assure that the data network remains robust, reliable, and secure, the Provost’s Office has been investing in a multi-year, multi-million dollar project and ongoing program to upgrade and expand the campus data network. Known as the Wall-Plate project, over the past 3 years about ½ of all data ports on campus have been replaced. Over the next 3 years, most of the remaining data ports will be replaced. Similar network upgrades have occurred in Housing and the Health Sciences Center.

    Current Wall-Plate program requirements prohibit the use of any device not managed by CNS, which extends the network beyond the Wall-Plate data port. Such devices can cause problems in the network. Examples include: hubs, switches, routers, and wireless access points. Such devices defeat the purpose of having upgraded to a modern, ubiquitous, and standardized data network.

    Any unit choosing to “opt-in” to the centrally funded Wall-Plate program must remove local network hubs and resolve related wiring problems. Because Wall-Plate data ports are paid for centrally, most units choose to opt-in. Because the cost of fixing any in-building wiring problems must be paid by the local unit, some units may perceive these costs as prohibitive. This is especially true in buildings where substandard wiring and/or hubs have been used for years to avoid the cost of wiring upgrades when new users were added to the network.

    Effective July 1, 2008, network monitoring and protection methods will automatically shut down any Wall-Plate port showing symptoms of such devices or similar problematic traffic. Exceptions must be reviewed and approved by CNS. A description of central Wall-Plate services provided by CNS, and local unit requirements for connecting to the Wall-Plate Data Network, are documented at: http://www.cns.ufl.edu/wallplate/CNS_Wall-Plate_SLO.pdf.

    4-3) Associated Service Level Objectives

    Excerpts and additions to the Wall-Plate data network service level objectives (SLOs) were provided on the second page of the handout:

    Wall-Plate Data Network

    Service Level Objectives

    Excerpt from page 4:

    • Local administrators and users must not attempt to implement their own network infrastructure. This includes, but is not limited to basic network devices such as hubs, switches, routers, network firewalls, and wireless access points. They must not offer alternate methods of access to UF IT resources such as modems and virtual private networks (VPNs). Active electronics that expand the network connectivity beyond that of the wall plate must be approved and managed by CNS for the purpose of providing a secure and reliable network for all users.

    Excerpt from page 5:

    • CNS will not troubleshoot any network problem where a user or local administrator has deployed active electronics for the purpose of expanding the network connectivity beyond that of the wall plate.
    • CNS will disable or disconnect any LAN segment that has been altered to expand the network connectivity beyond that of the wall plate.

    Page 14, Addendum 1:

    1. Procedure for Requesting Exception to Standards or Policies

    Local administrators may request exceptions to Wall-Plate policies or standards by submitting a written request to CNS by clicking on “Request Network Service” on the CNS web page at http://www.cns.ufl.edu/

    Requests for exceptions to the policy will be evaluated on the following criteria:

    • Justification for requesting the exception
    • Risk to network stability or security
    • Impact or cost to the requesting department, CNS, or the Wall-Plate Project
    • Length of time requested for the exception
    • Adequate advance notice for proper planning and deployment
    • Available resources

    Some examples for which exceptions may be granted are the following:

    • Creating swing space during construction or renovation projects
    • One-time special events
    • Waiting for new cable to be installed
    • “Tech Bench switch” for IT staff locations
    • Pilot, demonstration, or evaluation projects

    4-4) Project history and direction

    4-4-1) CNS's third attempt

    Tim Fitzpatrick introduced the discussion by saying this was the third time CNS had brought this recommendation before this group. He told us that today this is being presented as Requirements for Connecting to the Wall-Plate Data Network. Tim said that previously this had been presented as a plan to implement port security on the Wall-Plate network and originally as a policy prohibiting hubs from being connected to the Wall-Plate. Tim said that, for a variety of reasons, it was never quite expressed in a format which was viewed as something the committee could endorse. Now we have this new draft which CNS would like for ITAC-NI to give either a thumbs-up or thumbs-down.

    4-4-2) History of the Wall-Plate

    Tim quickly stated as background that the Wall-Plate network serves campus; it does not serve the Health Science Center, Shands or Housing. In size it was scoped at roughly 25,000 ports on the main campus, and we thought there were roughly 25,000 more at Shands, HealthNet and Housing. Now it looks like there may be 35,000 ports on campus, an increase that became apparent after the funding model changed. The Wall-Plate service was first offered on a pay-for-service basis four years ago. The move to provide such services was rooted in the networking portion of the 2002 Strategic Computing Plan. The goals of the Wall-Plate service were for it to be available, reliable, secure, standard, modern, ubiquitous and interoperable. The goal was basically to make any application of any type from any place work across the network.

    4-4-3) Focus and expectations of the Wall-Plate

    This service is about "building networks", that is, networks within our buildings. Networks within buildings have been, and to some extent still are, the responsibility of local colleges and departments. Four years ago we said that we were going to get on a network upgrade plan/strategy/approach and offer that on a fee-for-service basis. Joining the Wall-Plate required payment of $5/port/month. Over the first two years we signed up roughly 10,000 ports; this was four times what we thought we would do. When we did that, we alluded to the fact that if you have hubs out there you really need to get rid of them and incur the wiring costs of replacing them, and that if you have deficient wiring out there you need to fix it--that this is a prerequisite for the Wall-Plate service. However, Tim doesn't think it was ever clearly or absolutely stated. It was an expectation that if you want an available, reliable, secure, etc. network then you have to do these things right.

    4-4-4) New Wall-Plate funding model

    Roughly one year ago, based on the growth and expansion of Wall-Plate as a fee-for-service, the Provost decided to fund it centrally as a strategic objective. The Provost is basically providing $1.5 million per year to operate, expand and maintain the Wall-Plate effort. Charles asked Tim if he saw the level of funding changing given the current budget problems. Tim responded that he did not. He did say that he would comment on the implications of the budget shortfall in a moment.

    4-4-5) Three-year centrally-subsidized project has begun

    So there is now within CNS a three-year project to roll out this network. Of those 35,000 ports we have gotten to about 16,000 currently, so we are about half-way through and have about 2.5 years to get the other half. We have begun meeting with department heads unit-by-unit. We have a three-year schedule that describes who's first and who's next. When we approach them we say: here are the choices which you have, here are the costs that we will cover and the investments which we will make, and here are the costs which you will have to cover. We explicitly say that you have to handle your hub problem and your wiring problem. Tim is not sure that has ever been clearly stated in our roughly 15-page Service Level Objectives document, which details what we do, what you will get from the Wall-Plate, and what you must do to participate.

    We have gone from customer to customer over the last six months saying [as an example] we're going to invest $100,000 in switches, outside plant and labor, but you've got to spend $20,000 on solving wiring problems to get rid of your hubs. At this point, most people say...aaaaah...okay. I put in a buck, you put in $4--that's a pretty good deal for me. On the VoIP side it is a 50/50 split; they put $140 in for a phone and we put $140 in as well.

    4-5) Local unit costs have become a concern

    The idea was for central funding and investment to cover most of the cost, but there would still be some local expense to get into the game. We have expressed that clearly in terms of the costs, but never yet in terms of the requirement: get rid of the hubs, and if you remove them on day one, don't bring them back later. The reality out there is that we have been fairly successful over the last six months, with people saying yes, I'm willing to pony up a few bucks on my side, or even more than a few bucks, because I'm getting a large matching investment from central funding. However, we are now approaching financially troubled times, and we are bumping into customers who say they have all these hubs out there with absent or deficient wiring and they just can't afford the price of getting rid of them.

    4-5-1) CNS will not waive wiring remediation

    What they want us to say is that we will waive the restriction. Tim said he is at the point where he needs to take a stand. This document makes that point, saying you cannot join the Wall-Plate program if you do not remove your hubs. Hubs are the major problem, but there are all sorts of devices beyond the Wall-Plate that may cause problems. That leads to the question of special cases and how CNS is going to handle them. Are we going to be super strict and stringent, or are we going to be a partner? Tim said that he would like to think that CNS will listen on a case-by-case basis to requests for specials and be reasonable about handling them.

    The document before us today contains text that Tim hopes will ultimately wind up in a DDD memo. The three paragraphs of attachments on the second page are basically excerpts from the most recent update of the Wall-Plate Service Level Objectives, a roughly 15-page document which says here is what you get and here is what you must do. It is not unlike the standards document which Erik just presented.

    4-5-2) The tie between hub removal and port security

    Tim then said that we were here again today to hear any comments and concerns on this. Dan Miller mentioned that he had received e-mail from Dan Stoner on this issue a while back, in which Dan Stoner expressed some confusion about why port security, which he thinks is a good thing, is being linked to the ban on external devices. Dan wanted to review for the committee what he thinks is a pretty simple connection. In trials with port security, CNS has found that the benefits are largely outweighed by the consequences of having port security in an environment where there is not a strict ban on users or administrators deploying these devices. It complicates the situation and causes false positives affecting the real and perceived reliability of the network. So CNS took the position that they cannot really go forward with port security unless there is a clearer understanding of how the network should be operated.

    4-5-3) Port security implementation will affect current customers too

    Tim mentioned that there was something which he should add to his previous comments. The document before us presents what a unit must agree to in order to participate in the Wall-Plate, but also implicit in that is how we are going to enforce that. We are going to do that by a port security scan. There are four or five reasons we want to do port security scanning that have nothing to do with hubs; we want to do port security scanning as a "value-add". When we do that we will be able to detect these hubs, and if we automate the alert and the action based on the alert, it is going to shut down those ports.

    What that means for people who are out there today in the Wall-Plate network, either the old base coming from the days of fee-for-service or the recent ones from the last six to eight months, is that turning on port security is going to shut down any ports with hubs. That means there needs to be a transition plan for anyone who is out of compliance at this moment, and we need to accommodate that transition. This document says we are going to flip the switch effective July 1. What this means is that over the next five months users need to see if they have such devices deployed and deal with them. On the other hand, some units will have problems which are so large that they are not solvable. The rest of the story is that if you think you have a special case, then come and talk to us about it and we will figure out a time-table and a game-plan. What we can't do is just indulge this and ignore it forever.
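
    Note: for readers unfamiliar with how such a scan can detect hubs, the following minimal sketch shows the general idea of flagging ports that have learned more than one MAC address. It is an illustration only, not CNS's actual tooling; the port names, MAC addresses and exception handling are invented for the example.

        from collections import defaultdict

        def ports_to_disable(mac_table, approved_exceptions=frozenset(), max_macs=1):
            """mac_table is an iterable of (port, mac) pairs from a switch's
            learned-address table. A port showing more MAC addresses than
            expected is a symptom of a hub or unmanaged switch on that jack."""
            seen = defaultdict(set)
            for port, mac in mac_table:
                seen[port].add(mac)
            return sorted(port for port, macs in seen.items()
                          if len(macs) > max_macs and port not in approved_exceptions)

        # Invented data: port Gi1/0/7 has learned three MACs (a hub symptom), while
        # Gi1/0/9 is an approved "Tech Bench switch" exception and is left alone.
        table = [
            ("Gi1/0/7", "00:11:22:33:44:55"), ("Gi1/0/7", "00:11:22:33:44:66"),
            ("Gi1/0/7", "00:11:22:33:44:77"), ("Gi1/0/8", "00:aa:bb:cc:dd:ee"),
            ("Gi1/0/9", "00:aa:bb:cc:dd:01"), ("Gi1/0/9", "00:aa:bb:cc:dd:02"),
        ]
        print(ports_to_disable(table, approved_exceptions={"Gi1/0/9"}))  # -> ['Gi1/0/7']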

    4-6) Can there be any assistance with wiring remediation?

    4-6-1) How about wiring at cost?

    Chris Leopold mentioned that wiring remediation is likely to be a big issue within IFAS, and if reports of disproportionate budget cuts for IFAS are true, Chris expects there will be little money to address these issues. Chris asked if there was any way to mitigate those costs rather than just saying wiring is your problem and you have to deal with it or not join the Wall-Plate. For example, perhaps CNS could provide a team that would install wiring at cost.

    4-6-2) CNS has moved toward out-sourcing wiring installation

    Tim responded that over the last four or five years CNS has changed direction toward outsourcing wiring installation. We have Judy Hulton, a consultant provided at no fee, to assist in finding a contractor, doing it right, and getting a decent price; but we have been moving out of the do-it-yourself wiring business step-by-step. Tim said that Chris might be right that if you were really in the business you might be able to do installation a bit cheaper in-house, but he felt the cost savings were marginal. In looking at this over the last several years, their conclusion has been that we want to get out of that business and that it is better to contract out.

    4-6-3) Sub-contracting does not increase costs

    Ty responded that CNS is essentially providing consulting, design and installation management for free and that sub-contracting out the installation does not really increase the costs. Ty said, in the HSC's experience, it is actually more expensive overall to have the people in-house. When you hire a contractor you pay them only for hours actually worked. You can't hire quality people and tell them we only need you for two hours on Tuesday and three hours on Wednesday; you have to give them a real job. Then they spend some amount of their time not actually working, but you still have to pay them.

    Tim asked for input from Todd Hester who used to be responsible for this from a local perspective and who is now responsible for pieces of this centrally. Todd responded that every time they looked at this from a financial standpoint, unless you use students and treat them poorly, it never pays off. Even at CIRCA they had switched from doing it themselves to hiring it out. While one might think initially that there could be some savings there, in reality there is not.

    4-6-4) Is it a matter of keeping crews busy?

    Chris Leopold asked if that was because people were just sitting there waiting for the next job to come up. The experience Chris has seen with IFAS is that if installation was done at cost we could certainly keep them busy. Todd said that would still be a short-term thing; do we hire people for a year, burn them out and then let them go? If we outsource it, the workers will still have jobs after our remediation is complete. Chris answered that the flip-side of that is that a unit might not have the money and so cannot join the Wall-Plate. Ty said that you really don't want Tempforce workers installing your wiring because you would have to live with the consequences of a poor job, but Chris believed that quality could be ensured via proper management.

    4-6-5) Resources not available to manage wiring installation

    Tim responded that CNS currently has three five-person teams. Each team has a lead, two field engineers and two junior assistants. That adds up to fifteen people, plus each team should have at least one OPS worker and maybe more. Our current status is that we are hustling as fast as we can from place-to-place and project-to-project. We are down two FTE in the ranks and we are down two OPS; plus, Tim believes that management has been pressed to keep those teams working efficiently and on schedule. So Tim does not know where the management would come from to drive a wiring crew on top of what they are doing now.

    4-6-6) How this is handled at Housing

    Tim then asked Charles what they do at Housing. Charles responded that they do it all in-house. The maintenance area has people who do that. Tim asked if they use OPS, students or full-timers and Charles answered that they use full-time staff. Of course, those people do much more than just pull wire.

    Ty said that keeping people busy full-time is where the real benefit comes in. If today you need two people, your contractor sends two people; if tomorrow you need a half-dozen, they send that many. If you were doing it yourself, in order to handle that load you would have to have a half-dozen people on payroll and a lot of their time would be spent idle because you are unable to keep them busy full-time. Even if you can keep people busy full-time in the short run, you are then put into the position of having to tell people you can only hire them for a specific duration--so maybe you get them and maybe you don't. Maybe in today's climate you can get quality workers like that because construction is a little off. In any case, that is why the HSC uses contractors.

    Charles said that Housing also has a somewhat different situation in that they have already made their big push to get most areas properly wired; they don't have to do a great deal of that on an on-going basis anymore. Now they are an "add/move/change" group as opposed to "new-implementation/replacement" group.

    4-7) Other possibilities for handling cost issues

    4-7-1) The possibility of further cost sharing

    Tim responded in general to the financial crisis which we are all in and to the issue of wiring as a prerequisite to entry. Tim has told Marc Hoit that CNS really wants to enforce the no-hub policy and enable the port security capabilities. Those will not only weigh on the decisions of units considering joining in the future, but they will also impose costs on existing Wall-Plate customers, because we have been somewhat loose about that over the last four years. Marc Hoit responded by asking what the cost per drop would be, how many drops are problematic, and over what time period the corrections would be implemented. He was trying to put a dollar amount on the issue. As an example, if this is a $1 million problem over four years, then it is a $250,000-a-year problem. If we offer 50/50 price sharing on that then...well, Tim just doesn't know.

    In summary, Tim said that Dr. Hoit is aware of the issue and Tim's thought is that units which choose not to join Wall-Plate or VoIP, for whatever reason, ultimately need to have a conversation with the CIO and the Provost to whom the CIO reports. One thing they can say is I can't get on-board because of costs. Then the Provost might come back and say "let's make a deal". The other thing the Provost might say is that UF is already investing all this money and you need to find the difference. Tim honestly doesn't know what would happen, but Tim is trying to kick this up a level as a matter of interest to someone who can actually solve the problem. From CNS's standpoint of having to absorb 35,000 ports on a 25,000 port budget, they simply can't afford to help--but there might be another way.

    4-7-2) Wireless won't help much here

    Charles asked if wireless might solve the problem in certain areas. Chris responded that he really believes it has to be wired. Chris pointed out Entomology's situation as being pretty typical of IFAS. Bernard Mair said that this was going to be a big issue for CLAS as well, though he didn't know the specifics of how many wiring drops would be needed. Bernard asked and received clarification that CLAS would need to have wiring remediation completed prior to joining the Wall-Plate.

    If you are already in Wall-Plate, CNS would in theory have previously solved the wiring issue; but in practice what probably happened is that after Wall-Plate was installed and hubs got flushed out, they got reinserted because of growth. Tim said that while it may be a bitter statement to say "if you can't solve the wiring problem you can't join", it is even more difficult to say "if you are already in and you have, say, $30,000 of wiring problems out there, then you have six to nine months to fix it." That is basically what the document before us today is saying.

    4-7-3) The extent of the hub problem has not been accurately measured

    Steve asked Dan Miller if data wasn't available for estimating the size of the problem on the current Wall-Plate. Dan responded that they have done some surveys and have found some hubs out there, but he could not supply any estimates of those numbers at this time.

    4-8) Amendment proposed by Charles Benjamin

    Charles then handed out some proposed changes to the document. While passing this out, Charles mentioned that he would really appreciate it if Dan could get materials to us sooner than two hours prior to these meetings. Dan responded that CNS did the best they could.

    4-8-1) Concern over excluding ALL non-CNS managed devices

    Charles wanted paragraph three of the requirements document to read:

    Proposed: Current Wall-Plate program requirements prohibit the use of certain network devices not managed by CNS, which extends the network beyond the Wall-Plate data port. Such devices can cause problems in the network. Examples include: hubs, switches, routers, and wireless access points. Such devices defeat the purpose of having upgraded to a modern, ubiquitous, and standardized data network. Other network devices that do not fall in the general scope of a hub, switch, router and wireless access point such as a firewall or VPN appliance will be considered on an individual request basis, may be managed by the department or CNS, and coordinated between the Department making the request and CNS.

    As Charles had expressed previously, he was not comfortable with the standard stating that the Wall-Plate would prohibit the use of any device not managed by CNS. He wanted to permit the inclusion of other network devices.

    Charles also wanted to completely strike the page 4 excerpt from the Service Level Objectives. Regarding that, Charles indicated that this paragraph seemed to be a clear poke-in-the-eye about the VPN/Firewall which he has implemented between Housing and Tigert. Tim responded that when Todd added this section he took it almost word-for-word from the AUP. The most critical addition made was to include "local administrators" in the same category as end-users.

    4-8-2) This mandate will waste local networking expertise and resources

    Chris Leopold said that he had been considering proposing something similar to what Charles was suggesting. Chris said that rather than taking the draconian centrally-managed approach, CNS could take a more collegial team approach and say that properly trained and certified unit-level staff could be trusted to assist with the management of certain devices. Chris is concerned over the FTE levels within CNS and how that might affect their ability to support every network device everywhere. Chris feels a more shared management approach could overcome some of the potential shortcomings of that situation. Dan Miller responded that CNS feels they do have the necessary staff to support the proposed model.

    4-8-3) Listing certain exceptions within the standard itself

    Tim asked and received clarification from Charles that he was proposing that certain exceptions be listed directly in the standard document. Tim responded that his take was "absolutely not". There will be a general rule of nothing beyond the ports; however, if you have a special case then we will talk about it. Charles responded that his proposed amendment clearly states this same expectation. Comment ensued that there was no reason to specify some exceptions in advance since exceptions were clearly handled on a case-by-case basis. Some further discussion continued along similar lines.

    4-8-4) Amendment dropped for lack of second

    Dan Miller asked Charles if he would like us to have a vote on the matter and Charles responded that he would. Dan then asked if anyone would second the need for a vote on this issue. No second was made and the proposed amendment was dropped.

    4-9) Clarification on exceptions

    Bernard asked once again for clarification that no exceptions would be allowed with regard to the wiring work required to remove hubs. Tim responded that this was correct; the only possible recourse there would be to make a request for cost sharing to some level above CNS. Given that, Bernard wanted to know the reasons behind the need for any exception process at all. Tim responded by discussing the typical use of the Wall-Plate network.

    One example would be a person in an office with a workstation and possibly a telephone; such an instance needs a network connection which precisely meets all the stated requirements. Within departments there are also labs, conference rooms, work benches and laptops; there are all sorts of other things that people do on a temporary or permanent basis that also need a network connection. These latter cases are the areas where, in general, we would be talking about handling exceptions. We would look to see if we could handle those with wireless or in some other fashion. Tim can tell you, however, that if the exception you want is that you can't afford 200 ports to resolve 200 wiring problems, then Tim is going to say that he is sorry but he can't help you. If you say you have a special case of a workbench, lab or conference room, then we would be happy to work with you to resolve the matter.

    4-10) Cost details of recent deployment at the Museum of Natural History

    The last thing Tim wanted to mention was that he had just had a meeting with Doug Jones, who is the Director of the Museum of Natural History. CNS has just completed a six-month VoIP Wall-Plate installation for that organization in Powell and McGuire. That installation involved 600 ports, and here is the investment profile for it:

    • $100,000 plus on electronics and telecom room build-out
    • $5,000 on inside-building cable
    • $30,000 on enhanced fiber to the building because they had some special research activities and needs
    • $30,000 installation costs

    In all, CNS, via Provost central funding, invested $175,000 in getting the Museum onto the Wall-Plate. In prior discussions, the museum administration had said that they had hubs and wiring problems, and while they didn't like it, they would see if they could scramble together the one-time money. They ended up paying $28,000 and basically said they were glad they did it and glad they were now on the Wall-Plate. For every dollar they put in, the Provost put in four. If you approach the Provost for assistance, they might provide some cost sharing or they might point to how much they are already putting in; Tim just doesn't know how they would respond. Tim does know, however, that Marc Hoit asked how much it would cost to help people with their wiring problem.

    4-11) Motion to delay vote

    Chris Leopold mentioned having some issues with the way this proposal was worded. Because of the short notice (the requirements and related service level objectives were distributed less than two hours prior to this meeting), Chris made a motion, seconded by Steve Lasley, that we postpone voting on this matter until next time. This would give time to prepare a more thoughtful response. Dan Miller responded that he felt the issues were pretty clear; while we could quibble about wording, there is a standard and an exceptions process. CNS wants to be reasonable and wants the customers of the Wall-Plate to be reasonable. Dan also noted, as had Chris, that we have talked about this over multiple meetings. Dan then put the question to a vote: "How many would like to have a vote today?" The yeas won by the slimmest of margins: 6 to 5.

    4-12) Wireless might help more than realized

    Dan Miller had an additional comment on wireless. While that is a complex and evolving issue, Dan feels that 802.11n access points (APs), which HealthNet and CNS plan to deploy soon, will improve our wireless infrastructure--especially in fringe areas where the signal strength isn't all that great. That remains to be proven in the field, and we should know more in a few months. Dan believes that this new technology will make wireless more acceptable for occasional use. Perhaps units like IFAS, which are very tight on money, could prioritize getting cable to faculty and let others be served by wireless until money was available. Chris responded that, given the choice between spending $50 on a wireless card versus a drop, he would rather put the money into the wiring. Chris said that if there was any way we could encourage upper administration to help offset wiring remediation costs, that would be most appreciated.

    4-13) Regrets over lack of cost analysis

    Steve expressed his wish that UF had looked a little more closely at the true overall costs of a Wall-Plate where no external network devices are allowed. Due particularly to the age of their current phone system, Steve felt that his unit was stuck in a very uncomfortable position somewhere between not being able to afford the Wall-Plate and not being able to afford to opt out of it. Beyond the costs involved Steve is also concerned that this policy essentially turns local unit IT staff into end users of the network. Steve does not believe this makes wise use of current staff resources and it seems to him that there are alternate ways to handle this that would be less expensive for the university as a whole while still providing a great improvement in the overall reliability and stability of our network.

    4-14) Will Wall-Plate eventually be mandated?

    Stan Anders asked if CNS expected Wall-Plate participation to become mandatory sometime in the future. Tim responded that he didn't know, but has personally entertained the idea. Dr. Hoit, however, wants this to be voluntary; consequently, Tim doesn't feel that a mandate is coming any time soon. However, when this project was started on a fee-for-service basis and quickly achieved 40% buy-in, it got the Provost's attention and led to the central subsidy. Now that a massive roll-out is under way, once we achieve an 80-90% share you never know--but that would be three years down the road at best. Stan mentioned that some units may hold out in the hope that it will be free for them when mandated. Tim doesn't think joining will ever be "free" because he doesn't know where the money would come from.

    4-15) A sustainable plan

    Tim mentioned that he has seen projects before where network upgrades have been implemented end-to-end, including every port in every building as well as the core electronics. They say they will do it in two years with one-time money. They hire consultants and contractors and out-source this-and-that. They spend $20 million to upgrade the entire campus and end up with a major improvement of the network but no money to sustain it--no funds to expand it and no funds for life-cycle replacement. In contrast, we have taken the approach of applying recurring annual funds to make this sustainable. The next challenge is to find additional funding if (or rather when) the number of ports continues to increase.

    4-16) The inevitable eventual wiring replacement

    Craig asked how we are planning for eventual building wiring replacements. This has been a question at Shands and is a question for CNS as well. Craig said that wiring fails in 12-20 years. Tim doesn't have an answer for that, but believes that the technology and its costs will be totally different by the time we need to deal with this issue.

    4-17) Self-serving CNS policy

    Charles Benjamin commented that, for service providers and people who serve other departments in one capacity or another, there is a balance between serving people's needs and self-serving policies. The way this document is worded leans more toward being self-serving for CNS than toward being flexible in serving the needs of the departments being supported.

    4-18) ITAC-NI recommends the CNS Standards document (8-2-1)

    The proposal was then voted on with eight voting for it, two voting against and one abstention. The measure passed.

    4-18-1) HSC has been happy with similar system

    Craig mentioned that this has been done at the HSC for many years and that as long as you have an exception process, and as long as no one tries to circumvent the policy, it works. We haven't had a problem down there. We have had people roll out VPNs and their own access points, and it does cause issues. But as long as you work with Tom (Livoti) and HealthNet--and Craig is sure the same is true for Wall-Plate customers working with CNS--they'll work with you too. Ty mentioned that HealthNet has been doing this for twelve years. Craig said that they pay $13.22/port/month--not $5. Craig truly believes it works out better for everybody.


    Action Items

    1. Subscribe Dan Miller, ITAC-NI chair, to all other ITAC committee lists for collaboration purposes (pending from previous meeting).
    2. Draft a committee position statement on our need for a multi-year plan for reclaiming IPv4 address space (pending from previous meeting).
    3. Discuss the issue of network security and how a unit's ISM fits into that equation. Will the ISM have access to the Wall-Plate switch or other management web page to disable offending ports or track down MAC addresses? (pending from previous meeting)
    4. Schedule times for miscellaneous agenda topics including: 2nd-site redundancy plans, UF Exchange project, 802.11n wireless, IPv6, IPv4 space usage, and content/site blocking.

    Next Meeting

    The next regular meeting is tentatively scheduled for Thursday, March 13th.


last edited 25 February 2008 by Steve Lasley