ESLEA and HEP's Work on UKLight Network
ESLEA • Exploitation of Switched Lightpaths for E-science Applications • Multi-disciplinary • Protocol Development • Exploitation by HEP (ATLAS and CDF), radio-astronomers, e-Health, HPC • Using dedicated point-to-point lightpath channels on the research UKLight network for R&D purposes • Bulk Data Transfers / Circuit Reservation and Deployment / Transport Protocols / Real-Time Visualisation
HEP Connections • RAL-CERN • UCL-Fermilab • Lancaster-Edinburgh • RAL-Lancaster • SARA-Lancaster • Lancaster-Manchester
Lancaster <-> Edinburgh Objectives • Investigate the use of an alternative protocol (in this case UDT) to maximise the potential of an optical circuit • Utilise this protocol in such a way as to be of practical use to users of the Grid.
What is UDT? • UDT: UDP-based Data Transfer Protocol • An application-level, end-to-end, unicast, reliable, connection-oriented data transport protocol • Achieves approximately 90% utilisation of available bandwidth
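To make the protocol concrete, below is a minimal client sketch against the UDT SDK's C++ API (udt.h). It is illustrative only: the server address, port, and payload are placeholders, not values from the ESLEA tests.

// Minimal UDT client sketch (UDT4 SDK C++ API). Build with: g++ client.cpp -ludt
#include <arpa/inet.h>
#include <netinet/in.h>
#include <iostream>
#include <udt.h>   // from the UDT SDK

int main()
{
    UDT::startup();                                   // initialise the UDT library

    UDTSOCKET sock = UDT::socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in serv{};
    serv.sin_family = AF_INET;
    serv.sin_port = htons(9000);                      // placeholder port
    inet_pton(AF_INET, "192.0.2.1", &serv.sin_addr);  // placeholder server address

    if (UDT::connect(sock, (sockaddr*)&serv, sizeof(serv)) == UDT::ERROR) {
        std::cerr << "connect: " << UDT::getlasterror().getErrorMessage() << "\n";
        return 1;
    }

    const char msg[] = "hello over UDT";              // placeholder payload
    if (UDT::send(sock, msg, sizeof(msg), 0) == UDT::ERROR)
        std::cerr << "send: " << UDT::getlasterror().getErrorMessage() << "\n";

    UDT::close(sock);
    UDT::cleanup();
    return 0;
}

The API deliberately mirrors BSD sockets, which is what makes UDT attractive for retrofitting into existing transfer tools such as GridFTP.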
Servers • Hardware: • Dual Xeon 3.2 GHz dual-core • 2 GB RAM • Dual PCI-X bus • 2 x Gigabit Ethernet • SATA RAID controller • 6 x SATA disks • 1 x SATA system disk • OS: • Scientific Linux 3.0.5 with the 2.4.21 kernel
Network Testing • Tests were performed with default kernel and application settings, then repeated after applying changes to maximise network throughput • The BDP for this link should be: • BDP = bandwidth (MB/s) * RTT (seconds) • BDP = (1 Gbit/s = 1 * 1024 / 8 MB/s) * (0.3 ms = 0.3 / 1000 s) • BDP = 0.0384 MB (39.32 KB)
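The same arithmetic in code, plus a generic POSIX illustration of sizing a socket receive buffer from the BDP. The 2x-BDP factor and the setsockopt call are a common rule of thumb, not the actual tuning applied on these servers (the kernel changes used in the tests are not listed here).

// Worked BDP arithmetic for a 1 Gbit/s link with 0.3 ms RTT.
#include <cstdio>
#include <sys/socket.h>

int main()
{
    const double bandwidth_MBs = 1.0 * 1024.0 / 8.0;  // 1 Gbit/s = 128 MB/s
    const double rtt_s         = 0.3 / 1000.0;        // 0.3 ms RTT
    const double bdp_MB        = bandwidth_MBs * rtt_s;

    std::printf("BDP = %.4f MB (%.2f KB)\n", bdp_MB, bdp_MB * 1024.0);
    // prints: BDP = 0.0384 MB (39.32 KB)

    // Illustration only: request a receive buffer of ~2x BDP so the
    // advertised window never limits throughput on this path.
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int rcvbuf = static_cast<int>(2.0 * bdp_MB * 1024.0 * 1024.0);
    setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));
    return 0;
}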
What next? • The basic network tests and the file transfer tests need to be re-run once the UKLight link between Lancaster and Edinburgh is fully functional • Integration of UDT into a functional GridFTP server and client • Deployment of the modified software to test LCG sites.
Lancaster <-> RAL Link • T1-T2 transfer testing • Avoids bottlenecks induced by the production network: • Firewall @ RAL • Internal LAN traffic • Tested using: • Command-line srmcp in a shell script • FTS-controlled transfers
Achieved • Peak of 948 Mbps • Transferred: • 8 TB in 24 hours - 800+ Mbps aggregate rate • 36 TB in 1 week - 500+ Mbps aggregate rate • Over 800 Mbps while transfers ran, but 0 Mbps during downtimes remains a problem • Parallel file transfers increase the rate • Better utilisation of bandwidth • Staggered initialisation of transfers reduces the overhead from the start-up/cessation of individual transfers: rate increased from 150 Mbps to 900 Mbps (see the sketch below) • 2% (18 Mbps) reverse traffic flow for a 900 Mbps transfer
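A sketch of the staggered-start idea: each worker launches its srmcp process a fixed delay after the previous one, so the per-transfer start-up and tear-down phases do not coincide and the link stays full. The SURLs are hypothetical, srmcp is assumed to be on PATH, and the counts/delays are illustrative rather than the values used in the tests.

// Staggered parallel srmcp transfers (illustrative sketch).
#include <chrono>
#include <cstdlib>
#include <string>
#include <thread>
#include <vector>

int main()
{
    const int n_transfers  = 8;    // concurrent srmcp processes (illustrative)
    const int stagger_secs = 10;   // delay between successive starts (illustrative)

    std::vector<std::thread> workers;
    for (int i = 0; i < n_transfers; ++i) {
        workers.emplace_back([i, stagger_secs] {
            // Stagger this transfer's start relative to the previous one.
            std::this_thread::sleep_for(std::chrono::seconds(i * stagger_secs));
            // Hypothetical source/destination SURLs, for illustration only.
            std::string cmd =
                "srmcp srm://srm.lancs.example/data/file" + std::to_string(i) +
                " srm://srm.ral.example/data/file" + std::to_string(i);
            std::system(cmd.c_str());
        });
    }
    for (auto& t : workers) t.join();
    return 0;
}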
FTS transfers not yet as successful as srmcp-only transfers • Greater overheads? • More optimisation needed • A single FTS file transfer gives 150 Mbps • Same as srmcp • Concurrent FTS file transfers scale at a lower rate than with srmcp • All single-stream transfers • FTS tests so far used a single source file • srmcp was used with multiple source files • Rate varies depending on direction • Possibly explained by differences in the dCache setup: • VO dependency • Kernel settings • Disk I/O limitations • SRM pool load balancing • To be investigated • File size affects the transfer rate • Single-stream rate varies from 150 to 180 Mbps as file size increases from 1 to 10 GB
Lancaster <-> SARA Link • Link not yet active • Tests similar to the Lancaster-RAL and Lancaster-Edinburgh tests: • Bulk file transfers • UDT protocol testing • Study of the effect of an international/extended link length • SARA storage capacity is underused; RAL capacity is currently too small for UK simulation storage • Also, SARA to test the ATLAS Tier1 fallback scenario (FTS catalogues etc.) • Are we capable of connecting to an alternate Tier1?
Lancaster <-> Manchester Link • Intra-Tier2 site testing • "Homogeneous Distributed Tier2" • dCache head node at Lancaster, pool nodes at both Lancaster and Manchester • Test transfers to/from RAL • Test of job submission to close CE/WNs • Possible testing of xrootd within dCache
www.eslea.uklight.ac.uk • Connecting to UKLight • Documents