====== The EVN Software Correlator at JIVE (SFXC) ======
  
The new EVN/JIVE software correlator was installed in February/March 2010 and took over all correlation in December 2012.  The original cluster has been extended several times and currently handles approximately 20 stations at 2 Gbit/s in real-time e-VLBI mode, and many more for traditional disk-based VLBI.  Here's some documentation on how this amazing machine works and how it was put together.
  
=== Usage ===
  
A very basic User's Manual for SFXC can be found [[sfxc-guide|here]].

We kindly request users of SFXC to reference our paper describing its algorithms and implementation: [[http://adsabs.harvard.edu/abs/2015ExA....39..259K|The SFXC software correlator for very long baseline interferometry: algorithms and implementation, A. Keimpema et al., Experimental Astronomy, Volume 39, Issue 2, pp. 259-279]].
  
=== SFXC software installation ===
  
The SFXC software correlator can be distributed under the terms of the General Public License (GPL).
We provide read-only access to the SFXC SVN code repository at [[https://svn.astron.nl/sfxc]].  The current production release is taken from the stable-5 branch, which can be checked out using:
  
  svn checkout https://svn.astron.nl/sfxc/branches/stable-5.1
      
In principle this branch will only receive bug fixes.  Development of new features happens on the trunk, which can be checked out using:
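The URL below is a sketch that assumes the repository follows the standard SVN layout, with the development line living under ''trunk'' next to the ''branches'' directory shown above; check the listing at [[https://svn.astron.nl/sfxc]] if in doubt:

  # assumed trunk location, standard SVN layout
  svn checkout https://svn.astron.nl/sfxc/trunk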
=== Post-processing software ===
  
To convert the SFXC correlator output into FITS-IDI, additional tools are needed.  Information on how to obtain and build these tools is available at [[https://code.jive.eu/verkout/jive-casa]].
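As a starting point, and assuming the link above points to a git repository (the actual build instructions live in that repository's own documentation), obtaining the sources would look something like:

  # assumes code.jive.eu serves the tools over git
  git clone https://code.jive.eu/verkout/jive-casa.git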
  
  
=== Cluster Description ===
  
The cluster currently consists of eleven Transtec Calleo 642 servers, each containing four dual-CPU nodes with a mix of quad-core and octa-core CPUs, for a grand total of 512 cores. The nodes are interconnected by QDR Infiniband (40 Gb/s) and are also connected to a dedicated Ethernet switch with dual 1 Gb/s Ethernet links or a single 10 Gb/s Ethernet link per node. The 23 Mark5s at JIVE are connected to the same network at 10 Gb/s in order to play back diskpacks for correlation.  The five FlexBuffs are integrated into the cluster as well and use the same QDR Infiniband network as the nodes. There is a 36-port Infiniband switch, another 24-port Infiniband switch, and a head-node for central administration and NFS-exported home directories.
  
== Cluster Nodes ==
  * 24 GB DDR-3 memory
  * Two 1 TB disks (Seagate Barracuda ES.2 SATA-2 7200rpm)
  * Dual 1 Gb/s Ethernet (Intel 82576)
  * Mellanox ConnectX QDR Infiniband (40 Gb/s)
  * IPMI 2.0 management

  * 24 GB DDR-3 memory
  * Two 2 TB disks (Seagate Constellation ES SATA-2 7200rpm)
  * Dual 1 Gb/s Ethernet (Intel 82574L)
  * Mellanox ConnectX QDR Infiniband (40 Gb/s)
  * IPMI 2.0 management

  * 64 GB DDR-3 memory
  * One 60 GB SSD (Intel 520 Series)
  * Dual 1 Gb/s Ethernet (Intel I350)
  * Mellanox ConnectX QDR Infiniband (40 Gb/s)
  * IPMI 2.0 management

  * Dual Intel E5-2630 v3 octa-core Xeon CPUs (2.40 GHz, 20 MB cache)
  * 64 GB DDR-3 memory
  * One 60 GB SSD (Intel 520 Series)
  * Dual 10 Gb/s Ethernet (Intel X540-AT2)
  * QLogic IBA7322 QDR Infiniband (40 Gb/s)
  * IPMI 2.0 management
  
  * 2x J8712A 875W power supply
There is an additional 48-port Allied Telesis switch for connecting the IPMI ports.\\
  
  