====== The EVN Software Correlator at JIVE (SFXC) ======
  
The new EVN/JIVE software correlator was installed in February/March 2010 and took over all correlation in December 2012.  The original cluster has been extended several times and currently handles approximately 20 stations at 2 Gbit/s in real-time e-VLBI mode, and many more for traditional disk-based VLBI.  Here's some documentation on how this amazing machine works and was put together.
  
=== Usage ===
  
A very basic User's Manual for SFXC can be found [[sfxc-guide|here]].

We kindly request users of SFXC to reference our paper that describes its algorithms and implementation:
[[http://adsabs.harvard.edu/abs/2015ExA....39..259K|The SFXC software correlator for very long baseline interferometry: algorithms and implementation, A. Keimpema et al., Experimental Astronomy, Volume 39, Issue 2, pp. 259-279]].
  
=== SFXC software installation ===
  
The SFXC software correlator can be distributed under the terms of the General Public License (GPL).
We provide read-only access to the SFXC SVN code repository at [[https://svn.astron.nl/sfxc]].  The current production release is taken from the stable-5.1 branch, which can be checked out using:

  svn checkout https://svn.astron.nl/sfxc/branches/stable-5.1
      
In principle this branch will only receive bug fixes.  Development of new features happens on the trunk, which can be checked out using:
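
Assuming the standard SVN layout, with ''trunk'' alongside the ''branches/'' directory used above, that would be:

  svn checkout https://svn.astron.nl/sfxc/trunk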
If you are building on a 64-bit (x86_64) system, you will also need:
  
  * g++-multilib
  * gfortran-multilib
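
On Debian-derived systems such as the Ubuntu release mentioned below, both are available as packages and can be installed with, for example:

  sudo apt-get install g++-multilib gfortran-multilib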
  
=== GUI tools ===
  
SFXC comes with a couple of GUI tools to visualize the correlation results.  These tools need the Python VEX parser module, which can be found in the ''vex/'' top-level subdirectory.  This module uses a standard Python distutils setup.py, which means something like:

  cd vex
  python setup.py build
  python setup.py install

should be sufficient.  The last command will probably require root privileges; setup.py offers a couple of alternative installation methods that avoid this.  More information on the VEX parser is provided in ''vex/README''.
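
One such alternative is the standard distutils per-user installation, which puts the module under ''~/.local'' and needs no root access:

  python setup.py install --user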

The GUI itself needs the Python Lex-Yacc (PLY) and PyQwt packages:
  
== Ubuntu 12.04 LTS ==
  
  * python-ply
  * python-qwt5-qt4
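
These can be installed with, for example:

  sudo apt-get install python-ply python-qwt5-qt4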
  
== Scientific Linux ==
  
No Python Lex-Yacc and PyQwt packages are provided by this Linux distribution.  Sorry, you're on your own!
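
If pip is available, one possible workaround for the Lex-Yacc half (an assumption, not a distribution package) is to pull PLY from PyPI; PyQwt would still have to be built from its own sources:

  pip install ply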

=== Post-processing software ===

To convert the SFXC correlator output into FITS-IDI, additional tools are needed.  Information on how to obtain and build these tools is available at [[https://code.jive.eu/verkout/jive-casa]].
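
Assuming the repository at that address can be cloned directly with git, obtaining the source would look like:

  git clone https://code.jive.eu/verkout/jive-casa.git

See the repository's own documentation for build instructions.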
  
  
=== Cluster Description ===
  
The cluster currently consists of eleven Transtec Calleo 642 servers, each containing four dual-CPU nodes with a mix of quad-core and octa-core Xeons, for a grand total of 512 cores.  The nodes are interconnected by QDR Infiniband (40 Gb/s) and are also connected to a dedicated ethernet switch with dual 1 Gb/s ethernet links or a single 10 Gb/s ethernet link per node.  The 23 Mark5s at JIVE are connected to the same network at 10 Gb/s in order to play back diskpacks for correlation.  The 5 FlexBuffs are integrated into the cluster as well and use the same QDR Infiniband network as the nodes.  There is a 36-port Infiniband switch, another 24-port Infiniband switch, and a head node for central administration and NFS-exported home directories.
  
== Cluster Nodes ==
  * 24 GB DDR-3 memory
  * Two 1 TB disks (Seagate Barracuda ES.2 SATA-2 7200rpm)
  * Dual 1 Gb/s Ethernet (Intel 82576)
  * Mellanox ConnectX QDR Infiniband (40 Gb/s)
  * IPMI 2.0 management
  * 24 GB DDR-3 memory
  * Two 2 TB disks (Seagate Constellation ES SATA-2 7200rpm)
  * Dual 1 Gb/s Ethernet (Intel 82574L)
  * Mellanox ConnectX QDR Infiniband (40 Gb/s)
  * IPMI 2.0 management
  
  * Dual Intel E5-2670 octa-core Xeon CPUs (2.60 GHz, 20 MB cache)
  * 64 GB DDR-3 memory
  * One 60 GB SSD (Intel 520 Series)
  * Dual 1 Gb/s Ethernet (Intel I350)
  * Mellanox ConnectX QDR Infiniband (40 Gb/s)
  * IPMI 2.0 management

  * Dual Intel E5-2630 v3 octa-core Xeon CPUs (2.40 GHz, 20 MB cache)
  * 64 GB DDR-3 memory
  * One 60 GB SSD (Intel 520 Series)
  * Dual 10 Gb/s Ethernet (Intel X540-AT2)
  * QLogic IBA7322 QDR Infiniband (40 Gb/s)
  * IPMI 2.0 management

== Output Node ==

  * Dual Intel E5-2630 v2 hexa-core Xeon CPUs (2.60 GHz, 15 MB cache)
  * 32 GB DDR-3 memory
  * Four 3 TB disks (Seagate Constellation ES.3 SATA 6Gb/s 7200rpm)
  * Dual 10 Gb/s Ethernet (Intel X540-AT2)
  * QLogic IBA7322 QDR Infiniband (40 Gb/s)
  * IPMI 2.0 management
  
== Head Node ==
  * 2x J8712A 875W power supply
There is an additional 48-port Allied Telesis switch for connecting the IPMI ports.\\
  
  