Grid
Operation
Checklists:
Op meetings:
Major conferences:
CRRB: Oct 2016-2019
CHEP: 2021 (10-14 May), 2019 (4-8 Nov), 2018 (9-13 Jul), 2016 (8-14 Oct)
dCache workshop
Timeline
2023 Jun: Started transition from TWAREN/Internet2 (PacWave) to CERN/LHCONE; dual-stack started working.
2023 Mar: Joined OSG for IGWN computing via the N-Cloud virtual cluster (~500 vcores; thanks to CNSu).
2022 Oct: Several old ASUS nodes and 5 EPYC nodes installed in K4. IPv6 deadline for T2.
2022 Mar: 1.2PB disk nodes in Taichung (Thanks to Spiraea).
2021 Nov: NTU storage installed (K4); storage relocation (P4-south->L04); expanded C6 WNs.
2021 Jun: Transitioned to Rucio (CERN) and the ARGO monitoring service (EGI) with ARC 6.12.
2020 Feb: C7 cluster online; F5 decommissioned but its storage kept in Tainan.
2019 Jan: IBM1350A decommissioned; migrated to Formosa 5 and ARC+Condor CE.
2017 Feb: NCU 240TB disk nodes installed
2016 Jan: WLCG CMS Tier-2 MoU signed
2015 Aug: MoST project (~800k+300k for two years); NTU 512TB disk array installed.
2015 Feb-Jul: Phase-II init project (480k) w/ 512 cores. Joined Belle-2 MC campaign.
2014 Nov-2015 Jan: Phase-I init project (50k) w/ 60 cores @ IBM1350A.
2013 Mar: Initial request from CMS community
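Two of the timeline items above concern the IPv6 deadline for T2 sites and the move to dual-stack operation. A minimal sketch of how dual-stack readiness of a service endpoint can be checked, by resolving both A (IPv4) and AAAA (IPv6) records (the hostname in the usage comment is illustrative only, not a real site endpoint):

```python
import socket

def dual_stack_addresses(host, port=443):
    """Return the IPv4 and IPv6 addresses a host resolves to.

    A dual-stack service (e.g. a storage or CE endpoint) should
    publish both A and AAAA records for its hostname.
    """
    addrs = {"ipv4": set(), "ipv6": set()}
    for family, key in ((socket.AF_INET, "ipv4"), (socket.AF_INET6, "ipv6")):
        try:
            for info in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
                # info[4][0] is the resolved address string for this family
                addrs[key].add(info[4][0])
        except socket.gaierror:
            pass  # no records for this address family
    return {k: sorted(v) for k, v in addrs.items()}

# Example (hypothetical hostname):
# dual_stack_addresses("se01.grid.example.org")
```

An empty `ipv6` list for a production endpoint would indicate the host is still IPv4-only and not yet compliant with the dual-stack requirement.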
References
WLCG networks: LHCOPN and LHCONE from INFN CCR Workshop - 27 May 2021
SONIC test updates by William Patrick McCormack at the CMS production meeting
About Grid Computing
[for NCHC webpage]
Grid computing is a model that integrates heterogeneous, geographically distributed resources. By abstracting and standardizing the interfaces between sites, it makes sharing computing resources and scientific data far easier, and it has become an indispensable high-throughput computing environment for many international research projects. Around 2000, the Worldwide LHC Computing Grid (WLCG) took shape to handle data from the Large Hadron Collider (LHC) experiments, linking major computing centers over the Internet into a single intercontinental scientific facility. Its hundreds of computing centers across dozens of countries [1] repeatedly process the enormous volumes of collision data, searching for the tiny pieces of evidence that push the boundary of knowledge. Since 2005, Academia Sinica has operated the WLCG Tier-1 and regional operation center for the Asia-Pacific region. With the support of the high-energy physics groups at NTU and NCU, NCHC has also operated a CMS Tier-2 center since January 2016, providing about 0.5% of the CMS Tier-2 computing and storage requirements.
Over the last decade, the experience of the high-energy physics grid has spread to other disciplines facing similarly daunting data challenges, such as bioinformatics, materials informatics, and astronomy. Scientific research works in synergy with the development of ICT, each side providing the other with application domains, solutions, and technologies. To meet increasingly stringent requirements from applications, the grid continually improves its flexibility and overall efficiency by converging with the fast-growing HPC and cloud arenas, carrying grid workloads on existing resources, scheduling idle capacity, and raising overall utilization. The constantly evolving grid will continue to play an important role in research projects at home and abroad.
[1] Report from the WLCG Technology Evolution Group in Data Management and Storage Management
[2] See this discussion about Jack Dongarra's viewpoint on the future of HPC and the cloud.